text stringlengths 1.23k 293k | tokens float64 290 66.5k | created stringdate 1-01-01 00:00:00 2024-12-01 00:00:00 | fields listlengths 1 6 |
|---|---|---|---|
Improved Dielectrically Modulated Quad Gate Schottky Barrier MOSFET Biosensor
A novel Schottky barrier MOSFET with quad gate and with source engineering has been proposed in this work. A high-κ dielectric is used at the source side of the channel, while SiO2 is used at the drain side of the channel. To improve the carrier mobility, a SiGe pocket region is created at the source side of the channel. Physical and electrical characteristics of the proposed device are compared with conventional double gate Schottky barrier MOSFET. It has been observed that the proposed device exhibits better performance, with a higher ION/IOFF ratio and lower subthreshold slope. The high-κ dielectric, along with the SiGe pocket region, improves tunneling probability, while aluminum, along with SiO2 at the drain side, broadens the drain/channel Schottky barrier and reduces the hole tunneling probability, resulting in a reduced OFF-state current. Further, the proposed device is used as a biosensor to detect both the charged and neutral biomolecules. Biosensors are made by creating a nanocavity in the dielectric region near the source end of the channel to capture biomolecules. Biomolecules such as streptavidin, biotin, APTES, cellulose and DNA have unique dielectric constants, which modulates the electrical parameters of the device. Different electrical parameters, viz., the electric field, surface potential and drain current, are analyzed for each biomolecule. It has been observed that drain current increases with the dielectric constant of the biomolecules. Furthermore, the sensitivity and selectivity of the proposed biosensors is better than that of conventional biosensors made using double gate Schottky barrier MOSFETs. Sensitivity is almost twice that of a conventional sensor, while selectivity is six to twelve times higher than a conventional one.
Introduction
Accurate measurement of important physiological parameters facilitates the timely discovery of potential diseases that deteriorate the health of a patient. The necessity for a early and precise identification of ailments and other vital examinations of living organisms produced a growing need for economic, highly selective and sensitive biosensors. A biosensor is a device which transforms the biological properties of biomolecules into corresponding electrical characteristics. Factors such as inexpensive fabrication, quick response time, congeniality with modern state-of-the-art systems, smaller size and labelfree detection give an edge to Field Effect Transistor (FET) based biosensors among the other semiconductor-based biosensors. Presently, FET biosensors are principally used in various industries such as food and beverage, medicines, agriculture and environmental monitoring. The FET-based biosensors work on the principle of modulating the electrical parameters of the device with the dielectric constant of the biomolecules in the cavity. The existence or nonexistence of the target biomolecule in the cavity modifies the dielectric constant of the dielectric where the cavity has been created, which changes the drain current of the device [1][2][3]. Though the dielectrically modulated tunnel-FET (TFET) based sensor has gathered the focus of researchers, as it reduces short-channel effect, TFET-based sensors suffer from high thermal budget requirements for source-drain formation and random dopant fluctuation arising from the difficulty in achieving the abrupt doping profile at source/drain-channel junctions [4]. Alternatively, Schottky barrier (SB) MOSFETs are considered as a prospective contender for high-performance CMOS ICs, as it is easy to form low-resistivity ultra-shallow junctions using metal source/drain instead of doped source/drain. Use of metal source/drain contact holds the advantage, as it commendably minimizes the issue of higher S/D series resistance and lessens the severe limitations imposed on conventionally implanted S/D. Further, intrinsic Schottky potential barrier of SB-MOSFETs results in greater control of OFF-state leakage current, and the subthreshold slope (SS) of SB-MOSFET has a lower limit of 60 mV/dec at room temperature [5][6][7][8][9][10]. Features such as low thermal budget requirements, higher invulnerability to short-channel effects, low source/drain (S/D) parasitic resistances, sub-10 nm gate length scalability and simple fabrication steps make Schottky barrier MOSFETs (SB-MOSFETs) a suitable FET biosensor for the detection of different biomolecules [3,4]. Nevertheless, SB-MOSFETs have many advantages compared to conventional MOSFETs. The intrinsic Schottky barrier between metal S/D and the semiconductor results in lower drive current [11][12][13]. Many SB devices, like dual metal gate, source/drain pocket doping, work function engineering, and plasma-based structures have been suggested in recent years to resolve the difficulties of SB-MOSFET, including ambipolar conduction and lower drive current [14][15][16][17][18][19][20]. Sumit Kale et al. employed dual-material S/D SB MOS with erbium silicide as the main S/D material and hafnium metal as the S/D extension material to suppress ambipolar leakage current in the SB-MOSFET [14]. Highly doped, dopant-segregated (DS) layers, which modulate the Schottky barrier (SB) height and width for improving the drive current of conventional SB-MOSFET, have been explored [15,16]. However, DS SB-MOSFET suffers from random dopant fluctuations (RDF) and increased thermal budget. Source engineered (SE) SB-MOSFET, using the charge-plasma concept to modify the SB width to eliminate RDF, has been studied. However, SE SBMOS suffer from ambipolar leakage current even for negative gate bias [17,18]. X Liu et al. have proposed the novel high-SB, bidirectional tunnel field effect transistor, which results in reduced thermionic emission and robust band-to-band (BTBT) forward tunneling current [10]. Sangeeta Singh et al. investigated the charge-plasma SB tunnel FET (CP-SB-TFET), and this study reveals that a pocket at both the drain and source end results in reduced ambipolar current, DIBL, and improved drive current [19]. Sumit Kale et al. have demonstrated that dopant segregation (DS) at the source-channel junction aids to increase tunneling area, which results in improved device performance with high ON current [20]. A ferroelectric SB tunnel FET (Fe SB-TFET) with a highly doped pocket at the source/drain and channel interface and gate-drain underlap reduce tunneling barrier width at the source side SB, resulting in improved device performance with low subthreshold swing (SS), reduced ambipolar current and high I ON /I OFF [6]. Investigation of temperature's effect on reliability issues of ferroelectric DS SB TFET reveals that the presence of a ferroelectric layer and the resulting negative capacitance effect increases the ON current, achieves highest I ON /I OFF ratio and reduces the SS to 23 mV/dec at 300 K [21]. Silicon on insulator SB-MOSFET (SOI SB-MOSFET) with source extension (SE) and with source drain extension (SDE) significantly reduces drain-induced barrier tunneling and produces higher I ON /I OFF and lower subthreshold swing (SS) than SOI SB-MOSFET [22].
The motivation of this work is to improve the performance of SB-MOSFET using structural modification and with source engineering. With this objective, a novel SB-MOSFET was made with quad gate structure and with SiGe at the source side of the channel. In this work, novel SB-MOSFET, named as quad gate SB-MOSFET, has been designed using TCAD, and its performance is analyzed using different electrical parameters such as ON current, I ON /I OFF and subthreshold slope (SS). With this novel quad gate structure and source engineering, both I ON /I OFF ratio and subthreshold slope have been improved. To substantiate, the performance improvement of the proposed quad gate SB-MOSFET's electrical parameters is compared to the double gate SB-MOSFET. In this work, double gate SB-MOSFET is referred to as the conventional device as it has already been reported in the literature [23,24]. Further, the proposed device is used as a biosensor for detecting different biomolecules such as DNA, cellulose, APTES, biotin and streptavidin. This paper is organized as follows. The device structure and simulation approach are given in Section II, and simulation results of the proposed device and its comparison to the conventional device are presented in Section III. Application of the proposed device as a biosensor is given in Section IV, and the conclusion and future work are given in Section V.
Device Structure and Simulation Strategy
A 2D schematic cross section of the novel SB-MOSFET and conventional double gate SB-MOSFET are shown in Figures 1 and 2, respectively. In both the devices, source is a heavily doped p-region with doping concentration of 10 20 cm −3 and drain is n + doped with a concentration of 10 18 cm −3 . The channel is p-type silicon with doping concentration of 10 15 cm −3 . Lower doping in the channel region is preferred, as it improves the carrier mobility, resulting in higher drain current. With the objective of designing a novel structure, two gate electrodes, with one gate dielectric at the source side and another at the drain side of the channel, have been used in this work. Both HfO 2 and SiO 2 are used as the gate dielectrics. The gate electrode, along with a high-κ dielectric (HfO 2 ) at the source side of the channel, produces a higher electric field and enhances the tunneling rate, resulting in a higher ON-state current. Use of high-κ gate dielectric at the source side increases the internal electric field, and the high dielectric constant of HfO 2 increases gate capacitance, resulting in a higher I ON /I OFF ratio of the proposed device. Further, high-κ dielectric reduces the OFF-state leakage current due to direct tunneling, as it enables the use of a thicker gate oxide for the same gate capacitance, thereby reducing the power dissipation. The gate electrode at the drain side, along with SiO 2 as gate dielectric, broadens the barrier at the drain/channel junction and prevents carrier tunneling, thereby suppressing the ambipolar current [24,25]. Aluminum with a work function of 4.1 eV is used as the metal contact. Aluminum at the source and drain, along with silicon substrate, form a Schottky contact. In the proposed device, the gate is not continuous throughout the channel and is present only at the source and drain side of the channel. The gate at the source side improves the tunneling, while the gate at the drain side reduces the leakage current. As there are four gate contacts, two at the top side and two at the bottom side of the channel, the name "quad gate" was given to the proposed SB-MOSFET. In this work, the gate structure is novel, i.e., the gate is present only at the source side and drain side of the channel. Further, to improve the carrier mobility, SiGe is introduced at the source side of the channel. Use of SiGe at the source side, and the resulting lower Schottky barrier, enhance the injection of the electron into the channel, thereby improving current drive capability. In contrast, a higher Schottky barrier for holes effectively suppresses hole injection into the silicon channel, thereby preventing the flow of holes in the OFF-state. Further, SiGe improves electron mobility by straining the crystal lattice, resulting in higher drive currents [26,27]. Silvaco TCAD is used for the design and simulation of the proposed device. The Universal Schottky Tunneling (UST) model captures tunneling close to the source channel junction while mobility models, viz., concentration mobility (CONMOB) and field dependent mobility (FLDMOB), have been used to capture different types of mobility. Band-gap narrowing models take into account the band-gap narrowing due to high doping concentrations of the source and drain while the Shockley-Read-Hall (SRH) model captures the effect of thermal generation leakage currents. Further, the transport mechanism in the device is simulated by the drift-diffusion model. Table 1 shows the parameters of different regions of both the proposed and conventional devices considered for simulation.
Result and Discussion
Different electrical characteristics, viz., surface potential, electric field, band energy and transfer characteristics of both the proposed and conventional DG SB-MOSFET, are analyzed and the corresponding results are presented in Figures 3-6. It should be noted that in Figures 3-5, the left end of the plot represents the drain side, while the right end corresponds to the source side of the channel. Surface potential variation along the channel length of both the proposed and conventional SB-MOSFET is presented in Figure 3. It has been observed that surface potential, a vital factor in estimating the DC property of thinfilm transistor, is better in the proposed quad gate SB-MOSFET than in the conventional SB-MOSFET. The surface potential is almost 50% higher for the proposed device than conventional device. The surface potential is 1.15 V for the proposed device while the value is 0.75 V for the conventional device at 0.0215 µm for the bias voltage of V gs = 0.5 V and V ds = 0.8 V. Figure 4 presents variation of the electric field across the channel of both devices. It has been observed that the proposed quad gate SB-MOSFET produces a higher electric field at both the source and drain side of the channel than the conventional SB-MOSFET. The electric field is approximately three times higher in the quad gate SB-MOSFET than the conventional device. The electric field is 5.7 MV/cm in the proposed device while the value is 1.9 MV/cm in the conventional device. This higher electric field results in higher tunneling current, as BTBT generation rate is a strong function of the electric field. It can also be observed that in the proposed device, the electric field at the source/channel junction is higher than that value at the drain/channel junction. This could be attributed to the effect of the different gate electrodes at the source and drain side of the channel. HfO 2 at the source side of the channel produces a higher electric field, which is useful for achieving a higher tunneling rate. The lower electric field resulting from the use of SiO 2 as a gate dielectric at the drain side of the channel is useful for inhibiting tunneling of charges in the ambipolar state.
Different electrical characteristics, viz., surface potential, electric field and transfer characteristics of both the proposed and conventional DG SB analyzed and the corresponding results are presented in Figures 3-6. It sh that in Figures 3-5, the left end of the plot represents the drain side, whil corresponds to the source side of the channel. Surface potential varia channel length of both the proposed and conventional SB-MOSFET is prese 3. It has been observed that surface potential, a vital factor in estimating th of thin-film transistor, is better in the proposed quad gate SB-MOSFET th ventional SB-MOSFET. The surface potential is almost 50% higher for the vice than conventional device. The surface potential is 1.15 V for the pr while the value is 0.75 V for the conventional device at 0.0215 μm for the Vgs = 0.5 V and Vds = 0.8 V. Figure 4 presents variation of the electric f channel of both devices. It has been observed that the proposed quad gat produces a higher electric field at both the source and drain side of the ch conventional SB-MOSFET. The electric field is approximately three times quad gate SB-MOSFET than the conventional device. The electric field is the proposed device while the value is 1.9 MV/cm in the conventional devi electric field results in higher tunneling current, as BTBT generation ra function of the electric field. It can also be observed that in the propos electric field at the source/channel junction is higher than that value at the junction. This could be attributed to the effect of the different gate electrode and drain side of the channel. HfO2 at the source side of the channel pro electric field, which is useful for achieving a higher tunneling rate. The low resulting from the use of SiO2 as a gate dielectric at the drain side of the ch for inhibiting tunneling of charges in the ambipolar state. It could be observed that the energy barrier width is significantly reduced at source side of the channel at the position of 0.0225 µm for the proposed quad gate SB-MOSFET than for the conventional device. The reason for SB thinning at the source side of the channel could be attributed to the use of a high-κ gate dielectric combined with the SiGe pocket region at the source side of the channel. A thinner energy barrier results in higher tunneling of electrons through the channel. Figure 6 presents drain current versus gate voltage characteristics of the proposed quad gate SB-MOSFETs compared to the conventional DG SB-MOSFET. It has been observed that there is a slight decrease in the ON current of quad gate SB-MOSFET; however, the proposed device exhibits a much lower OFF-state current. The OFF-state current of the proposed device is more than five orders lower than the conventional device. A significantly lower OFF-state current in the proposed device could be attributed to two reasons. The presence of the SiGe pocket region at the source side of the channel acts as an additional barrier in the OFF-state, ensuing in low leakage current. The other reason is aluminum, along with SiO 2 at the drain side of the channel, broadens the drain/channel junction barrier, which results in a low tunneling probability of hole through the drain-side SB. It can also be observed that in the subthreshold region, there is a sharp decrease in drain current with respect to gate voltage in the proposed device compared to the conventional one, resulting in better subthreshold behavior of the proposed device. Electrical parameters of both the proposed and conventional DG SB-MOSFETs, calculated from the transfer characteristics, are compared and the results are given in Table 2. The proposed device exhibits superior performance with improved I ON /I OFF ratio and reduced SS. The I ON /I OFF ratio of the quad gate SB-MOSFET is five orders more than the conventional device while the subthreshold slope of quad SB-MOSFET is 25% lower than that of the conventional device. These results imply that the quad gate SB-MOSFET is suitable for future nano-scale ICs.
Applications
Biosensors have had a long journey, beginning from ion-sensitive FET [28] with a high sensitivity only to charged biomolecules like DNA, to present-day biosensors, which can detect neutral biomolecules like biotin and streptavidin. Biomolecules exist in different forms extending from nucleic acid, viruses, bacteria and proteins, and dimensions ranging from nm to µm. Knowledge of how these biomolecules function and their impact on several fields such as medicine, agriculture, and the food industry necessitated the early detection of biomolecules [29][30][31][32][33][34]. FET-based biosensors are realized by carving out a nanocavity at the source and/or drain end of the MOSFET. Existence of biomolecules in the nanocavity results in a change in the coupling between the gate and channel due to a change in dielectric constant of the gate oxide [34]. Further, this change in the dielectric constant results in SB thinning, resulting in higher tunneling current. Change in the drain current of the proposed quad gate SB-MOSFET biosensor can be used as an electrical parameter to detect the target biomolecules. The ability of the biosensor to detect the target biomolecules can be determined using the parameter of drain current sensitivity (S ID ) and is given by [7] where I Bio and I 0 represent the ON-state current in the presence and absence of the biomolecules in the nanocavity, respectively. Higher sensitivity implies a higher chance of detecting the target species. Few works on FET-based biosensors have been reported in the literature [1,7]. S.A. Hafiz et al. have proposed source-engineered SB-FET, using the charge-plasma concept for sensing biomolecules, and observed that SE SB-FET exhibit much higher sensing capability for both neutral and charged biomolecules [1]. The L-shaped SB-FET biosensor designed with Al and Cu as a dual-material gate and with HfO 2 as a gate dielectric exhibit better sensitivity at both low and high temperatures [7]. In this work, the proposed quad gate SB-MOSFET can be used as a biosensor for label-free detection of both charged and neutral biomolecules. The proposed device is converted into a biosensor by creating a nanocavity by etching gate oxide (HfO 2 ) near the source junction of the channel. The size of the nanocavity is 4 nm × 0.5 nm. The proposed biosensor can be used for detecting different biomolecules such as streptavidin, biotin, APTES, cellulose and DNA which have unique dielectric constants, as given in Table 3. Biomolecules are placed in the nanocavity near the source end of the channel, and the corresponding variation in different electrical parameters such as surface potential, electric field and drain current are observed. The presence of different biomolecules in the nanocavity is modeled as oxide having different dielectric constants, as shown in Figure 7. To mirror the influence of a charged biomolecule, a fixed-interface oxide charge is included in the dielectric layer. Figure 8 shows the electric field variation across the channel with different biomolecules in the nanocavity. It can be seen that the electric field near the source end of the channel is highest for DNA and lowest for streptavidin, which concludes that the electric field increases with dielectric constant of the biomolecules. This could be attributed to the decrease in width of the depletion region due to thinning of the Schottky barrier, resulting in a higher electric field near the source. where IBio and I0 represent the ON-state current in the presence and absence of the biomolecules in the nanocavity, respectively. Higher sensitivity implies a higher chance of detecting the target species. Few works on FET-based biosensors have been reported in the literature [1,7]. S.A. Hafiz et al. have proposed source-engineered SB-FET, using the charge-plasma concept for sensing biomolecules, and observed that SE SB-FET exhibit much higher sensing capability for both neutral and charged biomolecules [1]. The L-shaped SB-FET biosensor designed with Al and Cu as a dual-material gate and with HfO2 as a gate dielectric exhibit better sensitivity at both low and high temperatures [7]. In this work, the proposed quad gate SB-MOSFET can be used as a biosensor for label-free detection of both charged and neutral biomolecules. The proposed device is converted into a biosensor by creating a nanocavity by etching gate oxide (HfO2) near the source junction of the channel. The size of the nanocavity is 4 nm × 0.5 nm. The proposed biosensor can be used for detecting different biomolecules such as streptavidin, biotin, APTES, cellulose and DNA which have unique dielectric constants, as given in Table 3.
Biomolecules are placed in the nanocavity near the source end of the channel, and the corresponding variation in different electrical parameters such as surface potential, electric field and drain current are observed. The presence of different biomolecules in the nanocavity is modeled as oxide having different dielectric constants, as shown in Figure 7. To mirror the influence of a charged biomolecule, a fixed-interface oxide charge is included in the dielectric layer. Figure 8 shows the electric field variation across the channel with different biomolecules in the nanocavity. It can be seen that the electric field near the source end of the channel is highest for DNA and lowest for streptavidin, which concludes that the electric field increases with dielectric constant of the biomolecules. This could be attributed to the decrease in width of the depletion region due to thinning of the Schottky barrier, resulting in a higher electric field near the source. Figure 9 shows the surface potential variation across the channel length with different biomolecules. It could be observed that potential is higher for biomolecules that have a higher dielectric constant, which could be ascribed to an increase in the capaci- Figure 9 shows the surface potential variation across the channel length with different biomolecules. It could be observed that potential is higher for biomolecules that have a higher dielectric constant, which could be ascribed to an increase in the capacitance with the dielectric constant of biomolecules in the cavity.
icromachines 2023, 14, x Figure 9. Surface potential variation for the proposed biosensor with different biomolec It could be observed from Figure 10 that, for lower gate voltage, drain c creases with the dielectric constant, as DNA exhibits higher drain current wh tavidin produces lower drain current. An increase in the coupling capacitance dielectric constant results in higher charge concentration in the channel, resul increase in drain current with the dielectric constant of the biomolecules. Surfa tial variation along the length of the channel and transfer characteristics given 9 and 10 are almost matching for both the biomolecules streptavidin and biotin difference between the dielectric constants could be the reason behind almos responses to these two biomolecules. To demonstrate the advantage of the pro osensor for the detection of different biomolecules, conventional biosensors ar signed by creating a nanocavity near the source side of the channel in the diele the cross section and transfer characteristics of conventional biosensors are giv ures 11 and 12, respectively. It could be observed from Figure 10 that, for lower gate voltage, drain current increases with the dielectric constant, as DNA exhibits higher drain current while streptavidin produces lower drain current. An increase in the coupling capacitance with the dielectric constant results in higher charge concentration in the channel, resulting in an increase in drain current with the dielectric constant of the biomolecules. Surface potential variation along the length of the channel and transfer characteristics given in Figures 9 and 10 are almost matching for both the biomolecules streptavidin and biotin. A small difference between the dielectric constants could be the reason behind almost-identical responses to these two biomolecules. To demonstrate the advantage of the proposed biosensor for the detection of different biomolecules, conventional biosensors are also designed by creating a nanocavity near the source side of the channel in the dielectric. Both the cross section and transfer characteristics of conventional biosensors are given in Figures 11 and 12, respectively. Drain current sensitivity (SID) of both the conventional and proposed biosensors are calculated by using the Equation (1) at the gate to source voltage of 0.5 V and given in Figure 13. It can be observed that drain current sensitivity (SID) increases with the increase in the dielectric constant, and the change is better in the proposed device than the conventional sensor made from DG SB-MOSFET. This could be ascribed to enhanced SB thinning resulting in a higher ON-current of the proposed device. Further, the SID value of biomolecules such as APTES, cellulose and DNA of the proposed sensor is found to be It can be observed that drain current sensitivity (SID) increases with the i the dielectric constant, and the change is better in the proposed device than th It can be observed that drain current sensitivity (S ID ) increases with the increase in the dielectric constant, and the change is better in the proposed device than the conventional sensor made from DG SB-MOSFET. This could be ascribed to enhanced SB thinning resulting in a higher ON-current of the proposed device. Further, the S ID value of biomolecules such as APTES, cellulose and DNA of the proposed sensor is found to be twice that of streptavidin and biotin. The S ID value of DNA is 32.33 while the value of biotin is 15.67. This sensitivity analysis implies that the proposed biosensor outperforms the conventional sensor in the detection of all the five molecules and sensitivity is higher for biomolecules having a dielectric constant of more than 3.5. Selectivity is one of the vital parameters of the biosensor that determine how effectively the sensor detects the target biomolecule among the other biomolecules present in the cavity. Selectivity is calculated by taking the relative ratio of the drain current at different dielectric constant and is given by [31] ∆S = I ON (k = 3.57, 6.1, 8.7) − I ON (k = 2.63) Figure 14 presents the selectivity of biotin among the APTES, cellulose and DNA of both the proposed and conventional sensors. It could be observed that selectivity of the proposed sensor is six to twelve times higher than the conventional sensor based on the target biomolecules. biotin is 15.67. This sensitivity analysis implies that the proposed biosensor outperforms the conventional sensor in the detection of all the five molecules and sensitivity is higher for biomolecules having a dielectric constant of more than 3.5. Selectivity is one of the vital parameters of the biosensor that determine how effectively the sensor detects the target biomolecule among the other biomolecules present in the cavity. Selectivity is calculated by taking the relative ratio of the drain current at different dielectric constant and is given by [31] ∆ = = 3.57,6.1,8.7 − = 2.63 = 2.63 (2) Figure 14 presents the selectivity of biotin among the APTES, cellulose and DNA of both the proposed and conventional sensors. It could be observed that selectivity of the proposed sensor is six to twelve times higher than the conventional sensor based on the target biomolecules.
Conclusions
In this paper, the novel SB-MOSFET-based biosensor for detecting the changes in physiological parameters of living organism has been proposed. The proposed device consists of a quad gate with a SiGe pocket region near the source end of the channel. Use of high-κ gate dielectric at the source end of the channel produces a higher electric field and enhances tunneling probability. On the other hand, SiO2 as the gate dielectric at the drain end broadens SB at the drain/channel junction, resulting in a reduced OFF-state current. The proposed device is found to be suitable for nano biosensors for detecting various biomolecules as it exhibits better electrical characteristics with a higher ION-IOFF ratio and lower subthreshold slope. The ION-IOFF ratio of the proposed device is six orders higher than the conventional double gate SB-MOSFET, and the subthreshold slope is 25% lower than that of the conventional device. Biosensors, both the proposed and conven-
Conclusions
In this paper, the novel SB-MOSFET-based biosensor for detecting the changes in physiological parameters of living organism has been proposed. The proposed device consists of a quad gate with a SiGe pocket region near the source end of the channel. Use of high-κ gate dielectric at the source end of the channel produces a higher electric field and enhances tunneling probability. On the other hand, SiO 2 as the gate dielectric at the drain end broadens SB at the drain/channel junction, resulting in a reduced OFF-state current. The proposed device is found to be suitable for nano biosensors for detecting various biomolecules as it exhibits better electrical characteristics with a higher I ON -I OFF ratio and lower subthreshold slope. The I ON -I OFF ratio of the proposed device is six orders higher than the conventional double gate SB-MOSFET, and the subthreshold slope is 25% lower than that of the conventional device. Biosensors, both the proposed and conventional devices, are made by creating a nanocavity near the source end of the channel. It has been observed that drain current sensitivity increases with the dielectric constant of the biomolecules in the cavity. It is concluded that the proposed quad gate SB-MOSFET Biosensor has exceptional biosensing capability with higher sensitivity and selectivity than conventional bio sensors made from DG SB-MOSFET. In the future, further modification of the proposed device is required to distinguish biomolecules that have slightly different dielectric constants. | 7,138.6 | 2023-03-01T00:00:00.000 | [
"Engineering",
"Physics"
] |
Emergence of a Bose polaron in a small ring threaded by the Aharonov-Bohm flux
The model of a ring threaded by the Aharonov-Bohm flux underlies our understanding of a coupling between gauge potentials and matter. The typical formulation of the model is based upon a single particle picture, and should be extended when interactions with other particles become relevant. Here, we illustrate such an extension for a particle in an Aharonov-Bohm ring subject to interactions with a weakly interacting Bose gas. We show that the ground state of the system can be described using the Bose polaron concept -- a particle dressed by interactions with a bosonic environment. We connect the energy spectrum to the effective mass of the polaron, and demonstrate how to change currents in the system by tuning boson-particle interactions. Our results suggest the Aharonov-Bohm ring as a platform for studying coherence and few- to many-body crossover of quasi-particles that arise from an impurity immersed in a medium.
Introduction
In the idealized model of a ring threaded by the Aharonov-Bohm (AB) flux, a particle moves in a region with zero fields, and the presence of an electromagnetic potential manifests itself only in a minimal substitution −i∂/∂x → −i∂/∂x + Φ, where the position-independent parameter Φ determines the strength of the flux. This model provides insight into many physical phenomena. For example, it illustrates the significance of potentials in quantum mechanics [1], geometric phases [2], the Josephson effect and persistent currents [3,4]. Foundations of the AB physics are based upon a single-particle picture [5,6], which already has the power to explain some experiments qualitatively such as spectroscopy in semiconductor rings [7]. However, one-body studies do not take into account interactions with other particles, in particular, with the environment. Therefore, they should be extended for realistic systems. In this paper, we discuss such an extension assuming a one-dimensional bosonic environment.
Before we proceed, let us briefly review known fewbody physics in the AB ring. If all particles are identical, then the flux couples only to the total angular momentum of the system. It can change the global minimum of the energy leaving the internal [i.e., in relative coordinates] dynamics intact, see, e.g., [6,[8][9][10]. In short, there is no interplay between particle-particle interactions and the AB flux for identical particles. This conclusion holds true also for distinguishable [by spin or quasi-spin] particles with identical masses and AB fluxes. In this case the strength of the AB flux can however change the symmetry of the ground state, see, e.g., Pecci et al. [11]. For particles with non-identical charges and/or masses, such as electrons and holes [12,13], the internal structure of a one-dimensional system is coupled to the AB flux. At the two-body level, this coupling can modify the threshold for binding (which may preclude formation of excitons for weakly attractive potentials in one dimension [14]) or lead to formation of dark excitonic states [15]. Systems with more than two particles are less explored, to the best of our knowledge.
In this paper, we study one of the simplest twocomponent many-body models -a particle (impurity) coupled to the AB flux that interacts with a Bose gas. The system is motivated by recent cold-atom experiments on Bose polarons [16][17][18][19][20][21][22], and by theoretical and experimental progress in realizing ring-shape potentials and artificial gauge fields with neutral cold atoms. For reviews of these advances see [23][24][25]. Ring-shaped condensates with effective gauge potentials have so far not been engineered together with impurities. As we show below such a combination may lead to rich physics. Note that recent advances in engineering ring shaped potentials [26][27][28] and tunable gauge fields [29] suggest experiments with polaritons as other set-ups to test our results.
The focus of the paper is on 'dressing' the impurity -a typical question addressed in many-body physics -which determines properties of the system such as transport and 'magnetization'. As such, our results complement previous works that investigated small systems using fewbody methods and approaches.
One of the main findings of our work is that the system can be described using ideas developed for the onedimensional Bose-polaron problem [30][31][32][33][34][35][36][37][38][39]. This connection leads to a number of useful conclusions. First, previous studies of the Bose polaron contribute insight into properties of our system, and provide an intuitive interpretation of our results. This insight can be also useful for understanding numerical lattice simulations where electron-phonon interactions are taken into account, see, e.g., Monisha et al. [40]. Second, persistent currents can be an experimental measure of validity of the Bosepolaron concept in one dimension. In particular, they can be used to investigate phase coherence of the polaron across the AB ring -a necessary condition for the existence of persistent currents. Third, the AB ring provides a conceptual model for defining the effective mass in a finite-size system, allowing one to better understand a few-to many-body crossover of one-dimensional systems. In particular, our work paves the way for studying this crossover beyond the standard testbed -the groundstate energy [41].
Results and Discussion
System. We study a one-dimensional system of N bosons and a single impurity atom, see Fig. 1. The system is in a ring of length L, which corresponds to periodic boundary conditions. The position of the impurity (ith boson) is given by Ly (Lx i ); the mass of the impurity (a boson) is m (M ). We assume that only the impurity is coupled to the AB flux Φ/L. For neutral particles, Φ is not generated by a magnetic flux threading the ring. Instead, other techniques are used [23,42], e.g., stirring with a weak external potential with speed v, in which case Φ = mvL/h. Note that the more gen-eral case, which might be more suitable for experimental realization, where the artificial flux is coupled to both particle species can be easily incorporated in our model, see Suppl. Note 1 for the flux coupled to bosons.
The Hamiltonian in first quantization reads where h =h 2 2mL 2 (−i∂/∂y + Φ) 2 describes the impurity; for the bosons, we have The impurity-boson, V ib , and boson-boson, V bb , interactions are parameterized by delta-function potentials where c and g define the strength of interactions. For simplicity, we shall use the system of units in which h = M = 1. In the main part of the paper, a boson and the impurity have identical masses, m = M . [A mass imbalance does not change the main conclusions of our study, see Suppl. Note 2]. For a fixed value of N , dimensionless parameters that determine all physical properties are c/g and γ = gL/N . For our numerical simulations, we shall use γ = 0.2, which corresponds to a weakly-interacting Bose gas amenable to the mean-field treatment discussed below. We focus on c > 0 to avoid bound states [43][44][45] that are beyond the polaron physics. Note that the case with γ = 0.2 and N = 19 for Φ = 0 was considered in [46] providing us with a reference point to benchmark our numerical calculations.
In what follows, we shall use the Hamiltonian from Eq. (1) in our analysis. However, it is worthwhile noting that the parameter Φ can in principle be excluded from this Hamiltonian via a gauge transformation Ψ → e iΦy Ψ, where Ψ is the wave function. The effect of the flux is then incorporated in a 'twisted' boundary condition that demands that the wave function acquires a phase e i2πΦ after a full turn [47,48]. Such a condition implies that the energy spectrum must be a periodic function with period Φ/(2π) as shown in Fig. 1. Note that for general multi-component systems (e.g, strongly interacting Bose-Fermi mixtures) a smaller period of the ground-state energy is also possible, see, e.g., [10,11]. As we demonstrate below, this does not happen for an impurity in a weakly-interacting Bose gas whose low-energy spectrum resembles that of a single particle.
The Hamiltonian H with Φ = 0 corresponds to one of the most studied one-dimensional models [49][50][51][52]. Therefore, we can use the already known methods to tackle our problem with Φ ̸ = 0. We choose to work in the frame co-moving with the impurity (see below), where the mean-field approach (MFA) and flow equations (IM-SRG) provide powerful theoretical tools for our investigation (see Methods). These methods allow us to investigate the effect of the AB flux on the properties of the one-dimensional polaron problem beyond previous studies [53,54], which investigated relevant molecularcrystal models. In particular, we can define and study flux-independent properties of the Bose polaron (e.g., the effective mass) in a finite system.
Co-moving frame. The total momentum of the system is conserved since all interactions are translationinvariant. Therefore, we eliminate the impurity coordinate by writing the wave function as (cf. [55]) where is the Heaviside step function], and P/L is the total momentum. The transformation {y, x i } → {z i } can be seen as a coordinate-space analogue of the Lee-Low-Pines transformation [56]. The parameter P is quantized to fulfill the periodic boundary conditions: P = 2πn, where n is an integer. Note that the transformation to the co-moving frame has been already used to study the few-to many-body transition in the ground state of the Bose-polaron problem with Φ = 0 [35]. Here, we study this transition for nonvanishing P where a continuous parameter Φ provides a bridge between the discrete values of P . The Schrödinger equation in the co-moving frame reads as follows where E is the dimensionless energy of the system [to obtain the dimensionful energy one needs to multiply E byh 2 /(mL 2 )]. Note that Φ and P enter this equation together as a sum P = P + Φ. P is a continuous variable that can be seen as an effective total momentum that determines total currents in the system. This observation will be crucial for interpreting our results in terms of an effective one-body picture, see below. Note that Ψ * P solves the Schrödinger Eq. (4) with −P, which is a manifestation of time-reversal symmetry.
Energy spectrum. The energy spectrum for Φ = P = 0 and finite values of N was calculated in Volosniev et al. [35]. Therefore, in what follows we only calculate E(P, Φ) − E(P = 0, Φ = 0), where E(P, Φ) is the energy of the Hamiltonian for a given value of the total momentum, P , and the AB flux, Φ. E(P = 0, Φ = 0) approaches the ground-state energy of the Bose-polaron problem in the thermodynamic limit (N, L → ∞ with a fixed value of N/L), see also Suppl. Note 4. Due to the periodicity of the energy (cf. Fig. 1), it is enough to focus on fluxes −π ≤ Φ ≤ π. Furthermore, the energy spectrum is symmetric with respect to Φ → −Φ due to time-reversal symmetry. Therefore, in what follows we shall calculate the lowest-energy states for fixed values of P , so called Yrast states (cf. [6]), and currents only for 0 ≤ Φ ≤ π. This also fixes the values of the flux needed to observe our findings experimentally. ]. The data are obtained using the mean-field ansatz (solid curves, MFA) and the in-medium similarity renormalization group (crosses with errorbars, IM-SRG), black triangles show results from Yang et al. [46] for quantized momenta. The dotted curves show the energy of the non-interacting system.
Energy for Φ ̸ = 0. We illustrate the energies calculated with MFA and IM-SRG in Fig. 2. For P = 0 and |P |= 2π both methods agree reasonably well on the energy, demonstrating that the MFA is a useful analytical tool to describe the system. We observed a worse agreement for |P |≥ 2π. The failure of the mean-field approach is expected for high values of P as there are various ways to distribute momentum between the bosons and the impurity, see also Suppl. Note 3.
Let us give a few general remarks about Fig. 2. For Φ = ±π there is a level crossing between two Yrast states with P = 0 and |P |= 2π. It is a consequence of the rotational symmetry of the problem. If a defect is introduced into a system, then it will lead to an avoided crossing, see below where we discuss the role of defects. In Fig. 2, we also present the ground-state energy of a non-interacting impurity (c = 0), E = (P + Φ) 2 /2. We see that the solid curves are always below this value. The effect is more pronounced for stronger impurity-boson interactions -compare the left and right panels of Fig. 2. These features can be easily understood using the concept of a polaron and its effective mass.
Effective mass. For the thermodynamic limit with Φ = 0, one finds that the low-energy spectrum of the system is quadratic in the total momentum (see, e.g., [39,57] for one-dimensional Bose polarons): where we introduce an effective mass, m TD eff ; other definitions of the effective mass are discussed in Suppl. Note 5. Eq. (5) is a cornerstone of the polaron concept and effective one-body descriptions of mobile impurities. Note that for small systems, the limit in Eq. (5) should be re-defined since P is discrete.
The parameter Φ is continuous. The mean-field solution as well as time-reversal symmetry suggest that dΦ 2 for Φ → 0 with P = 0 and |P |= 2π. The parameters of the system are N = 19, γ = 0.2. The green curve shows the known result for P → 0 in the thermodynamic limit (TD. limit) [39,57]. The data in both panels are obtained using the mean-field approach. E(Φ, P = 0) is proportional to Φ 2 . By analogy to the Bose-polaron problem, we can define the effective mass of an impurity in a small AB ring This expression connects our problem to the body of knowledge developed by solving polaron problems. The connection allows one to make predictions about the behavior of an impurity in the AB ring. For example, the effective mass is an increasing function of c. Therefore, one reduces the current associated with the impurity by increasing c, see the discussion below. In addition, Eq. (6) demonstrates that the AB ring can be a physical testbed for studying the few-to many-body crossover in one-dimensional Bose-polaron problems. We illustrate this crossover for the effective mass in Fig. 3. For weak interactions (c/g = 0.05), we see that the effective mass converges to the thermodynamic limit quickly. This is not the case for strong interactions (c/g = 5), meaning that many bosons are needed to screen the impurity for large values of c/g. Although, our analysis suggests that the effective mass converges somewhat slower than the energy towards the thermodynamic limit (see also Suppl. Note 4), the basic mechanism is the same: A high compressibility of a weakly-interacting Bose gas requires a large number of bosons to screen a strongly interacting impurity. Note that the number of bosons needed for screening heavily depends on the parameter γ. In particular, in the limit γ → ∞, the system fermionizes and the impurity is screened by a handful of particles [41,58,59]. This observation highlights the fact that the few-to many-body crossover should be studied separately for fermions and weakly-interacting bosons.
Finally, we note that the effective mass computed with Eq. (6) describes only the Yrast curve with P = 0 well. To illustrate this, we calculate the second derivative of the energy in the limit Φ → 0 using the MFA. For a noninteracting impurity, this derivative is given by 1/m for all values of P . For an interacting impurity, this is not the case. The second derivative for P = 0 is by definition given by 1/m eff . The right panel of Fig. 3 shows that the effective mass increases for stronger impurity-boson repulsion in agreement with our expectations. The figure also shows that for the |P |= 2π state additional effects come into play and change the second derivative. The physical picture behind these effects will become more clear below, when we consider currents. The difference between 'effective masses' defined for P = 0 and |P |= 2π illustrates a shortcoming of the use of the quasiparticle picture for a small AB ring. However, even then, the polaron picture explains the qualitative features of the spectrum well.
Currents. The AB flux in our system induces currents that can be defined via the continuity equations for the impurity and the bosons in the laboratory frame: where tL 2 is time, ρ I (ρ B ) is the probability density of the impurity (bosonic) cloud. The (local) probability cur-rents are defined as The rotational symmetry implies that j I , j B , ρ I and ρ B are position-independent, allowing us to work with the integral quantities, e.g., ρ I = ρ I dy/(2π), which is more convenient. For example, using these quantities, it is easy to show that j I + N j B = P. Therefore, the total current -the current that corresponds to the total density ρ I + N ρ B -is given by P = P + Φ. Note that even though the AB flux is coupled only to the impurity, boson-impurity interactions also generate a current of bosons. We illustrate these currents for c/g = 1 and c/g = 5 in Fig. 4 (other parameters are N = 19, γ = 0.2). The increase in the boson-impurity interaction leads to an increase in the bosonic current. This observation is most easily explained using the Bose-polaron picture.
Using the Hellmann-Feynman theorem, it is straightforward to show that which coincides with the standard definition of the current in a one-body problem, see, e.g., [6]. [Note that this expression provides an indirect way for measuring currents by studying the energy landscape of the problem with RF spectroscopy (cf. Scazza et al. [60]).] For the polaron picture with P = 0, this leads to j I = Φ/m eff connecting the current (and transport properties) of the impurity to its effective mass. The bosonic current in the same approximation is given by The bosonic current generated by the AB flux follows the impurity, and leads to renormalization of its mass. We conclude that in the polaron picture the currents depend linearly on Φ, with the slope fully determined by the effective mass. The region of validity of this picture is determined by the boson-impurity interaction, see the dashed lines in panels a) and c) of Fig. (4). For c/g = 1 we observe that the polaron approximation, in which the energy is related to the AB flux via Eq. (6), is accurate for |P|≲ 2π, but for stronger interactions, c/g = 5, it is appropriate merely for |P|≲ π. For even stronger interactions, Eq. (6) is accurate only in the limit P → 0. Our interpretation here is that the coherent propagation of the impurity is not possible for strong boson-impurity interaction and large fluxes. Indeed, for c → ∞ an impurity can exchange its position with a boson in a coherent manner only at timescales given by 1/c. Thus strong (fast) impurity currents excite bosons. This leads to a nonlinear increase of the currents with Φ, see also [46] and Suppl. Note 3. To quantify these effects, it is convenient to rely on the second derivative of the energy (effective mass), which is larger for |P |= 2π, see Fig. 3 (note that the bosonic current is related to the impurity current via N j B = P + Φ − j I , i.e., one can reach the same conclusion by considering N j B instead of j I ). The IM-SRG results show a somewhat stronger generation of bosonic currents than the MFA, but the qualitative picture stays the same.
In addition limits of validity of the polaron picture can be investigated by considering states with higher values of |P |. For example, Fig. 4 shows that for |P |= 4π, the current of the impurity (almost) does not depend on Φ. This current is critical in a sense that by changing the flux of the impurity one generates only the current of bosons. The value of the critical current, j cr I , is decreased by increasing c, in agreement with the mean-field studies [61,62].
The critical current can be seen as an analogue of the critical velocity of a classic impurity that moves in a superfluid (cf. Landau critical velocity). Using this analogy, we can understand why the mean-field approximation in the co-rotating frame does not provide the correct value of the critical current. The MFA does not describe accurately the excitations of the Bose gas when it is decoupled from the impurity. In particular, the MFA leads to an incorrect phononic dispersion relation and implies that the critical velocity can be larger than the speed of sound for small values of c/g [62], which is unphysical. Furthermore, it does not capture type-II excitations of the Lieb-Liniger gas [63], which define the lowest energy state for a given value of the momentum of the Bose gas, see [64] for tutorial. Note that our IM-SRG method also does not capture these states well -the flow equations diverge when the type-II excitations become relevant. For some additional details, see Suppl. Note 3.
Role of defects. The rotational symmetry of the problem makes the Yrast energy spectrum of Fig. 2 double degenerate at Φ = ±π. In realistic systems, the symmetry is typically broken due to the presence of defects, leading to avoided crossings in the energy spectrum (cf. Fig. 5). At the maxima of the avoided crossings one-body currents defined via ∂E/∂Φ vanish affecting transport properties of the system [5]. We also note that the simplest experimental realization of the AB flux in cold-atom setups can be achieved with a rotating weak link [65], which utilizes the equivalence between the Coriolis force in a non-inertial frame and the Lorentz force on a charged particle in a uniform magnetic field. The rotating link introduces a 'defect' potential into the problem whose effect can be studied using the methods discussed in this section.
Our two-component set-up offers unique possibilities to modify currents that are not present in a single-body AB physics. To illustrate this, we add to the Hamiltonian a small perturbation: where H is the original Hamiltonian from Eq. (1) and This additional term describes a short-range potential coupled exclusively to the Bose gas. The current of the impurity is sensitive to W only via the boson-impurity interaction, and therefore the avoided crossing should contain information about the boson-impurity correlation function.
As long as a is small (a → 0), we can assume that the defect has only a minor influence on our system unless the system is close to the degeneracy point, Φ = ±π. Close to these points, we calculate the dimensionless energy using degenerate state perturbation theory: ) is the energy of the Yrast state with P = 0 (|P |= 2π); Ψ 0 (Ψ 1 ) is the corresponding eigenstate. Within the MFA, the matrix elements read as where α = f * i (z)f j (z)dz, and subscripts determine the Yrast state, e.g., i = 0 corresponds to P = 0.
To provide insight into the avoided crossing, we focus on Φ = ±π. In this case f 1 = f 0 , and ⟨Ψ i |W |Ψ j ⟩ depends only on the density in the co-moving frame (or equivalently on the impurity-boson correlation function in the laboratory frame). This density, hence, the splitting of the energy levels, is sensitive to the values of c. For example, if c = 0, then the defect should destroy the rotational invariance of the Bose gas only. Indeed, in this case, |f | 2 is constant and ⟨Ψ i |W |Ψ j ⟩ = 0.
Panel a) of Fig. 5 illustrates the avoided crossing for a small value of a. Note that the energy of the system, E(P, Φ), increases for a > 0. This effect does not appear in the figure as we only show the energy difference. The interesting part is that in the presence of W the energies of the first and second Yrast state no longer cross. The splitting of the energies, ∆E = 2aLN I, is determined by the integral which can be estimated using the density in the thermodynamic limit at Φ = 0 [66]: Finally, we note an interference effect that appears if we place a second small perturbation into the system: In analogy to above, we can define a coupling integral as If d = 1/2, this matrix element vanishes and the energy levels cross again (within the lowest order of perturbation theory). This happens because the perturbations are placed opposite to one another, which effectively restores the rotational symmetry in this case. This effect can be seen also for more than two perturbations, as long as they are placed in a symmetric order on the ring (for example three defects in a form of an equilateral triangle).
Conclusions
To summarize, we studied an impurity coupled to the AB flux in a Bose gas. We argued that i) the system can be described using the ideas developed for the Bose polaron, ii) the AB ring can be a testbed for studies of the fewto many-body crossover in cold-atom polaron problems.
In particular, observation of persistent currents in the AB ring with an impurity can shed light onto coherence properties of the Bose polaron. Note that the 1D world has inherent phase fluctuations, which can be captured using the IM-SRG approach (see also Suppl. Note 3). These fluctuations should be necessarily taken into account when studying persistent currents of polarons.
Our investigation of currents shows that the AB ring can provide a platform for studying a few-body analogue of the critical velocity in a Bose-polaron problem. Furthermore, if we assume that the critical current, j cr I , does not depend on Φ, then (according to Eq. (10)) the energy of the system is E = E cr + j cr I Φ. This expression connects the bosonic current and the energy of the system motivating a study of few-body precursors of collective excitations in a Bose gas.
Finally, we note that ∂ 2 E/(∂Φ 2 ) relates to the inverse of the effective mass, which defines transport properties of a polaron. This relation bears some similarity to the Thouless conductance in a disordered medium [67]. It might be interesting to explore this connection further, in particular, for a weakly interacting light impurity that within the Born-Oppenheimer approximation experiences a disorder potential created by heavy bosons.
Methods
Mean-field approach. We use two methods to investigate the system. The first one is the mean-field approach (MFA) in relative coordinates. It assumes that all bosons occupy one state in the frame co-moving with the impurity, so that the total wave function of the system can be approximated as where f (z) is a normalized function determined by minimizing the Hamiltonian. The variational procedure leads to the Gross-Pitaevskii equation, which can be solved semi-analytically, see Cominotti et al. [68] and Suppl. Note 3. Flow equations. The MFA is a well-established approach by now whose accuracy for stationary impurity problems has been shown by comparing to numerical quantum Monte Carlo [35,69] and state-of-the-art RG methods [35,39,68,70]. The MFA in time-dependent problems was discussed in [71,72]. In spite of those previous tests of MFA, we still find it necessary to validate it for the problem at hand. To this end, we shall use flow equations in the form of the so-called in-medium similarity renormalization group method (IM-SRG). This is an ab initio method that has been employed in condensed matter and nuclear physics [73][74][75] (for applications for one-dimensional problems with impurities, see [35,45,70]). For convience of the reader, we provide a brief introduction into IM-SRG and further compare its results to the MFA in Suppl. Note 3.
Data Availability
The data that support the findings of this study are available from the corresponding author upon reasonable request.
Code Availability
The code used for this study is available from the corresponding author upon reasonable request.
In the main part of the paper, we consider the AB flux coupled to the impurity. Here, we discuss a more general scenario in which the AB flux is coupled also to the bosons. The corresponding Hamiltonian reads: As in the main part, we analyze the Schrödinger equation in the frame comoving with the impurity Note the differences between this equation and the one where the AB flux coupled only to the impurity. In particular, one cannot introduce a single effective variable (cf. P) that parameterizes the energy of the system. Still, one can cast Eq. (S2) into the form discussed in the main part . Therefore, one can use the methods and results of the main part to study a system with the AB flux coupled to bosons.
We illustrate this in Fig. S1, which presents the energy for the parameters of Fig. 2 assuming that Φ I = 0. The energies at Φ B = 0 are not modified, but there is dramatic change for Φ B > 0. First, the term N Φ 2 B /2 leads to a faster change of the energy with the flux. Second, we see that there is no energy level crossing for the Yrast states with P = 0 and |P |= 2π. While the energy spectrum must still be periodic in Φ B with the period The data are obtained using the mean-field ansatz (solid curves) and the IM-SRG (crosses with errorbars), black triangles are the results of Ref. [2] for Φ = 0. The dotted curves show the energy of the non-interacting system. Φ B /2π, the energy is no longer parameterized by an effective total momentum P, which means that the states with 0 < Φ B < π are not connected to the states with P = −2π and Φ B > π.
Finally, we mention the special case of equal fluxes Φ I = Φ B . This scenario corresponds to a 'rotation' of all particles around the ring with the same 'velocity'. In this case, the energy depends on the AB flux only via a constant energy shift. The flux couples exclusively to the center-of-mass motion of the system, in agreement with previous studies, see, e.g., [1].
Supplementary Note 2: Mass imbalance
In the main part of the paper, we assume that the masses of the impurity and the bosons are equal. Here, we briefly discuss a mass-imbalanced system. In this case, the Schrödinger equation in the co-moving frame reads: The corresponding Gross-Pitaevskii equation is: S2) with κ = m/(m + 1) the reduced mass. (Recall that we use a system of units in whichh = M = 1.) Note that κ is the only parameter that contains information about mass imbalance. One can write the solution to the mean-field equation using the same approach (see Supplementary Note 3), allowing us to transfer the conclusions of the main part of the paper to the mass-imbalanced system.
In Fig. S2, we present the energies as well as the currents for a heavy (m = 5) and a light (m = 0.5) impurities. [Note that the definition of the impurity current should contain an additional factor 1/m in Eq. 8 for the mass-imbalanced case. In practice, to calculate j I , we use conservation of momentum, i.e., we calculate the bosonic current first, and then use that j I = P − N j B .] First of all, note that the qualitative behavior of the observables is similar to that in the equal-mass case. The main differ-ence is that a light (heavy) impurity generates stronger (weaker) bosonic currents: the kinetic energy of the impurity is large (small) and therefore it is energetically more (less) favorable to excite the Bose gas.
Finally, we remark that for a heavy impurity, the mean-field results are in agreement with IM-SRG over a large range of Φ and P . This might be connected to the two observations: First, like discussed above, a heavy impurity leads to a weak bosonic current. Second, the mean-field approximation neglects the mixed derivative terms which scale as 1/m, therefore, a higher impurity mass leads to a weaker effect of this simplification. For a light impurity, the mean-field approximation is accurate only for low values of P.
Supplementary Note 3: Methods
Here, we provide some technical details about the methods used to investigate the system. In particular, we discuss the semi-analytical mean-field ansatz, its validity, and the numerical IM-SRG method.
Mean-field ansatz
For the mean-field approximation, we use a product state in the co-moving framẽ where f minimizes the expectation value of the Hamiltonian. Note that this ansatz is conceptually different from the strong-coupling approach in which the product state is written in a laboratory frame, and which predicts self-localization of the impurity [3]. f is computed using the Gross-Pitaevskii equation: where µ is the chemical potential, P = P +Φ is the 'total' momentum, κ = m/(m + 1) is the reduced mass, and is the momentum of one boson. The impurity-boson interaction leads to the boundary condition: The derived Gross-Pitaevskii equation has been discussed before, see, e.g., Ref. [4]. For convenience of the reader, we summarize here the steps to find its solution semianalytically. To simplify the notation, we introduce the momentum We assume that which leads to the two coupled differential equations: whose solutions read: p . sn and dn are Jacobi elliptic functions, F (K) is the (complete) elliptic integral of first kind, E the elliptic integral of second kind, Π the elliptic integral of third kind and am is the Jacobi amplitude [5].
The parameters s 1 , s 2 , s 3 determine the solution depending on the particle number and interactions. The boundary condition due to the impurity interaction gives an expression for s min as a function of s 1 , s 2 , s 3 . Note that there exist three solutions. We take the physical solution that reproduces the behavior of the Bose polaron in the limit P → 0 discussed in Ref. [6].
To find the parameters s 1 , s 2 , s 3 , we solve a set of equations that demand that the state solves the Gross-Pitaevskii equation, it is normalized, and its phase is periodic: These equations are solved numerically with s 1 ≤ s 2 ≤ s 3 . (Other orderings produce solutions that do not reduce to the mean-field solution of Ref. [6] in the limit P → 0.) The chemical potential is given by: Having obtained the solution, we can calculate properties of the system, e.g. the energy of the system:
Flow Equation Approach (IM-SRG)
Flow equations. The flow equation approach (also called in-medium similarity renormalization group or IM-SRG) is a beyond-mean-field method that (block)diagonalizes the Hamiltonian in second quantization, via the so-called flow equation Here, s is the flow parameter, which formally plays a role of (imaginary) time. The generator of the flow η should be chosen such that the off-diagonal matrix elements vanish in the limit s → ∞ [7]. As we are only interested in the lowest energy state of the system for a given value of P, we normal order the Hamiltonian using a condensate as a reference state, see Ref. [8]. This leads to the normal-ordered Hamiltonian where we denote normal ordered operators with : O :.
The matrix elements f ij and Γ ijkl describe one-and twoparticle excitations from the reference state. For the generator, we use these are the matrix elements that need to vanish in order to decouple the lowest energy state from the excitations. Once the flow equation reaches a steady state, the lowest energy state state is decoupled.
To calculate observables other than the energy, we evolve the operator O together with the Hamiltonian, i.e., we solve the flow equation in addition to Eq. (S16). We can calculate different observables, e.g., the current of a single boson, where ϕ k is the one-body basis function in which we expand the Hamiltonian and a k (a † k ) are annihilation (creation) operators.
Reference state. The transformation governed by the flow equations can be understood as a mapping between the reference state and an eigenstate of the system. Since we are interested in a system of bosons, it is reasonable to use condensate as reference state. Our reference state is constructed iteratively: Starting from the ground state solution of the non-interacting Hamiltonian for P = 0, the density and phase are calculated with IM-SRG and used as a new reference state. This procedure is repeated until both, the phase and the density, do not change upon iteration. Then, P is increased by a small amount and the reference state is calculated again. We repeat this until the desired value of P is reached.
Note that other choices for the reference state are possible in principle, e.g., the mean-field solution discussed in Supplementary Note 3, see Ref. [9]. However, we observed that for the system under consideration the procedure outlined above allowed us to calculate properties of the system for a larger range of P.
Accuracy of IM-SRG. Higher order terms induced in evolution of Eq. (S16) make it impossible to find a numerically exact solution for the considered values of N , and should be truncated. In our truncation scheme, we truncate at the two-body level. Three-body operators that contain at least one a † 0 a 0 operator are also kept, however, a † 0 a 0 is treated as a c-number. This means that we work only with zero-, one-and two-body operators in Eq. (S16) which leads to a system of coupled, closed, non-linear differential equations, which we solve numerically [8][9][10].
We estimate the error due to the neglected pieces (called W ) using second order perturbation theory where Φ p is a state that contains three-body excitations and Φ ref is our reference state. We construct the Hamiltonian in second quantization using the eigenfunctions of the one-body Hamiltonian without momentum or flux (P = 0). Since we can only work with a finite Hilbert space, we solve the flow equations for different numbers of basis states (in our case n ∈ [11,13,15,17,19,21]). For the energy, we fit these values with to estimate the result in infinite Hilbert space, E(n → ∞); b 1 and b 2 are fitting parameters. For other observables, such a fit is not always possible. In such cases, we use instead the result for the largest Hilbert space (n = 21) and estimate its error from the largest deviation to smaller Hilbert spaces (n = 11,13,15,17,19).
Therefore, there are in total two contributions to our error bars: The truncation error from neglecting higher order terms in the flow equation and the truncation error due to a finite Hilbert space. We assume that both errors are not correlated, and add their absolute values for our final error estimation For a more detailed description of the method we refer to Ref. [8], where the flow equations and our estimate of the truncation error are introduced, see also Ref. [9] for information about calculation of observables and a detailed explanation of our estimate of error bars.
To further validate our numerical results, we benchmark them against Ref. [2]. Overall we observe a good agreement between IM-SRG and Ref. [2], see the figures in the main part of the paper. Only for c/g = 1 and P = 4π there are some small deviations. We interpret this deviation as follows: IM-SRG uses a condensate as a reference state, and fails to describe excited states of the Bose gas, which become important when the impurity current reaches its critical value. This argument is supported by the fact that the bosonic currents calculated with IM-SRG are always smaller than those in Ref. [2]. To improve the IM-SRG method, a reference state that includes boson-boson interactions could be used, like it is done in nuclear physics for fermionic systems, see e.g. Ref. [11,12]. Such references states are outside the scope of the present work.
Validity of mean-field ansatz
In the main part of the paper, we showed results for the energy (Fig. 2) and the currents (Fig. 4). For P ≤ 3π, we observed a good agreement between the MFA and IM-SRG, justifying the use of the MFA. For larger values of P, the MFA was less accurate, see, in particular the currents for P = 4π. Here, we use IM-SRG results to further test the validity of the MFA.
To this end, we calculate the densities, ρ(z) = |f (z)| 2 , and phases, θ(z), of the Bose gas for Φ = 0 and |P |= 0, 2π, 4π, see Fig. S3. In our IM-SRG calculations these quantities can be expressed as follows: For P = 0 and |P |= 2π, there is a good agreement between the two methods. For |P |= 4π, we start to see a difference, which is particularly noticeable in the phase. This is in agreement with our observation (see the main text) that the IM-SRG and mean-field results for the currents disagree when the value of P is large. Recall that the bosonic and impurity currents can be related to the gradient of the phase using Eq. (S3). We can go one step further with the beyond-mean-field IM-SRG method and estimate directly the off-diagonal coherence of the bosons by calculating a property called phase fluctuations given by the one-body density matrix (see, e.g., [13][14][15]): (S21) The quantity δΦ zz ′ vanishes for a condensate. It is therefore a direct measure of the validity of the mean-field approximation. A plot for the same parameters as before is also shown in Fig. S3. We see that for P = 0 and |P |= 2π phase fluctuations are nearly identical and that for |P |= 4π there is a considerable increase. Together with the behavior of the energies, currents, phases and densities, we come to the conclusion that the mean-field approximation works well only for small values of P and Φ such that P < 3π.
Let us provide some physical intuition into why the MFA results become less accurate when we increase P + Φ. To this end, we consider the case of vanishing impurity-boson interaction strength, c → 0. In this limit, the bosons are described by the well-known Lieb-Liniger gas [16,17]. There exist two different types of excitations in the Lieb-Liniger model: type-I and type-II excitations.
To illustrate this discussion, we plot the energy of the system with c = 0 (non-interacting impurity) as a function of P in Fig. S4. The blue (solid) curve represents the case when the bosons are in the ground state and all of the momentum is distributed to the impurity such that The orange (dashed) curve is obtained using the lowest energy state of the Bose gas with a quantized value of |P |= 2π. The rest of the total momentum is given to the impurity We use E bosons from the Yrast curve presented in Ref. [2]. For comparison in Fig. S4, we also show the results from Dashed lines between these symbols are added to guide the eye. 0 π 0.5 π 1 π 1.5 π 2 π 2.5 π 3 π 3.5 π 4 π P Imp. + LL ground. Imp. + LL excited Ref. [2] FIG. S4. Energy for a system of a non-interacting impurity (c → 0) in the Lieb-Liniger gas. The Lieb-Liniger gas is either in the ground (blue solid curve) or in the excited (orange dashed curve) states. The parameters of the system are N = 19, γ = 0.2, m = M = 1. The triangles show the results from Ref. [2] for these parameters.
Ref. [2] for c = 0. The point where the curves cross imply a strong momentum exchange if c ̸ = 0, which might be beyond IM-SRG and MFA. Note that this crossing can be observed directly only if Φ ̸ = 0.
We see that at small values of P, it is energetically favorable to have all 'vorticity' in the impurity. For some parameters P < 4π, however, it is more favorable to excite the Bose gas. This analysis is in agreement with the calculations of [2] for quantized values of P . It be-comes clear now why the mean-field results presented in the main part of the paper are accurate only for P = 0 and |P |= 2π: The MFA does not describe the excitations of the bosons for c = 0 well.
The IM-SRG method faces a similar problem since we use a condensate as a reference state. For example, our flow-equation calculations for c/g ≪ 1 diverge close to the point where it is energetically favorable to excite the bosons (e.g., P ≃ 3.8π in Fig. S4). In this way IM-SRG signals that a simple condensate description of the Bose gas in the co-rotating frame is no longer valid. It is worth noting that IM-SRG converges over a larger range of P if the impurity-boson interactions are stronger. In particular, IM-SRG is in agreement with Ref. [2] for |P |= 4π for strong impurity-boson interactions. This leads us to the question about the effect of the impurity-boson interactions on the excitations of the system, which determines the overlap with the reference state of the IM-SRG method, a condensate. Note that this question might be connected to the critical velocity of an impurity in a onedimensional Bose gas. We leave this question to future studies.
Supplementary Note 4: Convergence towards thermodynamic limit
As the main part of the paper argues, an AB ring allows us to study the convergence of the effective mass to the thermodynamic limit (N, L → ∞, N/L = ρ = const). Here, we discuss this few-to many-body crossover for the effective mass and the energy in more detail. Note that the energy has been discussed already in Ref. [6]. In particular, it was found there that the energy decays as E − E(c = 0) = ρ 2 π 2 /(2N κ) for a non-interacting Bose gas with P = Φ = 0. For an interacting Bose-gas, the energy converges to a finite value that is determined by the polaron energy.
Below, we first use the results of Ref. [6] to provide some further analytical insight. After that, we use a fitting procedure to find the convergence of the energy and the effective mass in our numerical simulations.
Energy
Reference [6] shows that the energy of a system with an impenetrable impurity (c → ∞) is E − E(c = 0) ρ 2 = 8K 4 (p)p + 2K 2 (p)κγN (N − 1)(p + 1) where K is the complete elliptic integral of first kind and p is a parameter determined from the equation: with the complete elliptic integral of second kind, E, [5]. In the limit of N → ∞, p → 1 and therefore E(p) → 1. This allows us to derive an analytic expression for K(p) in terms of γ and N , which can be inserted into Eq. (S1) producing (S3) In the limit of large N , the energy decays as 2/κN to the value 16γ/9κ, which corresponds to the boundary energy of the Bose gas (cf. Refs. [18,19] for an infinitely heavy impurity with κ = 1).
Let us now consider finite impurity-boson interactions. To this end, we fit the IM-SRG results with In Fig. S5, we show the energy for γ = 0.2 and c/g = 0.05, 1, 5. Fitting reveals that the parameter σ is in the range of σ = 1.065 − 1.08 suggesting that also for finite impurity-boson interactions the energy decays like 1/N . Small deviations from the 1/N behavior are probably due to relatively small values of N . We checked this statement by adjusting the number of particles.
Effective mass
It appears complicated to gain analytic insight into the few-to many-body crossover of the effective mass. Therefore, we fit the results from Fig. 3 with Eq. (S4). We find that contrary to the self-energy of the impurity the convergence seems to strongly depend on the strength of the impurity-boson interactions. For weak interactions (c/g = 0.05), we find the fastest convergence with σ = 1.07 ± 0.15. For c/g = 1, the effective mass convergences with σ = 0.99 ± 0.01. For strong interactions (c/g = 5), the convergence is significantly slower σ = 0.67±0.03. To understand this, note that for an impenetrable impurity (c → ∞), the impurity acts as a wall moving through the Bose gas. This means that the effective mass should be the mass of the whole system, i.e., the effective mass should actually increase with N in this case. Our results demonstrate that the effective mass is highly sensitive to the value of c/g, unlike the energy. In particular for strong impurity-boson interactions, one requires more bosons to observe convergence of the effective mass.
Supplementary Note 5: Definition of effective mass
For convenience of the reader, here we review and explain equations that define the effective mass in the thermodynamic limit with Φ = 0, cf. Refs. [20,21]. These equations also appear in the main part of the paper, although in a somewhat modified form.
The problem of a quantum impurity moving through a medium is simplified by mapping this many-body system to an effective one-body set-up -polaron. This quasiparticle is intuitively understood as the impurity dressed by the excitations of the medium, which lead to an effective mass higher than the bare mass.
In the polaron picture, the low-lying states of the system have (approximately) the energy where m eff is the effective mass and P is total momentum of the system. Similar to Eq. 10, we can write the velocity (probability current) of the impurity as v imp = P/m eff .
Sometimes, one employs this expression to define the probability current of the polaron as v pol = v imp .
Using Eq. (S2), we write the energy of the system as where m eff is the effective mass and v imp is the velocity of the impurity. Finally, we connect the effective mass to the momentum carried by the medium (in our case, the Bose gas). To this end, we use the fact that the total momentum is distributed between the impurity and the bosons P = P imp + P bos , and that (by definition) the impurity momentum is given by P imp = mv imp . With this, we derive m/m eff = 1 − P bos /P.
Eqs. (S1)-(S4) are equivalent and can be used to compute the effective mass in numerical simulations. We noticed in our IM-SRG calculations that the last equation is numerically the most stable and therefore it was used in our study. | 11,909.8 | 2023-01-25T00:00:00.000 | [
"Physics"
] |
A new approach to the dynamics of AdS space-times
In the last years, the stability of Anti-de Sitter (AdS) space-times has attracted a lot of attention. Not only because of its own importance but primarily due to the AdS/CFT correspondence that conjectures an equivalence between an string theory on an asymptotically AdS space-time and a conformally invariant quantum field theory (CFT). Recently, Bizon and Rostworowski (2011) showed that the boundary of AdS prevents energy from dispersing. As a consequence, AdS perturbations will collapse after bouncing back from the AdS boundary a sufficient number of times for their energy to be concentrated enough. The details of the mechanisms that triggers the instability are currently an active topic of discussion. Here, we present a new approach to the problem of the collapse of a massless scalar field that introduces a transition from a Cauchy-based evolution to a charateristic one that is able to track with greater accuracy the latest stages of the collapse.
Introduction
The interest in the study of space-times with negative cosmological constant is very recent. Although AdS is a maximally symmetric solution to Einstein's field equations known since a long time ago, physicists have ignored this space-time and even classical books of General Relativity rarely mention it (one notable exception is the Hawking and Ellis book [1]). The establishment of the gauge/gravity duality through the AdS/CFT correspondence shown by Maldacena [2] have changed the scenario completely, generating a lot of activity. As in the case of Minkowski and de Sitter space-times, AdS space-time has been shown to be stable under linear (infinitesimal) perturbations [3], but the problem of non-linear perturbations of AdS is still under debate. Different numerical simulations have shown, in contrast to the Minkowski and de Sitter cases, instabilities in the dynamics of real [4,5] and complex [6] massless scalar fields in AdS space-times. However, some works indicate there may exist states free from instabilities [7,8]. There have also been attempts to study the AdS stability with purely analytic means [9,10,11,12]. In addition, there are some studies on the role of the AdS boundary that consider space-times without cosmological constant but with reflecting boundaries [13,14,15].
We address the problem of the collapse of a massless scalar field in an Asymptotically AdS (AAdS) space-times from a new point of view. We start with a Cauchy evolution scheme in order to follow the evolution through the different bounces at the AdS boundary, but then we introduce a transition to a characteristic evolution scheme to follow the last stages of the collapse with much more resolution (more details will appear in [16]).
Dynamics of AAdS space-times
The dynamics of a self-gravitating real massless scalar field φ in an AAdS space-time of dimension d + 1 is described by the Einstein-Klein-Gordon field equations: where g µν (µ, ν = 0, ..., d) is the space-time metric (and ∇ µ the associated canonical covariant derivative), Λ < 0 is the cosmological constant, and G µν is the Einstein tensor. We solve these equations by using a Cauchy-characteristic scheme. We start with Cauchy evolution using a system of coordinates where the AdS boundary (reachable in a finite time by the scalar field propagation) is at a finite coordinate distance. In this way we can follow the evolution of the scalar field, which typically will bounce at the AdS boundary a finite number of times before collapsing [4,6]. When the collapse is approaching we need a high resolution to control the gradients in the field that signal the formation of an apparent horizon (AH). We have noticed that a multidomain pseudospectral method with adaptive refinement of the domains is still not enough for many purposes, although it provides a high accuracy for most parts of the evolution. To deal with this we have designed a numerical scheme that makes a transition from the Cauchy evolution to a characteristic one where each point follows an ingoing null geodesic. This allows us to follow much better the collapse.
Cauchy evolution
The metric of a d+1-dimensional spherically symmetric AAdS space-time can be written as [4,5] where is the AdS length scale, defined as 2 = −d(d − 1)/2 Λ, and dΩ 2 d−1 is the line element of a d − 1-sphere. The important point is that x is a radial compactified coordinate with range [0, π/2], being x = 0 the origin and x = π/2 the AdS boundary. We recover pure AdS by taking A = 1 and δ = 0. Our evolution equations for (A, δ, φ) are a modification of the ones in [4,5].
Characteristic evolution
We have adapted the scheme first introduced by Christodoulou [17] and later used by [18,19] for AAdS space-times. The starting point is the spherically-symmetric metric where u is an ingoing null coordinate and r is a radial one (the AdS boundary is at r → ∞). The functions g = g(u, r) andḡ =ḡ(u, r) are always greater that some normalization value at the origin that we choose to be 1. Pure AdS is recovered by taking g = 1 andḡ = 1 + r 2 /l 2 . The scalar field is described by the following two variables: The metric functions can be recovered from the scalar field variables from the expressions: The characteristic evolution consists in evolving the variables along ingoing geodesics from one u = const. null slide to the next one. In this way the radial coordinate of a grid point obeys the following evolution equation [see Eq. (3)]: The idea is to evolve the scalar field variable h from one null slice u = const. to the next one along the ingoing null geodesics of Eq. (7), while the rest of variables (h,ḡ, g) can be obtained from h by radial integration according to Eqs. (4), (5), and (6). The equation for h comes from the Klein-Gordon equation and using that along the ingoing null geodesics we have where r(u) is a solution of Eq. (7). Then
Transition from Cauchy to characteristic evolution
In order to use the characteristic evolution to follow the collapse, we need to construct initial data at an initial null slide u = u o = const. from the results of the Cauchy evolution. To that end we have to translate the information from a set of t = const. slides coming from the Cauchy evolution to the values of the variable h, the one we evolve from one null slide to the next one, at the initial null slide u = u o . A key ingredient to perform this transformation is the following relations between Cauchy and characteristics variables: Figure 1. Green lines show data obtained from the Cauchy evolution and blue ones the outgoing null geodesics constructed to initialize the characteristic evolution. In the left figures the data generated by a complete Cauchy evolution is shown and, in the right one, the purple area indicates data obtained from the characteristic evolution. where U ± are characteristic fields of the Klein-Gordon equation (with propagation speeds ±Ae −δ ) so that the set of variables (φ, U ± ) leads to a first-order hyperbolic system for the Cauchy evolution. Another key ingredient is the relation between the coordinate systems used for the Cauchy and characteristic evolutions. The relation between radial coordinates is quite simple: From Eqs. (2) and (3) it is just r = tan(x). We do not have an expression for the null coordinate u in terms of (t, x), instead the u = const. null slices have to be obtained from the equations for the outgoing null geodesics in (t, x) coordinates: which can be integrated from the origin using the Cauchy evolution data. From the coordinate change we can derive the relation which can be used to track the formation of an AH in the characteristic evolution (A → 0). The Cauchy-characteristic transition is illustrated in figure 1. In the left plot, the green lines (horizontal) represent the data generated by the Cauchy evolution. In each frame, we can start an outgoing null geodesic (u = const.) from x = 0 using Eq. (11) and compute where this geodesic will be in the next step and store the data for both x (or r) and the field h. We can construct in this way several null outgoing geodesics and then perform the Cauchy-characteristic transition using the one that has the optimal initial conditions.
Comparing the Cauchy and characteristic evolution schemes
In this section, for the case of d = 3, we compare a simulation done only with Cauchy evolution with another one that starts with Cauchy evolution but changes to characteristic evolution near the collapse. In the first case, as we approach the collapse, the scalar field develops a steep profile. Then, in order to maintain the accuracy and stability of the code we have to introduce more resolution, making the simulation much slower. In the other case, we can change before the profile becomes steep making the Cauchy evolution stage much faster. In the characteristic evolution stage we do not need to introduce more points because the null slicing adapts to the Figure 4. Critical exponent obtained using characteristic initial data and purely characteristic evolution. The value obtained is γ ∼ 0.378. horizon formation concentrating a large portion of the points in the region we are interested in. This is illustrated in figure 2, where the small plot in the left shows the steep profile generated in the energy density (notice the logarithmic scale in x). The plots shows A for both evolutions (in the characteristic case it is estimated using Eq. (12)). The computational time used for the two plots is similar. It is clear that the characteristic evolution adapts much better to our problem, allowing us to go much closer to the AH formation (A → 0). The right plot shows how important is to get as close as possible to AH formation since at some point the profile of A generates structure at small scales.
Results and discussion
Using the method of the Cauchy-characteristic transition presented above, we can define our initial conditions as: where and ω, the amplitude and width of the initial profile respectively, are free parameters.
We have a adopted a fixed value for ω (ω = 0.05) and have performed evolutions for a large number of values for the amplitude . For sufficient large amplitudes the field collapses in way very similar to what Choptuik described in the asymptotically-flat case [20]. This is problematic in our scheme because the time of collapse for this kind of simulations is smaller than the time that we typically need to construct an initial geodesic to initialize the characteristic evolution. For relatively small amplitudes, the field tries to collapse but fails and then it disperses. In the asymptotically-flat case the field will completely disperse and the end state of the evolution is Minkowski space-time. In the case of AAdS spacetimes the field eventually (in a finite time) bounces at the AdS boundary and comes back towards the origin where it will collapse provided certain conditions are satisfied, otherwise the field will disperse again and bounce at the AdS boundary. This will happen a number of times until the field will finally collapse. In figure 3 we can see some results from numerical simulation for initial configuration with amplitudes in the range ∈ [160, 330]. The right branch correspond to initial conditions that collapse after just one bounce at the AdS boundary. Each branch to the left corresponds to configurations that go through one additional bounce (the left branch corresponds to configurations that went through five bounces). This behavior was reported in [4] and the evidence indicates that after each bounce the energy is transferred from lower to higher frequency modes [21,22].
Up to now we have discussed simulations that use the Cauchy-characteristic transition. Another possibility is to use only the characteristic evolution, by prescribing initial configurations at a u = u o = const. null surface (an outgoing null geodesic) and by evolving it using our characteristic code. Since these simulations do not cover the whole space-time (see figure 1) we can only use them for configurations are always within the space-time region covered by the characteristic evolution. An example of this are configurations that collapse directly without bounces at the AdS boundary. For those we use initial conditions given by φ =h(u = 0, r) = r 2 e − (r−r 0 ) 2 Fixing ω = 0.05 and r 0 = 0.1, we can vary the amplitude and only consider those configurations that form an AH. The others, which disperse, correspond to evolutions with bounces, but we cannot track them with the characteristic evolution. Around = * ∼ 2.92, we observe critical behavior: The black hole mass at the AH formation scales as: M AH ∼ ( − * ) γ , where M AH = r AH 1 + (r AH / ) 2 /2. The analysis of this case can be seen in figure 4. By fitting our results in the region close to the critical amplitude, * = 2.920151, we obtain a critical exponent of γ ∼ 0.378. This result is compatible with the one obtained first by Choptuik [20]. This is something that was anticipated by Bizon and Rostworowski [4] and that here we confirm with one more digit of precision.
To sum up, we have introduced a new scheme for the study of the dynamics of spherically symmetric AAdS space-times, in particular the fate of instabilities, which is based on a combined Cauchy-characteristic evolution that provides higher precision to follow collapse and the formation of an AH. We have also presented results from simulations that confirm recent findings in the literature obtained with other evolution schemes. | 3,210.2 | 2015-04-28T00:00:00.000 | [
"Mathematics"
] |
Constituent quark number scaling from strange hadron spectra in $pp$ collisions at $\sqrt{s}=$ 13 TeV
We show that the data of $p_{T}$ spectra of $\Omega^{-}$ and $\phi$ at midrapidity in inelastic events in $pp$ collisions at $\sqrt{s}=$ 13 TeV exhibit a constituent quark number scaling property, which is a clear signal of quark combination mechanism at hadronization. We use a quark combination model under equal velocity combination approximation to systematically study the production of identified hadrons in $pp$ collisions at $\sqrt{s}$= 13 TeV. The midrapidity data of $p_{T}$ spectra of proton, $\Lambda$, $\Xi^{-}$, $\Omega^{-}$, $\phi$ and $K^{*}$ in inelastic events are simultaneously well fitted by the model. The data of multiplicity dependency of yields of these hadrons are also well understood. The strong $p_{T}$ dependence for data of $p/\phi$ ratio is well explained by the model, which further suggests that the production of two hadrons with similar masses is determined by their quark contents at hadronization. $p_{T}$ spectra of strange hadrons at midrapidity in different multiplicity classes in $pp$ collisions at $\sqrt{s}=$ 13 TeV are predicted to further test the model in the future. The midrapidity $p_{T}$ spectra of soft ($p_T<2$ GeV/c) strange quark and up/down quark at hadronization in $pp$ collisions at $\sqrt{s}=$ 13 TeV are extracted.
I. INTRODUCTION
Most of hadrons produced in high energy collisions are of relatively low (transverse) momentum perpendicular to beam axis. Production of soft hadrons is mainly driven by soft QCD processes and, in particular, nonperturbative hadronization. Experimental and theoretical studies of soft hadron production are important to understand the property of the soft parton system created in collisions and to test and/or develop existing phenomenological models. Heavy-ion physics at SPS, RHIC and LHC energies shows the creation of quark-gluon plasma (QGP) in early stage of collisions. In elementary pp and/or pp collisions, it is usually presumed that QGP is not created, at least up to RHIC energies. However, recent measurements at LHC energies show a series of highlights of hadron production in pp collisions such as ridge and collectivity behaviors [1][2][3], the increased baryon-to-meson ratio and the increased strangeness [4][5][6]. Theoretical studies of these new phenomena mainly focus on how to build the new feature of small partonic system relating to these observations by considering different mechanisms such as the color re-connection, string overlap and/or color rope [7][8][9][10], or by considering the creation of mini-QGP or phase transition [11][12][13][14][15][16], etc.
In our latest works [17][18][19][20][21] by studying the available data of hadronic p T spectrum and yield, we proposed a new understanding for novel features of the hadron production in the small quark/parton systems created in pp and/or p-Pb collisions at LHC energies, that is, the change of hadronization mechanism from the traditional fragmentation to the quark (re-)combination. In quark (re-)combination mechanism (QCM), there exists some typical behaviors for the production of identified *<EMAIL_ADDRESS>†<EMAIL_ADDRESS>hadrons such as the enhanced baryon-to-meson ratio and quark number scaling of hadron elliptical flow at intermediate p T . These behaviors are already observed in relativistic heavy-ion collisions [22][23][24] and, recently, are also observed in pp and p-Pb collisions at LHC energies in high multiplicity classes [3,4,6,25]. In particular, a quark number scaling property for hadron transverse momentum spectra is first observed in p-Pb collisions at √ s N N = 5.02 TeV [17]. Figure 1. The scaling property for the dN/dpT dy data of Ω − and φ at midrapidity in different multiplicity classes in pp collisions at √ s = 7 TeV. The coefficient κ φ,Ω in four multiplicity classes is taken to be (1.76, 1.82, 1.83, 1.93), respectively. Data of Ω − and φ are taken from Ref. [26].
Recently, ALICE collaboration reported the data of p T spectra of identified hadrons in different multiplicity classes in pp collisions at √ s = 7 TeV [26] and the preliminary data in inelastic events in pp collisions at √ s = 13 TeV [27]. Here, we find, for the first time, a clear signal of the quark number scaling property for hadronic p T spectra in pp collisions. Considering the production of baryon Ω − (sss) and meson φ (ss), their momentum distribution functions f (p T ) ≡ dN/dp T in QCM under equal velocity combination approximation read as Here, κ φ and κ Ω are coefficients independent of momentum. f s,s (p T ) is s (s) quark distribution at hadronization and we assume f s (p T ) = fs (p T ) in the center rapidity region at LHC energies. With above two formulas, we get a production correlation between Ω − and φ in QCM where κ φ,Ω = κ is independent of momentum. In order to check this scaling property, we take following operation on the dN/dp T dy data of Ω − and φ at midrapidity [26]: (i) divide the p T bin of Ω − (φ) by 3 (2) and (ii) take the 1/3 (1/2) power of the measured dN/dp T dy for Ω − (φ), and (iii) multiply (dN Ω /dp T dy) 1/3 by a constant factor κ φ,Ω so that data points at small p T (p T 0.5 GeV/c) are in coincidence with the scaled data of φ as much as possible. We show in Fig.1 the scaled data of Ω − and φ in different multiplicity classes in pp collisions at √ s = 7 TeV. The relative statistical uncertainties of the scaled data are only a few percentages and are shown as rectangles with filled colors in the figure. We see that in the high multiplicity classes, e.g., Fig.1(a) and (b), the scaled data of Ω − are consistent very well with those of φ, and therefore the quark number scaling property holds well. This verifies our argument in the recent work [21] and is a clear signal of quark combination hadronization in pp collisions at LHC. In the low multiplicity classes, Fig. 1(c) and (d), the scaled data of Ω − are somewhat flatter than those of φ as p T 1 GeV/c and the quark number scaling property seems to be broken to a certain extent. We note that this is probably due to the threshold effects of strange quark production [21].
In Fig. 2, we show the scaled data of Ω − and φ in pp collisions at √ s = 7 and 13 TeV [26][27][28][29] as a guide of energy dependence. We see that the quark number scaling property in inelastic events in pp collisions at √ s = 7 TeV is broken to a certain extent but it is well fulfilled in inelastic events in pp collisions at √ s = 13 TeV. This is an indication of the quark combination hadronization at higher collision energies. Figure 2. The scaling property for the dN/dpT dy data of Ω − and φ at midrapidity in inelastic events in pp collisions at √ s = 7 and 13 TeV. The coefficient κ φ,Ω is taken to be (2.0, 1.5), respectively. Data of Ω − and φ are taken from Refs. [26,27].
On the other hand, we run event generators Pythia8 [30,31] and Herwig6.5 as the naive test of the prediction of string and cluster fragmentation mechanism in pp collisions at √ s = 13 TeV. Fig. 3 shows results of the scaled p T spectra of Ω and φ at mid-rapidity in two event generators. Here we adopt Pythia version 8240 and Herwig version 6521 to simulate. We choose two event classes, inelastic non-diffractive events (INEL) and high multiplicity events with dN ch /dy ≥ 15, to check the multiplicity dependence of the prediction. In Pythia8 simulations, we further check the prediction with default string fragmentation tune (marked as Pythia8 in Fig. 3) and that with rope hadronization mechanism (marked as Pythia8 rope in Fig. 3). Panels (a)-(c) show the scaled spectra of Ω and φ where coefficient κ is chosen so that two spectra are coincident with each other at small p T . Panel (d) shows the ratio of two scaled spectra. We see that the constituent quark number scaling property in two event generators with current tunes is violated by more than 20% at p T 1.5GeV/c. In this paper, we apply a specific quark combination model proposed in our recent works [17,21] to systematically study the production of identified hadrons in pp collisions at √ s = 13 TeV. We mainly calculate the p T distributions and yields of identified hadrons and focus on various ratios or correlations for hadronic yields and p T spectra. We compare our results with the available experimental data to systematically test the quark combination hadronization in pp collisions at LHC energies. Predictions are made for the further test in future.
The paper is organized as follows: Sec. II briefly introduces a specific model of quark (re)combination mechanism under equal velocity combination approximation. Sec. III and Sec. IV present our results and relevant discussions in inelastic events and different multiplicity classes, respectively. A summary and discussion is given at last in Sec. V.
II. QUARK COMBINATION MODEL UNDER EQUAL VELOCITY COMBINATION APPROXIMATION
Quark (re-)combination/coalescence mechanism was proposed in 1970s [32] and has many applications both in elementary e − e + , pp and heavy-ion collisions [33][34][35][36][37][38][39]. In particular, ultra-relativistic heavy-ion collisions create the deconfined hot quark matter of large volume whose microscopic hadronization process can be described by QCM naturally [40][41][42][43][44][45]. In this section, we briefly introduce a quark combination model proposed in previous works [17,21] within QCM framework under the equal velocity combination approximation. We take the constituent quarks and antiquarks as the effective degrees of freedom of the soft parton system created in collisions just at hadronization. The combination of these constituent quarks and antiquarks with equal velocity forms the identified baryons and/or mesons.
A. Hadron production at given numbers of quarks and antiquarks
The momentum distributions of identified baryon and meson are denoted as Here p B and p M are momenta of baryon B i and meson M i , respectively. N Bi and N Mi are the momentumintegrated multiplicities of B i and M i , respectively. The superscript (n) denotes the distribution function is normalized to one. Under the equal velocity combination approximation, also called comoving approximation, momentum distributions of baryon and meson can be simply obtained by the product of those of constituent quarks and/or antiquarks. We have, for where f (n) q (p) ≡ dn q /dp is the momentum distribution of quarks normalized to one.
q2 (x 2 p) are normalization coefficients for baryon B i and meson M i , respectively. Momentum fractions x are given by recalling momentum p = mγv ∝ m, where indexes i, j = 1, 2, 3 for baryon and i, j = 1, 2 for meson. Quark masses are taken to be constituent masses m s = 500 MeV and m u = m d = 330 MeV.
Multiplicities of baryon and meson are
Here N q1q2q3 is the number of all possible three quark combinations relating to B i formation and is taken to be for cases of three different flavors, two identical flavor and three identical flavor, respectively. Factors 6 and 3 are numbers of permutation relating to different quark flavors. N q1q2 = N q1 Nq 2 is the number of all possible q 1q2 pairs relating to M i formation.
Considering the flavor independence of strong interaction, we assume the probability of q 1 q 2 q 3 forming a baryon and the probability of q 1q2 forming a meson are flavor independent, the combination probability can be written as Here N B /N qqq denotes the average (or flavor blinding) probability of three quarks combining into a baryon. N B is the average number of total baryons and N qqq = N q (N q − 1)(N q − 2) is the number of all possible three quark combinations with N q = f N f the total quark number. C Bi is the probability of selecting the correct discrete quantum number such as spin relating to B i as q 1 q 2 q 3 is destined to form a baryon. Similarly, N M /N qq approximately denotes the average probability of a quark and an antiquark combining into a meson and C Mi is the branch ratio to M i as q 1q2 is destined to from a meson. N M is total meson number and N qq = N q Nq is the number of all possible quark antiquark pairs for meson formation.
In this paper, we only consider the ground state J P = 0 − , 1 − mesons and J P = (1/2) + , (3/2) + baryons in flavor SU(3) group. For mesons where we introduce a parameter R V /P which represents the relative production weight of the J P = 1 − vector mesons to the J P = 0 − pseudo-scalar mesons of the same flavor composition. For baryons The parameter R D/O stands for the relative production weight of the J P = (3/2) + decuplet to the J P = (1/2) + octet baryons of the same flavor content. Here, R V /P is taken to be 0.45 by fitting the data of K * /K ratios in pp collisions at √ s = 7 TeV and p-Pb collisions at √ s N N = 5.02 TeV [46] and R D/O is taken to be 0.5 by fitting the data of Ξ * /Ξ and Σ * /Λ [47]. The fraction of baryons relative to mesons is N B /N M ≈ 0.085 at vanishing netquarks [18,21,45]. Using the unitarity constraint of hadronization N M + 3N B = N q , N Bi and N Mi can be calculated using above formulas for the given quark numbers at hadronization.
We summarize the main underlying dynamics of the model. Constituent quarks and antiquarks are assumed to be effective degrees of freedom of soft parton system at hadronization. The combination of these constituent quarks and antiquarks with equal velocity forms baryons and mesons. This is similar to constituent quark model, i.e., the summation of masses (and momenta in motion) of constituent quarks properly constructs the mass (and momentum in motion) of hadron. Model parameters R V /P and R D/O contain unclear non-perturbative dynamics and are obtained by fitting the relevant experimental data and are assumed to be relatively stable in/at different collision systems/energies. Also, the normalization of hadronization process is a prerequisite for quark combination. Quark number conservation is not only globally satisfied via N M + 3N B = N q and N M + 3NB = Nq but also satisfied for each quark flavor via h n qi,h N h = N qi . Here h runs over all hadron species and q i = d, u, s,d,ū,s. n qi,h is the number of constituent quark q i in hadron h. Therefore, this model is a statistical model based on constituent quark degrees of freedom and is different from those popular parton recombination/coalescence models [40,41] which adopt the Wigner wave function method under instantaneous hadronization approximation.
B. Quark number fluctuations and some threshold effects of hadron production As quark numbers at hadronization are small, identified hadron production will suffer some threshold effects. For example, baryon production is forbidden for events with N q < 3. For events with N s < 3, Ω − baryon production is forbidden. In pp collisions at LHC energies, the event-averaged number of strange quarks N s 1 in midrapidity region (|y| < 0.5) for inelastic events and not-too high multiplicity event classes. Therefore, the yield of Ω − is no longer completely determined by the average number of strange quarks but is strongly influenced by the distribution of strange quark number. Similar case for Ξ which needs two strange quarks, etc. Here we use P ({N qi } , { N qi }) to denote the distribution of quark numbers around the event average, and obtain the averaged multiplicity of identified hadrons by where N hi is given by Eqs. (9) and (10) and is the function of {N qi }.
For simplicity, we assume flavor-independent quark number distribution where f runs over u, d, s flavors. Here we neglect the fluctuation of net-charges and take N f = Nf in each events. The distribution of u and d quarks is based on the Poisson distribution P oi N u(d) , N u(d) . As aforementioned discussion, we particularly tune strange quark distribution. Because in minimum bias events and small multiplicity classes in pp collisions N s 1 and Poisson distribution P oi (N s , N s ) in this case has a long tail as N s ≥ 3 which may over-weight the events with N s ≥ 3, we distort the Poisson distribution by a suppression factor γ s , that is, is the Heaviside step function and N is normalization constant. γ s is taken to be 0.8 in inelastic (INEL>0) events and various multiplicity classes.
There are other possible effects of small quark numbers. For example, in events with N s = Ns = 1, because s ands are most likely created from the same one vacuum excitation and therefore they are not likely to directly constitute color singlet and therefore φ production in these events is suppressed. In addition, quark momentum distributions are more or less dependent on the quark numbers (in other words, system size) and we neglect such dependence for the given multiplicity classes and its potential effects are studied in the future works.
III. RESULTS IN INELASTIC EVENTS
We apply the above quark combination model to describe the transverse production of hadrons at midrapidity in pp collisions.The approximation of equal velocity combination in the model is reduced to that of equal transverse-velocity combination. Here, we only study one dimensional p T distribution of hadrons by further integrating over the azimuthal angle. The p T distribution functions of quarks at hadronization at midrapidity are inputs of the model and are denoted as f qi (p T ) = N qi f (n) qi (p T ) with q i = d, u, s,d,ū,s. N qi is the number of q i in rapidity interval |y| < 0.5 and f (n) qi (p T ) ≡ dn qi /dp T is quark p T spectrum normalized to one. We assume the iso-spin symmetry between up and down quarks and assume the charge conjugation symmetry between quark and antiquark. Finally we have only two inputs f u (p T ) and f s (p T ) which can be fixed by fitting the data of identified hadrons.
A. Quark pT distribution at hadronization
Using the scaling property in Eq. (3) and experimental data shown in Fig. 2, we can directly obtain the normalized p T distribution of strange quarks at hadronization, which can be parameterized in the form We emphasize that, by taking advantage of the quark number scaling property, this is the first time we can conveniently extract the momentum distributions of soft quarks at hadronization from the experimental data of hadronic p T spectra. The extracted quark p T spectra carry important information of soft parton system created in pp collisions at LHC energies. First, because of parameters b s and b u in quark distribution function in Eq. (17) are obviously smaller than one, the extracted f (n) u (p T ) and f (n) s (p T ) deviate from Boltzmann distribution in the low p T range. This indicates that thermalization may be not reached for the small partonic system created in pp collisions at LHC. Second, we see that the ratio f Fig. 4 (b), increases at small p T and then saturates (only slightly decreases) with p T . This property is similar to the observation in pp collisions at √ s = 7 TeV [21] and in p-Pb collisions at √ s N N = 5.02 TeV [17] and is also similar to the observation in heavyion collisions at RHIC and LHC [48][49][50]. These information of constituent quarks provides important constraint of developing more sophisticated theoretical models of soft parton system created in high energy collisions.
B. pT spectra of identified hadrons
Among hadrons that are often measured by experiments, pion and kaon are most abundant particles. However, because the masses of pion and kaon are significantly smaller than the summed masses of their constituent (anti-)quarks, the momenta of pion and kaon can not be calculated by the simple combination of those of constituent (anti-)quarks at hadronization [21]. Therefore, momentum spectra of pion and kaon are not the most direct probe of the quark combination model and their results are not shown in this paper. On the other hand, proton, Λ, Ξ − , Ω − , φ and K * 0 can be well constructed by the constituent quarks and antiquarks. These hadrons can be used to effectively test the quark combination model.
In Fig. 5, we show the calculation results of p T spectra of proton, Λ, Ξ − , Ω − , φ and K * 0 in inelastic (INEL>0) events in pp collisions at √ s = 13 TeV using the quark spectra in Fig. 4 and quark numbers N u = 2.8 and N s = 0.86. Here, quark numbers are fixed by globally fitting data of p T -integrated yield densities of these hadrons [27]. Solid lines are QCM results which have included the contribution of strong and electromagnetic decay of resonances. Symbols are preliminary data of hadronic p T spectra at midrapidity measured ALICE collaboration [27]. We see that the data can be well fitted in general by QCM. Our result of K * 0 is slightly smaller than the data. If we multiply the result of K * 0 spectrum by a constant factor and we will see the shape is in good agreement with the data.
Besides the scaling property between the p T spectra of Ω − and φ shown in the introduction (Sec. I), (p +p) /φ ratio as the function of p T can also give an intuitive Solid lines are results of QCM and symbols are preliminary data of ALICE collaboration [27].
picture of microscopic mechanism of hadron production. Proton and φ have similar masses but totally different quark contents. In the central (0-10% centrality) Pb-Pb collisions at √ s N N = 2.76 TeV, the data of (p +p) /φ ratio [51], black squares in Fig. 6, are almost flat with respect to p T . This flat ratio is usually related to similar masses of proton and φ and is usually attributed to the strong radial flow and statistical hadronization under chemical/thermal equilibrium in relativistic heavyion collisions. However, data of (p +p) /φ in inelastic events in pp collisions at √ s = 13 TeV [52], solid circles in Fig. 6, show a rapid decrease with the increasing p T . This is an indication of out of thermal equilibrium in pp collisions. In QCM, p T distributions of identified hadrons are determined by p T spectra of (anti-)quarks at hadronization. (p +p) /φ ratio in QCM reflects the ratio or correlation between the third power of u quark spectrum and the square of s quark spectrum. With quark spectra in Fig. 4 which self-consistently describe data of hadronic p T spectra in Fig. 5, the calculated p/φ ratio in QCM, the solid line in Fig. 6, shows a decreasing behavior with p T and is in good agreement with the data of pp collisions [52].
Hyperons Λ, Ξ − and Ω − contain one, two and three s constituent quarks, respectively. Therefore, ratios Ξ − /Λ and Ω − /Ξ − can reflect the difference in momentum distribution between u(d) quark and s quark at hadronization. Fig. 7 shows our prediction of ratios Ξ − /Λ and Ω − /Ξ − as the function of p T in inelastic events in pp collisions at √ s = 13 TeV. We see that two ratios increase with p T and then tend to be saturate at intermediate p T ∼ 6 GeV/c, which is directly due to the difference be-
IV. RESULTS IN DIFFERENT MULTIPLICITY CLASSES
Using the preliminary data of p T spectra of proton, K * 0 and φ in different multiplicity classes [52,53], we can determine the corresponding p T spectra of constituent quarks at hadronization and predict p T spectra of other identified hadrons. Fig. 8 shows the extracted quark p T spectra (using the parameterized form Eq. (17)) at midrapidity in different multiplicity classes. Because the parameter b q of quark spectrum in high multiplicity classes tends to one, the distribution function in Eq. (17) will asymptotically tend to Boltzmann distribution in low p T range and therefore we see a thermal behavior for quark spectrum. This is related to the increasing multiple parton interactions in these event classes. In small multiplicity classes, parameter b q of quark spectrum is relatively small and the quark spectrum deviates from thermal behavior.
A. Hadronic yields and yield ratios
In Fig. 9, we show the p T -integrated yields of identified hadrons (including kaon) [54] in different multiplicity classes and compare them with the preliminary data in pp collisions at √ s = 13 TeV [52,55]. In general, results of QCM, solid lines, are in good agreement with the data (with maximum deviation about 10%).
Yield ratios of different hadrons can significantly cancel the dependence of model parameters and/or model inputs. Therefore, they are more direct test of the basic physics of the model in confronting with the experimental data. In Fig. 10, we show the yield ratios of hyperons Ω − , Ξ − and Λ to pions divided by the values in the inclusive INEL>0 events. Data of pp collisions at √ s = 7 [5] and 13 TeV [52] and those of p-Pb collisions at √ s N N = 5.02 TeV [56,57] are all presented in order to get a visible tendency with respect to multiplicity of charged particles at midrapidity. Solid lines are numerical results of QCM, which are found to be in agreement with the data. We emphasize that such strangeness-related hierarchy behavior is closely related to the strange quark content of these hyperons during their production at hadronization, which can be understood easily via an analytical relation in QCM. Taking yield formulas Eqs. (9), (11) and considering the strong and electromagnetic decays, we have where we neglect the effects of small quark numbers and adopt the strangeness suppression factor λ s = N s / N u . Because of complex decay contributions, pion yield has a complex expression [58] and here we can write N π = a π N q with coefficient a π being almost a constant. Then the double ratios in Fig. 10 have simple approximate expressions Solid lines are results of QCM and symbols are preliminary data of ALICE collaboration [52,55].
The dotted lines in Fig. 10 are results of above analytic formulas with a naively tuned strangeness suppression λ s = λ ′ s 1 + 0.165 log dN ch /dη |η|<0.5 /6.0 with λ ′ s = 0.31. They can well fit the experimental data of these double ratios as dN ch /dη |η|<0. 5 10. In small multiplicity classes dN ch /dη |η|<0. 5 6 small quark number effects are not negligible and the analytic approximation is larger than the experimental data to a certain extent. Our numerical results have included small quark number effects and are found to be closer to the data.
Yield ratio Ξ/φ is also influenced by the small quark number effects. If we neglect small quark number effects, we have and using Eq. (19) we get the ratio which slightly decreases with the increase of λ s and therefore will slightly decrease with the increase of multiplicity dN ch /dη because λ s increases with dN ch /dη . This is in contradiction with the experimental data. However, considering small quark number effects in QCM will predict the correct behavior of the ratio Ξ/φ, see the solid line in Fig. 11. The formation of Ξ − needs not only two s quarks but also a d quark, which is different from the formation of φ needing only a s and as. Therefore, in events of small multiplicity or small quark numbers, the formation of Ξ − will be suppressed to a certain extent (or forbidden occasionally) due to the need of one more light quark, in comparing with the formation of φ. We see that the calculated ratio Ξ/φ using QCM increases with sys-tem multiplicity dN ch /dη and the increased magnitude of the ratio is consistent with the experimental data of pp collisions at √ s = 13 TeV and those of p-Pb collisions at √ s N N = 5.02 TeV [52,59]. Proton and φ have similar masses but different quark contents. Yield ratio φ/p can further test the flavordominated feature of hadron production in QCM. Neglecting small quark number effects, proton yield after taking into account decay contribution of ∆ resonances has a simple expression Using Eq. (24), we get yield ratio which shows a significant dependence on the strangeness suppression factor λ s . We get N φ /N p ≈ 0.22 with λ s = 0.32 in low multiplicity classes and N φ /N p ≈ 0.28 with λ s = 0.36 in high multiplicity classes. The short dashed lines in Fig. 12 are above two values under analytical approximation. They are slightly lower than the experimental data [52,59], symbols in the figure. Small quark number effects will increase the ratio to a certain extent in terms of the suppression of proton yield. We further show the numerical results of our model including small quark number effects, the solid line, and we see a good agreement with the data. TeV, solid circles, are taken from Refs. [52,59]. The solid line is the numerical results of QCM. The short dashed lines are estimation under analytical approximation.
B. pT spectra of identified hadrons
In Fig. 13, we show the fit of data of p T spectra of proton, K * * and φ [52,53] using QCM and the prediction of other identified hadrons in different multiplicity classes in pp collisions at √ s = 13 TeV. Note that classes IV and V are combined for the K * 0 data and the same for our results. Beside directly comparing the prediction of single hadron spectrum with the future data, we emphasize that QCM can be more effectively tested by some spectrum ratios and/or scaling. The first is to test whether the constituent quark number scaling property between the p T spectrum data of Ω − and φ holds in different multiplicity classes. The second is to study the ratio Ω − /φ as the function of p T . Ω − /φ ratio in QCM is solely determined by the strange quark p T spectrum at hadronization and the ratio usually exhibits a nontrivial p T dependence, as shown in Fig. 14(a), which is a typical behavior of baryon-to-meson ratio in QCM and is absent or unapparent in the traditional fragmentation picture. We also see that the ratio Ω − /φ in higher multiplicity classes can reach higher peak values and the peak position of Ω − /φ in high multiplicity classes is also enlarged in comparing with that in low multiplicity classes. The third is to study p/φ ratio as the function of p T to clarify the p T dependence of the ratio is flavor originated or mass originated? Results of QCM are shown in Fig. 14(b) which decrease with p T and show relatively weak multiplicity dependence. 9 2 × I( ) 8 2 × II( ) 7 2 × III( ) 6 2 × IV( Solid lines are results of QCM and symbols are preliminary data of ALICE collaboration [52,53].
V. SUMMARY AND DISCUSSION
Taking advantage of available experimental data of hadronic p T spectra and yields at midrapidity, we have systematically studied the production of soft hadrons in pp collisions at √ s = 13 TeV within a framework of quark combination mechanism for hadronization. We applied a quark combination model which assumes the constituent quarks and antiquarks being the effective degrees of freedom for the parton system at hadronization and takes equal velocity combination approximation in hadron formation. We used the model to systematically calculate the p T spectra and yields of soft strange hadrons in inelastic events (INEL>0) and in different multiplicity classes.
We found several interesting results which are sensitive to the hadronization mechanism. (1) Data of p T spectra of Ω − and φ in inelastic events (INEL>0) in pp collisions at √ s = 13 TeV exhibit a constituent quark number scaling property. Data in high multiplicity classes in pp collisions at √ s=7 TeV also show this scaling property. It is the first time that such a scaling property is observed in high energy pp collisions. This is an obvious experimental signal of the quark combination mechanism at hadronization in high energy pp collisions. (2) Data of p/φ ratio in inelastic events (INEL>0) in pp collisions at √ s = 13 TeV show an obvious decrease with the increasing p T , which indicates the statistical hadronization model is not responsible for this observation. We demonstrated that data are naturally explained by the quark combination model. (3) Data of yield ratios Λ/π, Ξ − /π and Ω − /π divided by the values in inelastic events as the function of system multiplicity dN ch /dη at midrapidity show a strangeness-related hierarchy structure. We demonstrated that the hierarchy structure is closely related to the strange quark content of these hyperons during their production via the combination of strange quarks and up/down quarks.
By the quark number scaling property, the p T spectrum of strange quarks was directly extracted from data of Ω and φ. The p T spectrum of up/down quarks was extracted from data of other hadrons containing up/down constituent quarks. These extracted quark momentum distribution functions are important results which describe the property of strongly-interacting partonic system at hadronization in the language of constituent quarks.
To confirm such a new feature of hadronization dynamics in high energy pp collisions, we should carefully study all the related experimental data. We have made plenty of predictions on p T spectra and spectrum ratios of strange hadrons in pp collisions at √ s = 13 TeV to further test our model by the future experimental data. On the other hand, compared to our previous study in pp collisions at √ s = 7 TeV [21] which gives the first indication, the current study provides a stronger suggestion of quark combination hadronization in high energy pp collisions. We still need further systematical studies in pp collisions at other available LHC energies so that we can test the universality of this new hadronization feature and study its relation with the possible creation of mini-QGP in small collision systems.
ACKNOWLEDGMENTS
This work is supported by the National Natural Science Foundation of China (11575100), and by Shandong Province Natural Science Foundation(ZR2019YQ06,ZR2019MA053), and by A Project of Shandong Province Higher Educational Science and Technology Program (J18KA228). | 8,254.4 | 2018-11-02T00:00:00.000 | [
"Physics",
"Computer Science"
] |
Probing the nature of the low state in the extreme ultraluminous X-ray pulsar NGC 5907 ULX1
NGC5907 ULX1 is the most luminous ultra-luminous X-ray pulsar (ULXP) known to date, reaching luminosities in excess of 10 41 ergs − 1 . The pulsar is known for its fast spin-up during the on-state. Here, we present a long-term monitoring of the X-ray flux and the pulse period between 2003 and 2022. We find that the source was in an o ff - or low-state between mid-2017 to mid-2020. During this state, our pulse period monitoring shows that the source had spun down considerably. We interpret this spin-down as likely being due to the propeller e ff ect, whereby accretion onto the neutron star surface is inhibited. Using state-of-the-art accretion and torque models, we use the spin-up and spin-down episodes to constrain the magnetic field. For the spin-up episode, we find solutions for magnetic field strengths of either around 10 12 G or 10 13 G, however, the strong spin-down during the o ff -state seems only to be consistent with a very high magnetic field, namely, > 10 13 G. This is the first time a strong spin-down is seen during a low flux state in a ULXP. Based on the assumption that the source entered the propeller regime, this gives us the best estimate so far for the magnetic field of NGC5907 ULX1.
Introduction
A growing number of ultraluminous X-ray sources (ULXs), namely, off-nuclear X-ray binaries with apparent luminosities that exceed 10 39 erg s −1 (see Kaaret et al. 2017 for a review), are now known to be powered by highly magnetized neutron star accretors (with magnetic fields of B 10 9 G).Six such sources have been discovered to date through the detection of coherent X-ray pulsations and thus classified as ultra-luminous X-ray pulsars (ULXPs): M82 X-2 (Bachetti et al. 2014), NGC 7793 P13 (Fürst et al. 2016;Israel et al. 2017b), NGC 5907 ULX1 (Israel et al. 2017a), NGC 300 ULX1 (Carpano et al. 2018), NGC 1313 X-2 (Sathyaprakash et al. 2019) and M51 ULX7 (Rodríguez Castillo et al. 2020).A few more candidates have been found through the tentative identification of pulsations (e.g., NGC 7793 ULX-4, Quintin et al. 2021) or the possible detection of cyclotron resonant scattering lines (e.g., M51 ULX-8, Brightman et al. 2018).Additionally, the galactic source Swift J0243.6+6124reached luminosities significantly above 10 39 erg s −1 and can therefore also be classified as a ULXP (Wilson-Hodge et al. 2018;Tsygankov et al. 2018).Other similar sources include transient neutron stars with Be-star mass donors in the SMC, like SMC X-3 (Townsend et al. 2017;Liu et al. 2022) and RX J0209.6−7427(Vasilopoulos et al. 2020;Hou et al. 2022), which also reached ULX level luminosities briefly during giant outbursts.While it is currently unclear if the accretion geometry during these outbursts is similar to persistent ULXPs or not (e.g., all ULXPs currently exhibit much higher luminosities), it shows that there is a clear connection between Be-X-ray binaries and ULXPs.
such fields suppress the electron scattering cross-section (Herold 1979) and permit higher luminosities for a given accretion rate, while others argue for much lower fields and accretion rates (B ≈ 10 10 -10 11 G; e.g., Kluźniak & Lasota 2015;King et al. 2017) based on the large spin-up rates ( Ṗ ) observed.
Both scenarios, however, present some issues.The very strong magnetic field scenarios struggle to explain how accretion at luminosities below ∼ 10 40 erg s −1 is possible since (in theory) the magnetosphere would stop accretion and the source would be in a permanent propeller state (see below).The lowmagnetic field case on the other hand typically also assumes a very high beaming factor to explain the apparent extreme luminosities.This beaming would result in a very narrow funnel, which appears to be in contradiction to the observed sinusoidal pulse profiles with high pulsed fractions (Mushtukov et al. 2021).Additionally, the very high spin-up rate requires an intrinsically high accretion rate, setting up an argument against an extreme beaming factor.
It is also possible that a combination of these two explanations is present, with a very strong quadrupolar field acting close to the neutron star, whereas further away, the weaker dipolar field dominates (e.g., Israel et al. 2017a;Middleton et al. 2019a;Kong et al. 2022).This scenario has been claimed for Swift J0243.6+6124,driven by the discovery of a cyclotron resonant scattering feature (CRSF) at up to 146 keV (Kong et al. 2022).This line energy implies a filed strength in the lineforming region (which is likely to be very close to the neutron star surface) of around 1.6 × 10 13 G.Kong et al. (2022) have argued that such a strong field has to be in the multipolar component, as a dipole of this magnitude would lead to contradictions with other measurements.Nonetheless, it is currently unclear if the same scenario can explain the persistent ULXPs at even higher luminosities.
Reliably determining the magnetic fields of ULXPs has proven challenging to date.In general, the most robust method for determining field strengths in accreting neutron stars is via the study of cyclotron resonant scattering features (CRSFs; see Staubert et al. 2019 for a recent review).However, such features are challenging to detect in ULXs given their relatively low fluxes and the limited bandpass available for sensitive searches (even in the NuSTAR era).Only two potential features have been seen from the ULX population to date and these limited results paint something of a mixed picture (Brightman et al. 2018;Walton et al. 2018a;Middleton et al. 2019b).Other potential means of estimating the magnetic field strengths in ULXPs are thus clearly of interest.
One thing a number of the known ULXPs have in common is that they show strong long-term variability, sometimes including transitions to "off-states" in which their X-ray flux drops by orders of magnitude (e.g., Motch et al. 2014;Earnshaw et al. 2016;Brightman et al. 2016).One possible explanation for these events is that they represent transitions to the propeller regime, in which the magnetic field of the neutron star (temporarily) acts as a barrier to accretion, resulting in a precipitous drop in the observed X-ray flux (Illarionov & Sunyaev 1975;Tsygankov et al. 2016;Earnshaw et al. 2018).If this is correct, then they may offer an independent means to estimate the dipolar magnetic fields of these sources.However, the nature of these events is not yet clear, and may differ for different systems and events.For example, in M82 X-2 X-ray monitoring suggests that low-states are related to the ∼64 d super-orbital period seen in that system (Brightman et al. 2019), which would be somewhat challenging to reconcile with a propeller-based interpretation.Furthermore, other authors have suggested that these low-flux periods may be related to obscuration, instead of a cessation of accretion (Motch et al. 2014;Vasilopoulos et al. 2019;Fürst et al. 2021).
Among the known ULXPs NGC 5907 ULX1 is the most luminous, with an astonishing peak luminosity of L X ∼ 10 41 erg s −1 (∼500 times the Eddington luminosity for a standard neutron star assuming a distance of d = 17.1 Mpc; Israel et al. 2017a;Tully et al. 2016), making it a case of particular interest.Coordinated observations with XMM-Newton and NuS-TAR show that when the source is bright, it shows broadband spectra that are well described with a combination of a super-Eddington accretion disk that contributes thermal emission below ∼10 keV and an accretion column that dominates at higher energies (Walton et al. 2015(Walton et al. , 2018c;;Fürst et al. 2017).Studies of the short-timescale evolution of its pulse period revealed a potential orbital period of ∼5 d (Israel et al. 2017a;however, this period is not yet confirmed), and when in its ULX state the source is also known to exhibit a ∼78 d X-ray period (Walton et al. 2016), which is therefore likely to be super-orbital in nature.NGC 5907 ULX1 is also one of the ULXPs that is known to exhibit intermittent off-states (Walton et al. 2015).However, unlike the case of M82 X-2, the low-flux periods in NGC 5907 ULX1 are not related to its super-orbital period and so, they could plausibly be related to the propeller transition.
As reported by Israel et al. (2017a), the earliest measurement of pulsations from NGC 5907 ULX1 are from February 2003, with a spin period P around 1.43 s.Eleven years later, the pulsar had spun-up to P ≈ 1.14 s (Israel et al. 2017a).Assuming a constant spin-up over this period implies a rate of change of the pulse period Ṗ = −8.1 × 10 −10 s s −1 .This long-term spin-up indicates that the pulsar was not at spin equilibrium and that the high mass accretion rate also resulted in a large increase of angular momentum.Similar long-term spin-ups are observed for other ULXPs, like NGC 7793 P13 ( Ṗ ≈ −4 × 10 −11 s s −1 , Fürst et al. 2021) and NGC 300 ULX1 ( Ṗ ≈ −4 × 10 −10 s s −1 , Vasilopoulos et al. 2019).
As the spin-up is driven by transfer of angular momentum through accretion, this can also be theoretically used to measure the magnetic field strength.This fact has been discussed in detail by the seminal papers by Ghosh & Lamb (1979a) and Ghosh & Lamb (1979b), which were subsequently updated by Wang (1995).However, those calculations are based on various simplifications, in particular, they assume a geometrically thin, optically thick accretion disk as expected for sub-Eddington accretion rates (Shakura & Sunyaev 1973).In ULXPs, however, we expect the accretion disk to be geometrically thick due to their super-Eddington accretion rates (Shakura & Sunyaev 1973;Abramowicz et al. 1988).Nonetheless, using the Ghosh & Lamb (1979b) model for ULXPs NGC 7793 P13 and NGC 300 ULX1, magnetic fields around 10 12 G were implied (Fürst et al. 2016;Carpano et al. 2018), which is very well in line with the magnetic fields observed in high-mass X-ray binaries in our own galaxy (see, e.g., Staubert et al. 2019).
For NGC 5907 ULX1, much higher magnetic fields have been postulated based on the extreme luminosity, with a multipolar magnetic field component as high as a few 10 14 G (Israel et al. 2017a).More recently, using an updated description of the coupling between the magnetic field and the accretion flow intended to be more suitable for a super-Eddington disk, Gao & Li (2021) estimated a maximal field between 2-3 ×10 13 G.These authors, however, find that the observations presented by Israel et al. (2017a) are also consistent with a field around 6 × 10 11 G.
We are therefore in need of another approach to constrain the magnetic field.One possibility is to study the expected spin-down during a low flux phase, in which accretion might be inhibited due to the fast rotation of the neutron star.In this regime, the spin-down torque onto the neutron star is dominated by the magnetic field interacting with the residual accretion disk (Parfrey et al. 2016).NGC 5907 ULX1 entered a suitable low-flux state in mid-2017, which, broadly speaking, lasted until mid-2020, with brief episodes of higher flux in between.In this paper, we discuss the pulse period evolution before, during, and after this extended off-state, as observed with XMM-Newton, and show that the pulsar spun-down significantly during the off-state.
The remainder of the paper is structured as follows.In Sect.2, we describe the data used and extraction methods.In Sect. 3 we discuss the flux and period evolution and describe our pulsation search in detail.In Sect.4, we discuss our results and use the measurements to estimate the magnetic field.Section 5 provides a conclusion, along with a summary of our main results and an outlook for future measurements.
Observations and data reduction
2.1.Swift NGC 5907 ULX1 has been extensively monitored by the Neil Gehrels Swift Observatory and, in particular, the X-ray telescope (XRT; Burrows et al. 2005).The Swift XRT light curve (Figure 1) is extracted using the standard online pipeline (Evans et al. 2009, primarily using time bins of four days while the source was bright (following Walton et al. 2016).The exception to this is when NGC 5907 ULX1 drops its flux significantly.During these periods, we revert to an approximately monthly binning.The average count rates for these broader bins are determined by extracting XRT images and exposure maps integrated across them, also using the online XRT pipeline, and calculating the net count rate observed using a circular source region of radius 10 and a much larger neighboring region to estimate the background.Uncertainties are calculated following Gehrels (1986).Where the source is not detected, an upper limit at the 3σ-level is calculated using the method outlined by Kraft et al. (1991).These rates and upper limits are then corrected for the fraction of the XRT PSF that falls outside of the source extraction region.The Swift monitoring snapshots do not provide enough photons to search for or measure the pulse period.
XMM-Newton
In order to understand the nature of the strong variability seen by Swift, we executed a series of XMM-Newton observations over the last few years designed to monitor the spin period of the source, both across and after the extended low-flux period seen from ∼2017-2020.In addition, XMM-Newton also performed a series of earlier observations extending as far back as 2003.Details of these observations are given in Table 1.
For all of these XMM-Newton observations (see Table 1), the EPIC detectors (EPIC-pn, EPIC-MOS; Strüder et al. 2001;Turner et al. 2001) were operating in full-frame mode.We therefore focus on the data taken by EPIC-pn here (temporal resolution of 73.4 ms), as the EPIC-MOS detectors do not have sufficient timing capabilities to probe the ∼1 s spin period of the neutron star (temporal resolution of ∼2.6 s).Data reduction was carried out with the XMM-Newton Science Analysis System (SAS v19.1.0),largely following standard procedures.The raw observation data files were processed using EPCHAINand the cleaned event files corrected to the solar barycenter using the DE200 solar ephemeris with BARYCEN.Source light curves were then extracted on the time resolution of the EPIC-pn detector with XMMSELECT.We typically used circular source regions of radius 25-30 , depending on the brightness of NGC 5907 ULX1, although for some of the extremely faint observations, even smaller regions were occasionally used (ranging down to ∼15 ).As recommended we only considered single and double patterned events.
In all cases, the background was estimated from larger regions of blank sky on the same detector as the source region.Given the large number of XMM-Newton observations considered, it is unsurprising to notice that a broad range of background activity is seen among them; some suffer from severe flaring, some show brief, modest periods of enhanced background and some show a stable background level throughout.In order to determine whether additional background filtering is requiredand, if so, then subsequently establishing the level of background emission that is acceptable -we utilized the method outlined in Piconcelli et al. (2004).This determines the background level at which the signal-to-noise (S/N) of the source is maximized.We make this assessment for each observation individually, and maximize the S/N over the 0.3-10.0keV band.For cases where additional background filtering is required, custom good-timeintervals are generated to exclude background levels above that which gives the maximum S/N for the source.
Light curve and flux evolution
The evolution of NGC 5907 ULX1 from early 2014 to early 2022 as seen by Swift is shown in Figure 1.Before 2014, no dense monitoring of the source is available, although there are a small number of observations with XMM-Newton, Swift, and NuS-TAR prior to this.The source faded significantly in 2017 -to the extent that it became challenging to detect with Swift and only returned to full activity briefly in ∼May 2019; we note that there was also a brief re-brightening in ∼March 2018, but the source did not fully return to its "normal" ULX state.This low-flux period ended in mid 2020, and since then NGC 5907 ULX1 has been detected in almost every Swift snapshot obtained.Despite the broad recovery, comparing the behavior before 2017 and after 2020, we can see that the source is still exhibiting a much larger variability in the more recent data, with flux changes of up to a factor of ∼10 within a few months.
Previously, NGC 5907 ULX1 was also in a low flux state between 2012-2014 (see Walton et al. 2015), but its duration and exact luminosities are unclear due to the lack of monitoring.Between 2003-2012 the source seems to have largely been in a bright on-state, as seen by Swift, XMM-Newton, and Chandra (Israel et al. 2017a).
Fortuitously, during the initial decline in flux in 2017 a series of five XMM-Newton observations were taken, allowing us to obtain a precise measurement of the neutron star spin just before the low state (Table 1).The activity period in early 2019 was sufficiently bright and long-lived for us to trigger another series of ten XMM-Newton observations in order to compare the properties of NGC 5907 ULX1 in 2017 and 2019 and determine how the source has evolved across the extended low-flux period in between.Furthermore, another series of seven XMM-Newton observations of NGC 5907 ULX1 have been performed since its more persistent recovery in mid-2020, allowing for further comparisons of the recent behavior with that seen in 2019 as well as prior to 2017.In particular, we focus on timing the pulsar and tracking the evolution of its spin period in this work.3).The blue curve shows an extrapolation of the 78 d X-ray period seen from NGC 5907 ULX1 during its ULX state (Walton et al. 2016).The shaded pink areas indicate where the source was conceivably in the low state and spinning down.Bottom: Pulse period measurements, as listed in Table 1 and Israel et al. (2017a).The gray vertical dotted lines indicate times of observations with XMM-Newton.The blue dashed line shows a possible fiducial model of epochs of spin-up during bright states interspersed with spin-down during off-states.This line is only a suggestion for the evolution of the period.For details see text.
Pulsations
As a simple first step in the timing analysis we performed a Fast-Fourier Transform (FFT) on all full-band EPIC-pn light curves, extracted with a time resolution of 73.4 ms.This approach revealed significant pulsations at ∼0.9812 Hz in ObsID 0824320201 (2019-06-12).In all other observations, no significant signal was found using this basic analysis.
Based on the variation of the pulsed fraction as function of energy available in the literature (Israel et al. 2017a), we find that below 1 keV the pulsations are barely detectable.We therefore subsequently filtered all data on energies > 1 keV for the following analysis to increase the S/N.Following this additional filtering, as a next step, we performed a more in-depth pulsation search using an accelerated Fourier method, which searches a grid in the frequency, ν, and frequency-derivative ν space.We used the implementation HENaccelsearch from the HENDRICS software package (Bachetti 2015) and performed the search between ν=0.9-1.2Hz.This analysis revealed good pulsation candidates in ObsIDs 0804090301, 0804090401, 0824320201, 0823430401, and 0891801501, with a detection significance of > 3σ.
For each of these observations, the best ν-ν combinations from this search were then analyzed in more detail, with a grid search using epoch folding (Leahy et al. 1983).The data were searched in temporal space across a range of ∆P = ±0.3ms and ∆ Ṗ ± = 1 × 10 −8 s s −1 , respectively, centered on the best values from HENaccelsearch.We oversampled the search in P by a factor of 5 compared to the number of independent frequencies and used the same number of grid points also in Ṗ space.This analysis is performed with ISIS (Houck & Denicola 2000), following a similar procedure to that described in Fürst et al. (2016).Uncertainties on P and Ṗ are given as the value where χ 2 has dropped to half its maximum value, namely, an FWHM width based on the χ 2 landscape in the P -Ṗ plane.
The results of this search are given in Table 1 and shown in the bottom panel of Fig. 1.It is clear from these results that the neutron star was spinning significantly slower in 2019 compared to 2017 and again slightly slower in 2020 compared to 2019.After mid 2020, NGC 5907 ULX1 can be seen to have restarted its steady spin-up trend.
We then also performed the same epoch folding search for all ObsIDs where HENaccelsearch did not find a significant signal.The searches were centered on a period obtained from linear interpolation or extrapolation between the neighboring pulsation detections.We increased the search range to ∆P = 3 ms and ∆ Ṗ = 5 × 10 −8 s s −1 and again oversampled the P space by a factor of 5. Larger grids are computationally prohibitive for this kind of brute force method.We did not find significant pulsations in any of these observations.
The pulse periods we give here are not corrected for the orbital motion of the neutron star.We chose not to perform this Table 1.Details of the XMM-Newton observations of NGC 5907 ULX1 considered in this work.The uncertainties given for the pulse period and its derivative are statistical only and are dominated by the larger uncertainties of the orbital ephemeris (≈ 0.2 ms).The luminosity is based on the absorption corrected flux in the 0.3-10 keV band and the pulsed fraction is given for the 1-10 keV energy band.correction given the large uncertainties on the current ephemeris (Israel et al. 2017a).However, taking the values given by Israel et al. (2017a) and their respective uncertainties, we find that the maximal influence of their orbital solution on the measured spinperiod is on the order of 0.2 ms.This is smaller than the difference in spin period that we find between the two 2019 observations -and much smaller than the differences between the 2017 and 2019 observations.
Pulsed Fraction
Where pulsations have been detected, we calculate the pulsed fraction based on the pulse profile (PP) as P F = max(P P ) − min(P P ) max(P P ) + min(P P ). (1) For observations without detected pulsations, we instead calculate the upper limits on the pulsed fraction.To do so, we simulate a series of event lists with the same GTIs and average count rate as the real data but adding sinusoidal pulsations with an increasing pulsed fraction.The period assumed for these pulsations is based on the closest measured periods and is determined by linearly interpolating the evolution of the pulse period between these measurements to the time of the observation in question.
For each pulsed fraction, we simulated 50 individual event lists and performed the same search for pulsations as applied to the real data.We consider pulsations to be detected in our simulated datasets when we find a peak at the 99.9% significance level or above.Starting from 5%, the pulsed fraction is increased in steps of 0.1% up to the point where the fraction of simulations that return significant pulsations increases above 90% (i.e., the pulsations are recovered in at least 45 simulated event lists).We take this pulse fraction to be the upper limit for the observation in question (Table 1).
Magnetic field based on spin-up strength
As seen in Fig. 1, we can identify two epochs of relatively stable spin-up, the first one between 2014 and 2017 (starting around MJD 56847) and the second one after mid 2020 (starting around MJD 59053).In addition, based on the results presented in Israel et al. (2017a), the source was also spinning up on average between 2003 to 2014 (starting around MJD 52690).However, we do not know how stable the spin-up and X-ray flux was between 2003 and 2014, due to the lack of regular monitoring.
Using these three spin-up episodes, we can try to estimate the magnetic field strength based on theoretical calculations describing how the accreted matter couples to the neutron star via the magnetic field, as well as how much angular momentum is transferred by this coupling.Throughout the remainder of this paper, we assume a neutron star radius R = 10 6 cm and a neutron star mass of M = 1.4 M , unless otherwise noted.
To understand the accretion torque on the neutron star, it is important to analyze the relationship between a number of important radii at which significant changes in the geometry or the dominating physical processes occur.These are detailed below.
The corotation radius, R c , is the radius at which the Keplerian orbital speed is equal to the rotational speed of the pulsar.High accretion rates can only occur if the accreted material couples inside this radius to the magnetic field.The corotation radius is given by: Next, there is the magnetospheric radius, R M , which is the radius at which the magnetic field dominates over the ram pressure of the accreted material and within which the material has to follow the magnetic field lines.This is equivalent to the inner radius of the accretion disk, and is related to the Alfvén radius computed for spherical accretion with a factor ξ: (3) Then we have the spherization radius, R sph , the radius at which the accretion becomes locally super-Eddington and geometrically thick.Following Shakura & Sunyaev (1973), this is given by: In the above expressions, P is the pulse period in seconds, M 1.4 is the neutron star mass in units of 1.4 M , µ is the magnetic moment, G is the gravitational constant, Ṁin is the accretion rate at the inner edge of the accretion disk, Ṁtot is the total mass transfer rate, and L Edd is the Eddington luminosity.For a dipolar magnetic field, µ = BR 3 , where B is the magnetic flux density of the dipolar field.
The problem in calculating these radii lies in the fact that they depend on Ṁtot as well as Ṁin , which, in principle, can be inferred from the observed luminosity, L. However, the relation between Ṁin and the luminosity itself depends on the relative location of R sph , R M , and R C .
Within R sph , the outflows will ensure that the local Eddington luminosity is not exceeded, leading, at the same time, to the formation of a funnel that will go on to collimate the observed emission.As discussed by King (2009), for high accretion rates in the super-Eddington regime, we have: where ṁ = Ṁtot / ṀEdd , ṀEdd is accretion rate that corresponds to the Eddington limit and b is the beaming factor, which can be approximated as b ≈ 73/ ṁ2 .High accretion rates in this case mean ṁ > 8.5 (King et al. 2017), which is the case for all observations of NGC 5907 ULX1 considered here.
The estimated mass accretion rate Ṁtot from Eq. 5 can be used to calculate Ṁin , which is the relevant accretion rate for determining R M .For the same reasons as discussed for the luminosity, Ṁin is rising slower than Ṁtot inside of R sph to guarantee that the disk does not locally exceed its Eddington limit.In particular, following Shakura & Sunyaev (1973, cf Erkut et al. 2019): Finally, in order to calculate R M , we have to determine ξ, the conversion factor for the Alfvén radius calculated under the assumption of spherical symmetry.The value of ξ depends strongly on our assumptions of how the accreted material couples and interacts with the magnetic field, depending on the magnetic field geometry.Gao & Li (2021) distinguish two cases of the ratio between the azimuthal and vertical B-field strength which describe different physical regimes of the coupling between the accretion disk and the magnetosphere (see also Wang 1995).These two cases lead to different expressions of the total dimensionless torque n(ω), where ω is the so-called fastness parameter.The exact expressions of these equations can be found in Gao & Li (2021).Based on the description of the transfer of angular momentum onto a rigidly rotating neutron star, we can solve for ω.These equations are at least quadratic in ω, leading to two solutions for each case.We can then solve for the magnetic field, using the following equations (Gao & Li 2021).For R M ≥ R sph : And for R M ≤ R sph : Here, ξ can be approximated with: which is the same regardless of whether the magnetospheric radius is larger or smaller than the spherization radius.Also, R 6 is the neutron star radius in units of 10 6 cm.Given the high luminosity of NGC 5907 ULX1, we expect that the spherization radius tends to be outside of R M .However, for certain combinations of magnetic field strengths and coupling assumptions, we can also find that this may not be the case.For these potential solutions, the magnetic field is so strong that it disrupts the accretion disk before it can become locally super-Eddington.We have up to four B-field estimates in each case, however, not all estimates fulfill the assumptions (i.e., for an overly strong magnetic field, R M might be outside of R sph , making the solutions for R M ≤ R sph invalid in this case ).In Table 2, we list all possible solutions for the three spin-up epochs, that is, for each set of input values, L, P , and Ṗ .
As can be seen in Table 2, we find roughly similar magnetic field strengths for epochs 2 and 3, but a different value for epoch 1.The long baseline for estimating Ṗ in this epoch and the lack of flux monitoring can explain this difference: the average Ṗ and L are likely to be inaccurate, leading to unreliable estimates.Epochs 2 and 3 show a particularly good agreement for the high B-field solution, with a variation of only < 10% for both cases, while the difference for the low B-field solution is around ∼ 50%.Nonetheless, without further information, we cannot determine which solution describes the true nature of the system better.
If we assume that the neutron star has a B-field of 2.0 × 10 13 G (roughly the average of the high-B solutions based on epochs 2 and 3), we can also estimate the average luminosity during epoch 1, which comes out to about 7×10 39 erg cm −2 s −1 .While this value may be a bit on the low side, it could be realistic, given that NGC 5907 ULX1 did seemingly exhibit low fluxes throughout much of 2013 (Walton et al. 2015).In this case, R M > R sph , as the luminosity is very low compared to the magnetic field.
Even with this lower estimated luminosity for epoch 1, the low B-field solution does not agree with the other epochs.This discrepancy is mainly due to the fact that the B-field estimate is independent of the luminosity as long as R M < R sph , as the mass accretion rate at the inner accretion disk edge is regulated by the Eddington limit.
The off-state
In the simplest picture, an accreting neutron star will enter the propeller (or centrifugal inhibition) regime as soon as the corotation radius is inside the magnetospheric radius (R M > R c ).At the magnetospheric radius, matter couples to the magnetic field and is forced to rotate at the angular speed of the neutron star.In the case of R M > R c , this rotation is faster than the Keplerian speed and therefore the matter experiences a net outward force, all but halting direct accretion.The accretion luminosity will then drop by orders of magnitude, however, residual accretion at the magnetosphere might still be present, that is, the source does not have to appear completely off (see, e.g., Corbet 1996).
Based on this calculation, we give the critical luminosity for each epoch and case in Table 2, which is L crit = Ṁin,crit c 2 , where = 0.15, the accretion efficiency of a typical neutron star on its surface.Given that the critical luminosity depends strongly on the magnetic field, the estimates differ more strongly, for instance, by a factor of 2 for the low B-field case (again, only comparing epochs 2 and 3).
Observationally, we can also estimate the luminosity of a possible onset of the propeller effect from the Swift/XRT light curve, by finding the flux at which the source drops below the detection limit of XRT.To estimate this luminosity we transferred the Swift count-rate to flux and luminosity based on the contemporaneous XMM-Newton data.We use a conversion factor of: As can be seen on the right-hand y-axis of Fig. 1, the source drops below the detection limit of Swift around 1-2×10 39 erg s −1 .XMM-Newton detects the source around 1 × 10 39 erg s −1 during the lowest flux states, but no pulsations were found.This luminosity is therefore a good upper limit for the onset of the propeller effect and we refer to it as the propeller luminosity.That value is consistent with the fact that pulsations were only detected down to 14 × 10 39 erg s −1 in July 2017 (Ob-sID 0804090401).
While XRT is not very sensitive, we know that accretion was heavily suppressed in November 2017, due to a deep Chandra observation.This Chandra observation revealed a diffuse nebula around the ULXP and provided a stringent upper limit of L < 1.2 × 10 38 erg s −1 for the point source (Belfiore et al. 2020).We include this upper limit in Fig. 1.
The observational limit of 1 × 10 39 erg s −1 for the propeller luminosity is a little bit below the luminosity implied by the highest estimated B-field from the spin-up (Tab.2), namely, L crit = 2.2 × 10 39 erg s −1 for a magnetic field of B = 2.6 × 10 13 G (orange dotted-dashed line in Fig. 1).
It is worth stressing that the initial decline in flux seen during the dense XMM-Newton monitoring in July 2017 is not itself related to the propeller transition in our interpretation.Rather, this must be related to some other physical effect that resulted in a gradual reduction in the overall accretion rate through the disk and the disk itself is also expected to make a significant contribution in the XRT band (see, e.g., Walton et al. 2018b).For example, density waves in the accretion disk might have reduced the reduced the inner accretion rate or perhaps the mass transfer rate from the stellar companion decreased slightly.We consider this initial stage of the decline to be similar to the behavior seen in 2021 and 2022, where the source also shows significant flux variability without fully entering an off-state.However, in 2017 the flux continued to drop even further, to the point where it reached our proposed propeller luminosity, resulting in a transition to the propeller regime.This transition would naturally explain the very stringent upper limit observed in November 2017 by Chandra.However, as this observation occurred about 4 months later, we cannot make any firm statement with regards to how rapidly the actual propeller transition would have occurred with these data and, unfortunately, the XRT does not have the sensitivity to meaningfully shed light on this issue either (the source is already not detected by the XRT at our propeller luminosity).We note, however, given the rise by two orders of magnitude (or more) in flux in only four days, as seen by Walton et al. (2015) using XMM-Newton and NuSTAR , these observations (taken in 2013) may provide the most stringent constraints on this issue.
We can also follow a more detailed description of the super-Eddington accretion disks put forward by Chashkina et al. (2017Chashkina et al. ( , 2019)).Their model in particular takes advection of material in the accretion disk into account, as well as illumination of the disk by the luminosity emitted close to the compact object.
For a radiation-dominated disk, they find an updated description of the inner disk radius, which does not depend on the mass accretion rate (Eq.61 in Chashkina et al. 2017): where α is the viscosity of the disk, based on the standard Shakura & Sunyaev (1973) description and λ ≈ 4 × 10 10 M −5 1.4 .We assume α = 0.1.Here µ 30 is the magnetic moment of the neutron star in units of 10 30 G cm −3 and c is the speed of light.
Assuming that we reach the equilibrium spin-period just before entering the off-state, that is, a period of P = 0.946 s as measured in July 2017, we can equate R M = R c and then estimate a magnetic field of around 6 × 10 13 G.A similar estimate is found when following the description of King et al. (2017), who use the standard formula for the magnetospheric radius (Eq.3).
Based on our assumption that NGC 5907 ULX1 is spinning close to equilibrium and assuming that it entered the propeller state at the lowest measured luminosity where pulsations were still detected (in July 2017), we can rewrite the limit on the massaccretion rate given Eq.33 of Chashkina et al. (2019) to estimate the B-field: Again, we use the measurements of July 2017 (ObsID 0804090401), that is, P = 0.946 s and based on Eq. 5, ṁ = 31.9and we approximate ξ = 0.75 (based on Eq. 9), we find a B-field of around B ≈ 1.7 × 10 13 G.This estimate is slightly lower due to the effects of irradiation and advection in this model.
If we assume that NGC 5907 ULX1 entered a propeller state when it became undetectable by Swift, we would also expect that the source spins-down during this period.This behavior is indeed what we observe, as the spin period in 2019 is significantly slower than in 2017, just before the off-state.
However, the source rebrightened briefly in 2018, which likely means that it started accreting and spinning-up again.We do not have a pulsation measurement for this period, thus estimating the spin-down strength is not straight forward.In the bottom panel of Fig. 1, we indicate a possible spin-history of the source.We define regions (shaded pink) in which the source is off and spinning-down, thereby splitting up the data into seven time slices that alternate between spin-up and spin-down.
In this model, we assume a spin-up strength of Ṗ = −2.03× 10 −9 s s −1 between 2014-2017 and a spin-up strength of Ṗ = −1.50× 10 −9 s s −1 during all other spin-up episodes.For the spin-down, we assume a value of Ṗ = 2.25 × 10 −9 s s −1 , which is based on a spin-down estimate during the off-state in 2020.
This model is of course highly simplified and averages over long periods of time.It is possible that during the periods where the source was detected in XRT at ∼ 10 39 erg s −1 , active accretion occurred and the source was spinning-up slightly (or at least, was not spinning down further) if R M ≈ R C .However, the current data do not provide the required sensitivity and time resolution to model the spin history on shorter time-scales.
Theoretical estimates of the spin-down strength during the propeller state are difficult and depend on various assumptions of the interaction between the magnetic field and the residual surrounding matter (e.g., Davies et al. 1979;Urpin et al. 1998;Ikhsanov 2001;D'Angelo & Spruit 2010).Here we follow calculations presented by Parfrey et al. (2016), which were originally motivated by millisecond pulsars.In particular, these calculations assume that the spin-down torque is completely dominated by the torque exerted by the open field-lines and that there is no interaction between the field and the disk inside the corotation radius.Based on equation 18 of Parfrey et al. (2016) we can write an estimate for the magnetic field: where I is the moment of inertia of the neutron star (I = 2 5 M R 2 ).The parameter ζ describes how efficiently the magnetic field lines are opened by the star-disk interaction.Here, we assume maximum efficiency, namely, ζ = 1, which implies that all field lines intersecting the disk are opened.
We measured an average spin-down Ṗ = 1.19 × 10 −9 s s −1 between mid 2017 to mid 2019, that is, over the two first off-states.The spin-down is likely a bit faster, given the short rebrightening of NGC 5907 ULX1 in 2018 (see Fig. 1).Nevertheless, with this value for Ṗ , we find B down = 6.83 × 10 13 G.This magnetic field strength is slightly higher than the largest value based on the spin-up calculations, but only by a factor of about 2-3.Given the significant number of simplifications and approximations going into this estimate, it is probably not surprising that we don't find a perfect agreement.An overview of all our B-field estimates is given in Table 3.
It is interesting to note that the source already showed a significant spin-down between two XMM-Newton observations in 2017 (OBSIDs 0804090301 and 0804090401).These were taken while the source was on its decline into the off-state -and were separated by only ∼3 d.The evolution between these two observations implies Ṗ ≈ 1.4 × 10 −9 s s −1 , which is similar to what we measured between 2017-2019.We note, however that both observations are still significantly above the propeller luminosity that would correspond to a 10 13 G magnetic field, so there would appear to be an inconsistency here.Assuming an orbital period of 4.4 d, the lower limit of the period proposed by Israel et al. (2017a), we find that the observed spin-down is barely consistent with being due to the orbital motion within the uncertainties.However, the orbital period is not confirmed, so we cannot draw firm conclusions on the nature of this spin-down.
Another possibility for the observed spin-down is that the source is entering the subsonic propeller (or magnetic inhibition) regime.In this regime, R M < R C , but the matter is still too hot to enter the magnetosphere and accrete.This regime was discussed in detail by Davies et al. (1979); Davies & Pringle (1981) and Ikhsanov (2007).Following Davies & Pringle (1981) and Henrichs (1983), we can write a simple expression for the expected magnetic field given an observed spin-down rate: With the same spin-down rate assumptions as above, this would imply B SSP = 8.85 × 10 12 G.While still high, this estimate is significantly lower than the ones obtain before, as expected from the different assumptions regarding the state of the source.It is difficult to estimate the luminosity at which the subsonic propeller would start, given that it strongly depends on the wind density, temperature, and turbulence just outside the magnetosphere -which are all unknown.
The super-orbital period
Walton et al. ( 2016) discovered flux variability with a period of 78.1 ± 0.5 d in the Swift/XRT data of NGC 5907 ULX1, using data from roughly weekly monitoring observations between 2014-2016.The period showed a peak-to-peak amplitude of roughly a factor of 3 and was interpreted as a super-orbital period.As can be seen in Fig. 1, the flux just before the off-state in 2017 follows the exact same period, with a very similar amplitude and phase.This is also true for the recovery after the off-state in 2020: the XRT flux measurements align very well within the uncertainties of the period with the extrapolated profile of the 78 d period.Since 2020, NGC 5907 ULX1 shows a somewhat larger variability in its flux, with variations of at least a factor of 10.This increase in variability makes identifying the 78 d period more difficult, nonetheless, the bright states in the data still largely line up well with the peaks of the predicted profile.The Swift/XRT monitoring is still ongoing to investigate the Table 2. Implied magnetic field strength for three different epochs following the method given by Gao & Li (2021).The values for L, ṁ, and P are based on the observation at the given date (MJD).We note that for epoch 1, we do not know if the given luminosity is representative of the average luminosity during this epoch, due to the lack of X-ray flux monitoring during that time.Notes. (a) in 10 39 erg cm −2 s −1 in the 0.3-10 keV energy band (b) in ṀEdd (c) in seconds (d) in 10 −9 s s −1 (e) in 10 12 G (f) in 10 37 erg s −1 stability of this period.The current data do not allow for a independent measurement of the period, so we currently cannot say if there is a change in the period after the off-state or not.
Conclusions
We have studied the pulse period evolution of NGC 5907 ULX1 between 2003-2022.In 2017, the source entered an extended off-state, during which it dropped below the detection limit of Swift/XRT.During this off-state the secular spin-up trend reversed, and the neutron star rotational period slowed down significantly.After the source left the off-state in mid-2020, spin-up has resumed albeit at a lower rate than before.We have used different methods to estimate the magnetic field of the neutron star, either based on the spin-up or spindown strength.The main results are summarized in Table 3.In particular, we used the torque transfer model presented by Gao & Li (2021) during the spin-up.We find that the calculated field strengths agree well for the two most recent spin-up episodes in 2014-2017 and 2020-2022, in particular, for the high B-field solution.The first epoch between 2003-2014 gives very different estimates, but due to the lack of continuous flux monitoring, the estimated X-ray flux is highly uncertain.The highest estimate for the magnetic field strength in our data is ≈ 2.5 × 10 13 G, while for the low B-field solution, we find values as low as 2 × 10 12 G.While based on these numbers we cannot distinguish which magnetic field is present in reality, we note that for a ∼10 12 G field, we would expect to see a cyclotron resonant scattering feature (CRSF) around 12 keV, which so far has not been observed in the spectrum (Staubert et al. 2019;Fürst et al. 2017;Israel et al. 2017a).
We also estimate the magnetic field based on the spin-down during the off-state between 2017-2019.If we assume that the source entered the propeller regime when it dropped below the detection limit of Swift/XRT at about L =1-2×10 39 erg s −1 , we estimate a magnetic field of B ≤ 2.5 × 10 13 G.Using the update disk description of Chashkina et al. (2019), we find that this luminosity for the propeller transition implies a magnetic field B ≈ 1.7 × 10 13 G.The spin-down torque itself is difficult to estimate as it depends on the unknown interaction between the magnetic field and the residual matter surrounding it, be it the stellar wind or a residual accretion disk.We performed our calculations based on a description by Parfrey et al. (2016) and again find that the strong spin-down can only be explained with a very high magnetic field (≈ 6.8 × 10 13 G).If we assume that the source was spun down while in the subsonic propeller regime, we find a magnetic field of around 8.9 × 10 12 G.
While we cannot rule out a low magnetic field directly, circumstantial evidence points clearly toward the direction of a magnetic field of a few 10 13 G in NGC 5907 ULX1.This value is in line with previous estimates of the magnetic field (e.g., Israel et al. 2017a;Gao & Li 2021) and implies that the source is accreting at very high rates.
Fig. 1 .
Fig.1.Flux and period evolution of NGC 5907 ULX1 between 2014 to 2022.Top: Swift XRT light curve (0.3-10.0 keV).In green the upper limit for the point source luminosity as measured with Chandra(Belfiore et al. 2020) is shown, using the right y-axis.The XMM-Newton luminosities are shown as red diamonds and orange squares for observations with and without detect pulsations, respectively, also using the right-hand y-axis.The horizontal green line indicates the estimated propeller luminosity based on a magnetic field strength of B = 2.6 × 10 13 G (see Table3).The blue curve shows an extrapolation of the 78 d X-ray period seen from NGC 5907 ULX1 during its ULX state(Walton et al. 2016).The shaded pink areas indicate where the source was conceivably in the low state and spinning down.Bottom: Pulse period measurements, as listed in Table1andIsrael et al. (2017a).The gray vertical dotted lines indicate times of observations with XMM-Newton.The blue dashed line shows a possible fiducial model of epochs of spin-up during bright states interspersed with spin-down during off-states.This line is only a suggestion for the evolution of the period.For details see text.
Table 3 .
Summary of magnetic field estimations with the different methods presented in this work. | 11,247.6 | 2023-02-07T00:00:00.000 | [
"Physics"
] |
Calcium-Sensing Receptor Expression in Breast Cancer
The calcium-sensing receptor (CaSR) plays a crucial role in maintaining the balance of calcium in the body. Altered signaling through the CaSR has been linked to the development of various tumors, such as colorectal and breast tumors. This retrospective study enrolled 79 patients who underwent surgical removal of invasive breast carcinoma of no special type (NST) to explore the expression of the CaSR in breast cancer. The patients were categorized based on age, tumor size, hormone receptor status, HER2 status, Ki-67 proliferation index, tumor grade, and TNM staging. Immunohistochemistry was conducted on core needle biopsy samples to assess CaSR expression. The results revealed a positive correlation between CaSR expression and tumor size, regardless of the tumor surrogate subtype (p = 0.001). The expression of ER exhibited a negative correlation with CaSR expression (p = 0.033). In contrast, a positive correlation was observed between CaSR expression and the presence of HER2 receptors (p = 0.002). Increased CaSR expression was significantly associated with lymph node involvement and the presence of distant metastasis (p = 0.001 and p = 0.038, respectively). CaSR values were significantly higher in the patients with increased Ki-67 (p = 0.042). Collectively, higher CaSR expression in breast cancer could suggest a poor prognosis and treatment outcome regardless of the breast cancer subtype.
Introduction
The calcium-sensing receptor (CaSR) is a plasma membrane receptor that is a member of the G protein-coupled receptor (GPCR) superfamily. As a GPCR, the CaSR consists of the three primary structural elements found in this family of receptors: an extracellular domain, a seven-transmembrane domain, and an intracellular tail. It was first cloned from parathyroid cells, where its expression plays a vital role in the negative feedback loop that regulates calcium homeostasis by suppressing parathyroid hormone (PTH) secretion in hypercalcemic states [1,2]. The CaSR was shown to be expressed in a plethora of diverse tissues including skeletal, renal, cardiac, hematological, ovarian, and breast tissues, where it became apparent that it is an important regulator of varied physiological processes, including the proliferation, differentiation, and apoptosis of cells.
An alteration in the signaling pathway of the CaSR has been associated with the development of a variety of tumors including colorectal and breast tumors, where the role of the CaSR has been described as that of a tumor suppressor in the former and that of an oncogene in the latter [3,4]. Normal and neoplastic breast tissues were shown to express the CaSR [5]. In the aspect of the genetic background, only few studies found a correlation between increased breast cancer risk and single-nucleotide polymorphisms (SNPs) of the CaSR gene [6]. The interaction between the CaSR and BRCA1 was analyzed, revealing that cells containing BRCA1 mutants lacking BRCA1 expression displayed reduced CaSR expression. Additionally, the findings indicated that BRCA1 utilized the CaSR to suppress the expression of survivin, a factor that promotes cell survival. Consequently, the CaSR could partially mitigate the detrimental consequences of BRCA1 loss [7].
In addition to regulating PTH, the CaSR also regulates the secretion of parathyroid hormone-related protein (PTHrP). PTHrP is actually a growth factor that utilizes the same receptors as PTH. The CaSR is expressed in normal breast epithelial cells and is activated during lactation [8]. During lactation, the CaSR enhances calcium transport into milk and participates in the regulation of systemic calcium and bone metabolism. In breast tissue cells during lactation, the CaSR suppresses the production of PTHrP. As mentioned earlier, PTHrP is a growth factor that affects calcium homeostasis in the body. In the mother's systemic circulation, PTHrP activates a mechanism of bone resorption to increase the availability of calcium for milk production [9]. In the child's circulation, PTHrP, through mechanisms that are not yet fully understood, influences calcium accumulation in the bones [10].
Further research has also indicated the involvement of the CaSR in a wide range of processes such as cell proliferation, cell differentiation, apoptosis, hormone secretion, and gene expression. The CaSR has been found in breast cancer cell cultures, and it has been shown that the expression of this receptor is directly associated with the occurrence of bone metastases. Unlike the physiological effect of the CaSR in suppressing the secretion of PTHrP, in breast cancer cells, the CaSR acts to stimulate the production of PTHrP. The secretion of PTHrP leads to bone resorption and an increase in the systemic concentration of Ca 2+ ions. Elevated levels of calcium ions in breast cancer cells then promote the production of PTHrP, likely through a mechanism mediated by the CaSR. Increased levels of PTHrP, in turn, have osteolytic effects, releasing a new amount of calcium ions and establishing a positive feedback mechanism that further promotes massive osteolysis.
The CaSR has more recently been studied as a hypothetical predictive marker for skeletal metastases in breast carcinoma, and it was shown that in patients with advanced, metastatic breast cancer, CaSR expression was higher in those with skeletal metastases [11]. Based on recent findings, we aimed to explore the correlation between CaSR expression and different pathohistological prognostic factors of breast cancer. Furthermore, we compared CaSR expression with the value of the Ki-67 proliferation index, which serves as a marker of active cell proliferation and clearly indicates the biological aggressiveness of cancer.
Results
A total of 79 female patients with breast cancer of NST were included in this retrospective study. The mean age of the patients was 56.8 years (range: 28-79 years). Other clinical parameters and pathological findings of the enrolled patients are presented in Table 1.
A positive correlation was found between the expression of the CaSR and tumor size, regardless of the tumor type (p = 0.001) ( Table 2). The expression of ER, as a hormonedependent receptor, exhibited a negative correlation with CaSR expression (p = 0.033) ( Table 2). In contrast to ER, a positive correlation was observed in relation to the HER2 receptor (p = 0.002). Increased expression of the CaSR was significantly associated with lymph node involvement and the presence of distant metastasis (p = 0.001 and p = 0.038) ( Table 2). Differences in CaSR values in breast cancer regarding the assessment of breast cancer biological aggressiveness based on the level of the Ki-67 proliferation index were observed. CaSR values were significantly higher in the Ki-67 group with values > 20: 3.5 (2.0-4.0) compared to 1.0 (1.0-5.0); p = 0.042 (Table 3 and Figure 1). Through ROC analysis of CaSR values in breast cancer for evaluating the biological aggressiveness of breast cancer based on the level of the Ki-67 proliferation index > 20, an optimal cutoff value of CaSR > 1 was determined with the best combination of sensitivity (83.3%) and specificity (57.89%) ( Figure 2). Differences in CaSR values in breast cancer regarding the assessment of breast cancer biological aggressiveness based on the level of the Ki-67 proliferation index were observed. CaSR values were significantly higher in the Ki-67 group with values > 20: 3.5 (2.0-4.0) compared to 1.0 (1.0-5.0); p = 0.042 (Table 3 and Figure 1). Through ROC analysis of CaSR values in breast cancer for evaluating the biological aggressiveness of breast cancer based on the level of the Ki-67 proliferation index > 20, an optimal cutoff value of CaSR > 1 was determined with the best combination of sensitivity (83.3%) and specificity (57.89%) ( Figure 2).
Discussion
In our study, which was a retrospective analysis of 79 patients with NST invasive breast cancer, the CaSR was found to be a relevant marker of tumor size and aggressiveness, irrespective of the tumor surrogate subtype. Moreover, elevated CaSR expression was significantly linked to lymph node involvement and the presence of distant metastasis. A positive correlation was noticed between CaSR expression and the presence of HER2 receptors, while the patients with elevated Ki-67 exhibited significantly higher CaSR values.
In breast cancer cells, the CaSR acts as an oncogene and promotes tumor growth through mechanisms that are not yet fully understood. Studies on mice and breast cancer cell cultures have shown that inhibition of the CaSR reduced the proliferation of breast cancer cells, and in mice with CaSR inhibition, there was slower tumor growth and longer survival compared to the control group [12,13]. Although not all mechanisms by which the CaSR affects tumor growth have been clarified, our study showed a significant positive correlation between tumor size and the expression of the CaSR. Mice with inhibited CaSR in breast cancer cells exhibited slower tumor growth and longer survival compared to the control group. VanHouten's research on the expression level of the CaSR in metastatic breast cancer indicated a positive association between CaSR expression and lymph node involvement, as well as a negative association with progesterone receptor expression. However, we did not find any correlation between CaSR and progesterone receptor expression. All of the mentioned studies highlight the important role of the CaSR in the development and progression of breast cancer [14,15]. The activation of an expressed CaSR on two human breast cancer lines, MDA-MB-231 and MCF-7, led to increased production of parathyroid hormone-related protein (PTHrP). The secretion of PTHrP by neoplastic cells can activate PTH receptors in osteoblasts, thus activating a cascade of events that results in osteoclast-led osteolysis and further proliferation of cancerous cells [14,15]. PTHrP exerts its action through osteoblasts, by activating the RANK-RANKL-OPG system. In this activation loop, RANKL binds to the RANK receptor on osteoclasts and stimulates osteoclastogenesis [16,17]. PTHrP expression has been implicated as a risk factor for the development of skeletal metastases, in which it is more commonly expressed when compared to primary breast carcinomas [18,19]. The role of the CaSR in the development
Discussion
In our study, which was a retrospective analysis of 79 patients with NST invasive breast cancer, the CaSR was found to be a relevant marker of tumor size and aggressiveness, irrespective of the tumor surrogate subtype. Moreover, elevated CaSR expression was significantly linked to lymph node involvement and the presence of distant metastasis. A positive correlation was noticed between CaSR expression and the presence of HER2 receptors, while the patients with elevated Ki-67 exhibited significantly higher CaSR values.
In breast cancer cells, the CaSR acts as an oncogene and promotes tumor growth through mechanisms that are not yet fully understood. Studies on mice and breast cancer cell cultures have shown that inhibition of the CaSR reduced the proliferation of breast cancer cells, and in mice with CaSR inhibition, there was slower tumor growth and longer survival compared to the control group [12,13]. Although not all mechanisms by which the CaSR affects tumor growth have been clarified, our study showed a significant positive correlation between tumor size and the expression of the CaSR. Mice with inhibited CaSR in breast cancer cells exhibited slower tumor growth and longer survival compared to the control group. VanHouten's research on the expression level of the CaSR in metastatic breast cancer indicated a positive association between CaSR expression and lymph node involvement, as well as a negative association with progesterone receptor expression. However, we did not find any correlation between CaSR and progesterone receptor expression. All of the mentioned studies highlight the important role of the CaSR in the development and progression of breast cancer [14,15]. The activation of an expressed CaSR on two human breast cancer lines, MDA-MB-231 and MCF-7, led to increased production of parathyroid hormone-related protein (PTHrP). The secretion of PTHrP by neoplastic cells can activate PTH receptors in osteoblasts, thus activating a cascade of events that results in osteoclastled osteolysis and further proliferation of cancerous cells [14,15]. PTHrP exerts its action through osteoblasts, by activating the RANK-RANKL-OPG system. In this activation loop, RANKL binds to the RANK receptor on osteoclasts and stimulates osteoclastogenesis [16,17]. PTHrP expression has been implicated as a risk factor for the development of skeletal metastases, in which it is more commonly expressed when compared to primary breast carcinomas [18,19]. The role of the CaSR in the development of bone metastases has already been described in breast cancer cells [14,20]. Unlike in physiological conditions where the CaSR acts to reduce bone degradation in situations of increased Ca 2+ levels, in breast cancer cells, the CaSR acts to promote further bone resorption and an elevated systemic concentration of Ca 2+ ions in response to an increase in the Ca 2+ ion concentration [21]. This establishes a mechanism of positive feedback that promotes further massive osteolysis [22]. The CaSR has been linked to the development of bone grafts in research and positively correlates with their size and occurrence. In vivo studies have shown that overexpression of the CaSR in the MDA-MB-231 breast cancer cell line increases osteolytic potential by increasing osteoclastogenesis [23]. An increase in the number of osteoclasts results in increased bone resorption, which subsequently enables faster growth of tumor grafts [24]. Activation of the calcium receptor stimulates the proliferation of osteoclasts by stimulating PTHrP, which acts as a growth factor, as previously described [25,26]. Our study observed a significant, positive correlation between the expression of the CaSR and the development of distant metastases, correlating with the results of previous studies.
Considering the described characteristics of breast cancer with high CaSR expression, it is not surprising that statistical analysis confirmed a significant positive correlation between CaSR values in breast cancer and the Ki-67 proliferation index, which serves as a marker of the biological aggressiveness of breast cancer. Our study found that larger tumors with positive lymph nodes and distant metastases at the initial presentation had higher levels of the CaSR. The described characteristics, as well as Ki-67 values, indicate tumor aggressiveness. CaSR values were significantly higher in the Ki-67 group with values greater than 20 compared to patients with Ki-67 values less than 20. Within healthy breast tissue, Ki-67 can be detected in cells that do not express ER, while cells with estrogen receptors do not exhibit Ki-67 [27]. Since the expression of the CaSR significantly negatively correlates with ER expression, it is possible that ER also plays a role in the relationship with Ki-67. The results of this study correlate with findings in the literature showing that tumors with a higher malignant potential exhibit higher Ki-67 values, higher CaSR levels, and morphological characteristics associated with more malignant lesions [28,29]. Further research on larger tumors could confirm the existence of this correlation.
Approximately 70% of breast cancers are ER-positive and belong to the group of hormone-dependent tumors. The impact of estrogen in breast cancer development has already been established, with findings indicating that patients with high expression of estrogen receptor (ER) have a more favorable prognosis compared to those with ER-negative tumors, which tend to be more aggressive and prone to metastasis [30]. On the other hand, in physiological conditions, the presence of estrogen receptors is important for maintaining bone mass. In postmenopausal women and women undergoing tamoxifen therapy, which selectively acts on estrogen receptors and reduces estrogen binding to the receptor, a significant decrease in total bone mass has been observed. This reduction in bone mass is partly explained by increased osteoclast activity in the absence of estrogen. In the case of breast cancer, when the presence of the CaSR through positive feedback mechanisms leads to increased bone resorption, it has been shown that there is downregulation and decreased expression of ER receptors [31]. The mechanism by which the CaSR downregulates ER is not fully understood, but the literature indicates that high extracellular Ca 2+ levels affect ER transcriptional activity in MCF-7 breast cancer cell lines [32]. Nevertheless, it is possible that the release of Ca 2+ mediated by PTHrP through the CaSR influences ER regulation. This study supports the significant negative correlation between the presence of the CaSR and ER expression.
Prior to the invasion of cancer cells into the circulation, they need to undergo a process of epithelial-mesenchymal transition (EMT). In this transformation, in situ microcalcifications composed of calcium oxalate and hydroxyapatite play an important role [33,34]. The occurrence of hydroxyapatite, a calcium mineral, is associated with malignant lesions [35]. These studies indicate a significant role of calcium signaling in the spread of breast cancer. Through the previously described positive feedback loop with Ca 2+ , as well as the described influence of the CaSR on epithelial-mesenchymal transition, tumor spread is facilitated. These mechanisms can explain the positive correlation between the CaSR and the spread of breast cancer to lymph nodes and distant sites.
Patients
This single-center retrospective study was approved by the institutional review board, and the need for informed consent was waived. Patients who underwent surgical resection of breast cancer were enrolled in this study. Demographic, clinical, and pathological data were collected from the institutional database. Histological tumor types were classified according to the World Health Organization Histological Classification of Breast Tumors. Tumor grading was assessed according to the Elston and Ellis criteria. Only patients with invasive breast cancer of no special type (NST) were included in this study. A total of 79 patients with breast cancer of NST were selected and categorized, according to the age of the patients, size of the tumor, ER, PR, and HER2 status, Ki-67 proliferation index, histological grade of the tumor, lymphovascular invasion, lymph node status, and TNM staging of the breast cancer according to the American Joint Committee on Cancer (AJCC) 8th edition TNM system [36].
Immunohistochemistry
ER, PR, HER2, and Ki-67 statuses were determined through immunohistochemistry (IHC) analyses with streptavidin-peroxidase detection by staining formalin-fixed, paraffinembedded, 3 µm thick tissue sections representative of the tumor. The ER or PR status was positive when at least 1% of the tumor cell nuclei showed staining for ER or PR, according to the Breast Biomarker Reporting guidelines of the College of American Pathologists (CAP). The HER2 status was determined positive when the IHC staining intensity score was greater than or equal to three (circumferential membrane staining that is complete, intense, and within >10% of tumor cells) according to the CAP guideline recommendations for HER2 testing in breast cancer. The determination of a HER2/CEP17 ratio ≥ 2.0 and an average HER2 copy number ≥ 4.0 via silver in situ hybridization (SISH) is considered to indicate a positive HER2 status. Surrogate definitions based on immunohistochemical analysis of breast cancer tissue were used, and subtypes were determined based on the receptor status as luminal A like (ER+ and PR+, HER2−, Ki-67 < 20%), luminal B HER2+ like (ER+ and/or PR+, HER2+, Ki-67 > 20%), luminal B HER2 negative like (ER+ and/or PR+, HER2−, Ki-67 > 20%), HER2 positive (ER−, PR−, HER2+), and triple negative or basal like (ER−, PR−, HER2−).
CaSR IHC was performed on core needle biopsy samples by staining 3 µm thick, formalin-fixed, paraffin-embedded tissue sections representative of the tumor using an automatic immunostainer, Ventana BenchMark ULTRA, Roche Diagnostics. An anti-CaSR polyclonal antibody, PA1-934A (AffinityBioReagents, Inc., Golden, CO, USA, Thermo Scientific Inc., Rockford, IL, USA; dilution 1:200), was used as a primary antibody. Evaluation of the immunohistochemical analysis of CaSR reactivity was performed in consensus by two pathologists who were blinded to other information. Expression of the CaSR was quantified according to a 6-point scale, ranging from score 0 (negative) to score 5 (strong, uniform expression), as described in the literature [11]. The expression of the CaSR was quantified as absent expression (0), rare positive cells (1), non-uniform weak expression (2), non-uniform weak/intense expression (3), intense non-uniform expression (4), or strong uniform expression (5) (Figure 2).
Statistical Analysis
For CaSR expression, we used two main categories: CaSR positive if the score was 3-5, and CaSR negative if the score was 0, 1, or 2. Differences in continuous data between CaSR groups were compared with the Mann-Whitney U test. Spearman rho correlation coefficients were used to assess correlations between CaSR expression and other clinical variables. ROC analysis of CaSR values in breast cancer for assessing the biological aggres-siveness of breast cancer based on the level of the Ki-67 proliferation index > 20 was carried out. All p-values below 0.05 were considered significant. IBM SPSS statistical package for Windows, version 29.0 was used in all statistical procedures.
CaSR expression (positive versus negative) among groups was evaluated using Fisher's exact test (two proportions) or a chi-square test (more than two proportions). The Kruskal-Wallis test was used for the comparison of expression scores among different groups. Values of p less than 0.05 were considered statistically significant.
Conclusions
Collectively, the results of this study indicate that there is a positive relationship between CaSR expression and tumor size, irrespective of the tumor surrogate subtype. Moreover, the expression of ER demonstrates a negative correlation with CaSR expression. Conversely, a positive correlation is observed between CaSR expression and the presence of HER2 receptors. Additionally, elevated CaSR expression is significantly associated with lymph node involvement and the presence of distant metastasis. Furthermore, patients with increased Ki-67 exhibit significantly higher CaSR values. Overall, these results suggest that higher CaSR expression in breast cancer could indicate a poor prognosis and treatment outcome, regardless of the subtype of breast cancer. | 4,860.8 | 2023-07-01T00:00:00.000 | [
"Medicine",
"Biology"
] |
Solving Quantum Statistical Mechanics with Variational Autoregressive Networks and Quantum Circuits
We extend the ability of unitary quantum circuits by interfacing it with classical autoregressive neural networks. The combined model parametrizes a variational density matrix as a classical mixture of quantum pure states, where the autoregressive network generates bitstring samples as input states to the quantum circuit. We devise an efficient variational algorithm to jointly optimize the classical neural network and the quantum circuit for quantum statistical mechanics problems. One can obtain thermal observables such as the variational free energy, entropy, and specific heat. As a by product, the algorithm also gives access to low energy excitation states. We demonstrate applications to thermal properties and excitation spectra of the quantum Ising model with resources that are feasible on near-term quantum computers.
Introduction-Quantum statistical mechanics poses two sets of challenges to classical computational approaches. First of all, classical algorithms generally encounter the difficulties of diagonalzing exponentially large Hamiltonians or the sign problem originates from the quantum nature of the problem. Moreover, even on the eigenbasis one still faces intractable partition function which involves summation of exponentially large number of terms.
A straightforward way to address these difficulties is to directly realize the physical Hamiltonian on analog quantum devices and study the system at thermal equilibrium, for example, see Refs. [1,2]. On the other hand, a potentially more general approach would be to study thermal properties with a universal gate model quantum computer. However, it calls for algorithmic innovations to prepare thermal quantum states on quantum circuits given their unitary nature. There have been quantum algorithms to prepare thermal Gibbs states on quantum computers [3][4][5][6][7]. Unfortunately, these approaches may not be feasible on near-term noisy quantum computers with limited circuit depth. While variational quantum algorithm for preparing thermofield double states [8,9] requires additional quantum resources such as ancilla qubits, as well as measuring and extrapolating Renyi entropies. The quantum imaginary-time evolution [10] relies on exponentially difficult tomography on a growing number of qubits and synthesize of general multi-qubit unitaries.
Recently, Refs. [11,12] proposed practical approaches to prepare the thermal density matrix as a classical mixture of quantum pure states in the eigenbasis. In these proposals, the classical probabilistic model is either assumed to be factorized or expressed as an energy-based model [13]. However, the factorized distribution is generally a crude approximation for the Gibbs distribution in the eigenbasis. While the energy-based model still faces the problem of intractable partition function, which inhibits efficient and unbiased sampling, learning, or even evaluating the model likelihood.
Modern probabilistic generative models offer solutions to the intractable partition function problem [15] since the goals of generative modeling are exactly to represent, learn and The autoregressive network shown in blue is a classical probabilistic model that parametrizes a joint distribution in the form of Eq. (2). The model generates bit string as easy to prepare input product states to the quantum circuit. The neural network and the circuit produce a parametrized density matrix Eq. (1). (b) An implementation of the autoregressive model p φ using the masked autoencoder [14]. The neural network maps bit strings to real-valued outputs which parametrizes the conditional probabilities in Eq. (2). sample from complex high-dimensional probability distributions efficiently. Popular generative models include autoregressive models [14,16,17], variational autoencoders [18], generative adversarial networks [19], and flow-based models [20]. For the purpose of this study, the autoregressive models stand out since they support unbiased gradient estimator for discrete variables, direct sampling, and tractable likelihood at the same time. The autoregressive models have reached state-of-the-art performance in modeling realistic data and found real-world applications in synthesizing natural speech and images [16,17]. Variational optimization of the autoregressive network has been used for classical statistical physics problems [21,22]. Quantum generalization of the network was also employed for ground state of quantum many-body systems [23].
In this paper, we combine quantum circuits with autoregressive probabilistic models to solve problems in quantum statistical mechanics. The resulting model allows one to perform variational free energy over density matrices efficiently. We demonstrate applications of the approach to thermal properties and excitations of quantum lattice model.
arXiv:1912.11381v1 [quant-ph] 24 Dec 2019
By leveraging the recent advances in deep probabilistic generative models, the proposed approach extends the variational quantum eigensolver (VQE) [24] to thermal quantum states with essentially no overhead. Thus, the present algorithm is also feasible for near-term quantum computers [25][26][27][28][29][30][31]. The only practical difference to the VQE is that one needs to sample input states to the quantum circuit from a classical distribution, and one has an additional term in the objective function to account for the entropy of the input distribution.
For the classical simulation of the proposed algorithm, we use Yao.jl, an extensible and efficient framework for quantum algorithm design [32]. Yao.jl's batched quantum register and automatic differentiation via reversible computing [33] makes it an ideal tool for differentiable programming models which combine classical neural networks and quantum circuits. Our code implementation can be found at [34].
Model architecture, objective function, and optimization scheme- Figure 1(a) shows the architecture of the variational ansatz. A classical probabilistic model generates binary random variables x according to a classical distribution p φ (x), where φ are the network parameters. It is straightforward to prepare qubits to the classical product state |x . Then, a parametrized quantum circuit performs unitary transformation to the input states U θ |x , where the circuit parameters θ do not depend on the inputs. Overall, the model produces a classical mixture of quantum states. The density matrix of the ensemble reads [11,12] The density matrix is hermitian and positive definite. Moreover, given a normalized classical probability, one has Tr(ρ) = x p φ (x) = 1. The density matrix depends both on parameters φ and θ. We omit the explicit dependence in the notation to avoid cluttering in the notations.
The parametrized quantum circuit performs a unitary transformation to the diagonal density matrix x p φ (x)|x x|, whose diagonal elements are parametrized by a neural network. Using a quantum circuit for the unitary transformation [35] is more general than the classical flow model [36]. Moreover, it automatically ensures physical constraints such as the orthogonality of the eigenstates. The classical distribution p φ (x) is in general nontrivial since it is not necessarily factorized for each dimension of x [11,12]. Thus, exact representation of the classical distribution on the eigenbasis p φ (x) may also incur exponential resources. Parametrizing the probability distribution using a classical Boltzmann distribution has the problem of intractable partition functions. Hence, we employ an autoregressive network to produce the input states of the quantum circuit.
The autoregressive network models the joint probability distribution as a product of conditional probabilities where one has assumed an order of each dimension of the variables. x <i denotes the set of variables that are before x i . The autoregressive network is a special form of Bayesian network, which models conditional dependence of random variables as a directed acyclic graph shown in Fig. 1(a). The model can capture high-dimensional multimode distribution with complex correlations. One can also directly draw uncorrelated samples from the joint distribution via ancestral sampling, which follows the order of the conditional probabilities.
The practical implementation of the autoregressive networks largely benefits from rapid development of deep learning architectures such as the recurrent or convolutional neural networks [16,17] and autoencoders [14]. In this paper, we employ the masked autoencoder shown in Fig. 1(b). The autoencoder network transforms bit string x to real-valued vectorx of the same dimension, where each element satisfies 0 <x i < 1, e.g., outputs of sigmoid activation functions [13]. We mask out some connections in the autoencoder network so the connectivity ensures thatx i only depends on the binary variable x <i . Thus, each element of the output defines a conditional Bernoulli distribution In this way, the joint probability for all binary variables satisfies the autoregressive property Eq. (2). Since each conditional probability is normalized the joint distribution is normalized by construction. The probability distribution is parameterized by the network parameters φ. In a simple limit where the network is disconnected,x = sigmoid(φ) and one restores the product state ansatz considered in Refs. [11,12].
Given a Hamiltonian H at inverse temperature β, the density matrix σ = e −βH /Z plays a central role in the quantum statistical mechanics problem, where Z = Tr(e −βH ) is an intractable partition function. One can perform the variational calculation over the parametrized density matrix Eq. (1) by minimizing the objective function L = Tr(ρ ln ρ) + β Tr(ρH), which follows the Gibbs-Delbrück-Molière variational principle of quantum statistical mechanics [37]. The two terms of Eq. (3) correspond to the entropy and the expected energy of the variational density matrix respectively. The objective function is related to the quantum relative entropy S (ρ σ) = L + ln Z between the variational and the target density matrices. Since the relative entropy is nonnegative [38], one has L ≥ − ln Z, i.e. the loss function is lower bounded by the physical free energy. The equality is reached only when the variational density matrix reaches the physical one ρ = σ.
To estimate the objective function Eq. (3), one can sample a batch of input states |x from the autoregressive network, then apply the parametrized circuit and measure the following estimator The first term depends solely on the classical probabilistic model, which can be directly computed via Eq. on the samples. Note that the entropy of the autoregressive model is known exactly rather than being intractable in the energy-based models [12]. Moreover, having direct access to the entropy avoids the difficulties of extrapolating the Renyi entropies measured on the quantum processor [8,9]. The second term of Eq. (4) involves the expected energy of Hamiltonian operators H , where we denote The classical neural network and the quantum circuit perform classical and quantum average respectively. The Eq. (4) shows zeros variance property, i.e. when the variational density matrix exactly reaches to the physical one, the variance of the estimator Eq. (4) reduces to zero. This can be used as a self-verification of the variational ansatz and minimization procedure [30].
We would like to utilize the gradient information to train the hybrid model which consists of neural networks and quantum circuits efficiently. Moreover, random sampling of the autoregressive net and the quantum circuit suggest that one should employ stochastic optimization with noisy gradient estimators [13]. First, the gradient with respect to the circuit parameters reads The term inside the square bracket is a gradient of a quantum expected value. To evaluate the expectation on an actual quantum device, one can employ the parameter shift rule of [39][40][41][42]. These approaches estimate the gradient of each circuit parameter using the difference of two sets of measurement on the quantum circuit with the same architecture. While in the classical simulation of the quantum algorithm one can employ the automatic differentiation [43] to evaluate the gradient efficiently. Next, the gradients of the neural network parameters can be evaluated using the REINFORCE algorithm [45] where the term ∇ φ ln p φ (x), known as the score-function gradient in the machine learning literature [46], can be efficiently evaluated via backpropagation through the probabilistic model Eq. (2) [43]. In this regard, f (x) = ln p φ (x) + β x|U † θ HU θ |x can be viewed as the "reward signal" given the policy p φ (x) for generating bit string samples. We have introduced the baseline b = E x∼p φ f (x) which does not affect the expectation of Eq. (6) since E x∼p φ ∇ φ ln p φ (x) = 0. However, the baseline helps to reduce the variance of the gradient estimator [47].
Given the gradient information we train the autoregressive network and the quantum circuit jointly with the stochastic gradient descend method. The training procedure finds out the circuit U θ which approximately diagonalizes the density matrix and brings the negative log-likelihood − ln p φ (x) closer to the energy spectrum of the system. In principle, the same circui can diagonalize the density matrices at all temperatures if U θ fully diagonalize the Hamiltonian. However, in the practical variational calculation, this does not need to be the case to achieve good variational free energy since the temperature selects the relevant low-energy spectra which contributes mostly to the objective function.
After training, one can sample a batch of input states |x and treat them as approximations of the eigenstates of the system. Since the unitary circuit preserves orthogonality of the input states, the sampled quantum states span a low energy subspace of the Hamiltonian. For example, measuring the expected energy x U † θ HU θ x reveals the excitation energies of the system. In this respect, the objective function Eq. (3) is related to the weighted subspace-search VQE algorithm for the excited states [48]. Different from the weighted subspace-search VQE, a single physical parameter inverse temperature β controls the relative weights on the input states. Adaptive sampling of the autoregressive model provides the correct weights that spans the relevant low energy space. Due to its close connection to the original VQE algorithm [24], we denote the present approach as the β-VQE algorithm. While suppose the Hamiltonian is diagonal in the computational basis, i.e., a classical Hamiltonian, one can leave out the quantum circuit and the approach falls back to the variational autoregressive network approach of Ref. [21]. In the classical limit it is also obvious that the autoregressive ansatz is advantageous than a simple product ansatz. Numerical simulations-We demonstrate application of β-VQE to thermal properties of quantum lattice problems. Despite that most of the efforts on VQE have been devoted to quantum chemistry problems [25][26][27][28][29], quantum lattice problems are more native applications on near-term quantum computers for two reasons. First, typical problems with local interaction one does not suffer from unfavorable scaling of a large number of Hamiltonian terms. Second, quantum lattice models that only involve spins and bosons do not invoke the overhead of mapping from fermion to qubits. Therefore, it is anticipated that near-term devices should already produce valuable results for quantum lattice problems before they are impactful for quantum chemistry problems [49].
We consider the prototypical transverse field Ising model on a square lattice with open boundary conditions where Z i and X i are Pauli operators acting on the lattice sites. The model exhibits a quantum critical point at zero temperature at Γ c = 3.04438 (2). While for Γ < Γ c the model exhibits a thermal phase transition from an ferromagnetic phase to a disordered phase. All of these rich physics can be studied unbiasedly with sign-problem free quantum Monte Carlo approach, e.g. see [50]. Having abundant established knowledge makes the problem Eq. (7) an ideal benchmark problem for the β-VQE algorithm on near-term quantum computers. For the autoregressive network Eq. (2) we employ the masked autoencoder architecture [14] shown in Fig. 1(b). We arrange the qubits on the two dimensional grid following the typewriter order. The autoencoder has single hidden layer of 500 hidden neurons with rectified linear unit activation. For the variational quantum circuit, we employ the setup shown in Fig. 2 which arrange the qubits on a two dimensional grid [51] and apply general two qubit gates [44] on the neighboring sites in each layer. The general gate consists of 15 singlequbit gates and 3 CNOT gates. Each two qubit unitary is parametrized by 15 parameters in the rotational gates, which parametrizes the SU(4) group. The circuit architecture enjoys a balanced expressibility and hardware efficiency. We repeat the pattern for d times which we denote as the depth d of the variational quantum circuit. Therefore, for the 3 × 3 system considered in Fig. 2(a) with d = 5, there are 15 × 12 × 5 = 900 circuit parameters. Initially we set all the circuit parameters to be zero. We estimate the gradients Eqs. (5, 6) on batch of 1000 samples, and we the Adam algorithms [13] to optimize the parameters φ and θ jointly. Figure 3 shows that the objective function decreases towards the exact values as a function of training epochs. We measure physical observables on the trained model and compare them with exact results. For example, Figs. 4(a,b) show the expected energy H and the specific heat β 2 H 2 − H 2 computed by measuring Hamiltonian expectation and its variance. Moreover, one sees in Fig. 4(c) that the entropy E x∼p φ (x) − ln p φ (x) changes from ln 2 per site in the high temperature limit towards zero at zero temperature. While the purity of the system Tr(ρ 2 ) = E x∼p φ (x) p φ (x) shown in Fig. 4(d) increases from zero towards one as the temperature decreases. All these observables can be directly measured on an actual quantum device. Overall, one sees the autoregressive model Eq. (2) combined with variational quantum circuit yields accurate results over all temperatures. Figure 5 shows the low energy spectrum of the quantum Ising model obtained from β-VQE at β = 0.5. One sees that the approach provides low energy spectrum of the problem at various strength of the transverse field. The approach works nicely even when the first excited state becomes nearly degenerated with the ground state.
Outlooks-The present approach would be most useful for studying thermal properties of frustrated quantum systems which are prevented by the sign problem [52]. Moreover, one can further employ the qubit efficient VQE scheme [53,54], where one can study thermal properties of quantum manybody systems on a quantum computer with the number of qubits smaller than the number of degrees. In that scenario, the ansatz for the density matrix is a classical mixture of matrix product states. The variational ansatz for density matrix can also be used in quantum algorithm for non-equilibrium dynamics [55] and steady states [56].
The quantum circuit also acts as a canonical transformation that brings the density matrix to a diagonal representation. Combined with the fact that one can obtain the marginal likelihood of the leading bits in the autoregressive models, the setup may be useful for deriving effective models with less degrees of freedom similar to the classical case [57]. Therefore, one can envision using the present setup to derive effective models by using a quantum circuit for renormalization group transformation. Moreover, since the circuit approximately diagonalizes the density matrix, one also can make use of it for later purpose, such as accelerated time evolution [58].
Regarding further improvements of the algorithm, one may consider using tensor network probabilistic models [59,60] instead of the autoregressive network to represent the classical distribution in the eigenbasis. Both models have the shortcoming that the sampling approach produces the bits sequentially. To address this issue, one may consider employ the recent proposed flow models for discrete variables [61,62]. While to further improve the optimization efficiency, one may consider using the improved gradient estimator with even lower variances [63,64]. To this end, differentiable programming of neural networks and quantum circuits shares a unified computational framework. Therefore, a seamlessly integration of models and techniques will enjoy advances of both worlds. | 4,478.6 | 2019-12-24T00:00:00.000 | [
"Physics",
"Computer Science"
] |
Sulforaphane and Its Protective Role in Prostate Cancer: A Mechanistic Approach
The increasing incidence of prostate cancer worldwide has spurred research into novel therapeutics for its treatment and prevention. Sulforaphane, derived from broccoli and other members of the Brassica genus, is a phytochemical shown to have anticancer properties. Numerous studies have shown that sulforaphane prevents the development and progression of prostatic tumors. This review evaluates the most recent published reports on prevention of the progression of prostate cancer by sulforaphane in vitro, in vivo and in clinical settings. A detailed description of the proposed mechanisms of action of sulforaphane on prostatic cells is provided. Furthermore, we discuss the challenges, limitations and future prospects of using sulforaphane as a therapeutic agent in treatment of prostate cancer.
Introduction
The gland known as the prostate is located in the male reproductive system just behind the bladder and surrounds the urethra. The unrestricted proliferation of cells of the prostate gland results in prostate cancer [1]. Prostate carcinoma is one of the most prevalent forms of cancer in men globally, accounting for about 1.4 million new cases and 375,000 mortality per year worldwide [2]. Factors that increase the risk of developing prostate cancer include: age, genetics and lifestyle habits [3]. Considering how common prostate cancer is, the scientific community has intensified efforts in the search for novel therapeutics from naturally occurring compounds capable of preventing, inhibiting or reversing tumor development. Plants have been extensively screened for phytochemicals with anticancer properties; one such phytochemical is sulforaphane [4].
Sulforaphane is a small chemical compound found in cruciferous vegetables of the Brassica genus (broccoli, broccoli sprouts, kale, cabbage, Brussel sprouts and cauliflower). It is produced when the vegetable is chopped, chewed, boiled or disrupted, causing a plant enzyme myrosinase (EC 3.2.1.147) to convert a precursor molecule called glucoraphanin into sulforaphane. This process also occurs in the human body after consumption of the vegetables, as the gut microbiome contains bacteria that produce myrosinase [5] (Figure 1).
In the 1990s, sulforaphane was isolated from broccoli for the first time and shown to possess anticancer properties by researchers at Johns Hopkins School of Medicine [6,7]. Subsequently, there has been a plethora of studies reporting the antineoplastic activity of sulforaphane. Recent studies have demonstrated that sulforaphane can prevent the development of cancer cells and initiate apoptosis in a variety of cancer types, including prostate cancer [8]. This is due to the compound's ability to target multiple signaling Subsequently, there has been a plethora of studies reporting the antineoplastic activity of sulforaphane. Recent studies have demonstrated that sulforaphane can prevent the development of cancer cells and initiate apoptosis in a variety of cancer types, including prostate cancer [8]. This is due to the compound's ability to target multiple signaling pathways involved in cancer cell growth and survival [9][10][11]. Hence, the goal of this review is to present an up-to-date assessment of sulforaphane's effect on prostate cancer, and to give detailed descriptions of the various proposed mechanisms of action of sulforaphane in the prevention of the progression of prostatic tumors. Challenges and prospective future directions in the use of sulforaphane as a chemo-preventive therapeutic are also discussed.
In Vitro Studies
Prostate carcinogenesis is usually initiated by androgen receptor (AR) signaling, and proliferation of cancer cells is promoted by a preferential increase in aerobic glycolysis. A recent study evaluated the shielding properties of sulforaphane and capsaicin against the effect of androgen receptor (AR) stimulators. The researchers manipulated the levels of AR stimulators Androgen and Tip60 by overexpressing these stimulators in LNCaP cells [12]. This resulted in increases in androgen receptors and prostate-specific antigens (PSA), stimulation of AR pathway and proliferation of LNCaP cells by 80-100%. HIF-1α levels were also raised by 52%, which promoted glycolysis. However, 10 μM of sulforaphane totally suppressed the rise and increases brought on by Tip60 and androgen in LNCaP cells. The compound also effectively stopped the increase in both cytosolic and nuclear levels of HIF-1α, reducing glycolysis by 74%. The study concluded that sulforaphane has
In Vitro Studies
Prostate carcinogenesis is usually initiated by androgen receptor (AR) signaling, and proliferation of cancer cells is promoted by a preferential increase in aerobic glycolysis. A recent study evaluated the shielding properties of sulforaphane and capsaicin against the effect of androgen receptor (AR) stimulators. The researchers manipulated the levels of AR stimulators Androgen and Tip60 by overexpressing these stimulators in LNCaP cells [12]. This resulted in increases in androgen receptors and prostate-specific antigens (PSA), stimulation of AR pathway and proliferation of LNCaP cells by 80-100%. HIF-1α levels were also raised by 52%, which promoted glycolysis. However, 10 µM of sulforaphane totally suppressed the rise and increases brought on by Tip60 and androgen in LNCaP cells. The compound also effectively stopped the increase in both cytosolic and nuclear levels of HIF-1α, reducing glycolysis by 74%. The study concluded that sulforaphane has the ability to reduce Tip60 and androgen-induced proliferation and glycolysis in prostatic tumor cells [12].
The impact of sulforaphane and other isothiocyanates on prostate cancer cell lines was assessed in a study that showed that when prostatic carcinoma cell lines PC-3 and DU 145 were treated with 30 µM of sulforaphane for 72 h, it reduced the viability of the cells by 40-60% and inhibited the proliferation of the cell lines [13]. Sulforaphane also decreased the metastatic ability of the cells by up to 50%. When the cell lines were subjected to a combination therapy of sulforaphane and the chemotherapeutic drug Docetaxel (DOCE), the treatment was significantly more efficacious than sulforaphane or DOCE alone. The researchers found that sulforaphane made both cell lines more responsive to DOCE by a synergic mechanism [13]. In a different study, the effect of sulforaphane on PC-3 prostate cancer cells and HDFa normal cells was examined. The study showed that sulforaphane inhibited DNA replication and caused DNA damage in both prostate cancer and normal cell lines. DNA damage, in the form of double-stranded breaks, was more pronounced in cancer cells due to their inability to carry out proper DNA repair. This led to apoptotic elimination of cancer cells [14].
Another study reported the anti-tumor properties of sulforaphane on DU145 and PC3 prostatic tumor cell lines via a blockade of cell cycle. A total of 1-20 µM of sulforaphane inhibited the growth of DU145 and PC3 cells. A total of 10 µM of sulforaphane reduced proliferation after 48-72 h of incubation, while 20 µM completely blocked cell growth. Clones were completely destroyed when exposed to 10 µM of sulforaphane for 10 days. Sulforaphane suppressed the multiplication of these prostate cancer cell lines by prompting an arrest of the cell cycle at the S and G2/M phases. This was evident from the increased levels of the proteins responsible for the regulation of cell-cycle, such as CDK1, CDK2 and p19, and from the acetylation of histones H3 and H4 [15].
Studies on breast and prostate cancers reported an induction of apoptosis by isothiocyanates [16]. Sulforaphane and two other isothiocyanates induced apoptosis in breast cancer and prostate cancer cell models via ubiquitin proteasome system (UPS)-mediated protein degradation. The study showed that sulforaphane interacts with deubiquitinating enzymes USP14 and UCHL5 and inhibits the activity of these enzymes. When prostate cancer 22Rv1 cells were treated with 25 µM of sulforaphane for 24-36 h, there was an accumulation of poly-ubiquitinated proteins, which prompted protein degradation and eventual apoptosis of the cells. Thus, 80% of viable 22Rv1 cells were lost when exposed to sulforaphane for 24 h [16]. A similar study examined the effect of the induction of autophagy and apoptosis by sulforaphane in PC-3 and castration-resistant 22Rv1 prostate cancer cell lines. The study shows that 10-20 µM of sulforaphane significantly increased lysosome-associated membrane protein 2 (LAMP2) in the cell lines. This induction of LAMP2 levels is undesirable for the prevention of prostatic tumors. However, when LAMP2 was knockdown and the cells were treated with sulforaphane, there was a striking increase in apoptosis in both cell lines. Hence, the study recommended a combination regimen of sulforaphane and a chemical inhibitor of LAMP2 for the chemoprevention of prostate cancer [17] (Table 1).
In Vivo Studies
Prostate cancer is characterized by elevated de novo synthesis of fatty acid and overexpression of key fatty acid synthesis enzymes such as acetyl-CoA carboxylase (ACC) and fatty acid synthase (FASN). Sulforaphane has been shown to prevent prostatic tumor in Transgenic Adenocarcinoma of Mouse Prostate (TRAMP) mice by the inhibition of fatty acid synthesis [18]. Administration of 6 µmol/mouse of sulforaphane to TRAMP mice resulted in 60-70% downregulation of ACC and FASN proteins in prostate tumors and a significant reduction in plasma levels of acetyl-CoA, total free fatty acids and total phospholipids. Human prostate tumors also often exhibit the Warburg phenomenon: a marked increase in aerobic glycolysis. The same group of researchers, in a subsequent study, showed that sulforaphane suppressed glycolysis in prostate neoplastic lesions of mouse models. In the study, two murine models (TRAMP and Hi-Myc) were treated with sulforaphane. When TRAMP mice were given 6 µmol/mouse (1 mg/mouse) three times a week for 17-19 weeks, the prostate tumor expression of glycolysis-promoting enzymes such as hexokinase II (HKII), pyruvate kinase M2 (PKM2) and lactate dehydrogenase A (LDHA) was decreased by 32-45%. Similarly, when Hi-Myc mice were given 1 mg/mouse of sulforaphane three times each week for 5-10 weeks, expression of HKII, PKM2 and LDHA was significantly decreased. These results provide evidence that sulforaphane suppresses in vivo glycolysis in prostate cancer cells [18,19] (Table 1).
Sulforaphane-rich diets have been shown to reduce the incidence and severity of prostate cancer in TRAMP mice. The study design included TRAMP mice fed a 15% broccoli sprout diet and a control group fed an AIN93G diet for 28 weeks. Tissue samples were collected from these two groups of TRAMP mice at 12 and 28 weeks for examination. At week 28, the group fed with 15% broccoli sprout diet showed a slower rate of prostate tumor development, decreased cancer severity and significant reduction in invasive prostate cancer. Sixteen out of eighteen (89%) control mice had an adenocarcinoma, while just seven out of nineteen (37%) broccoli sprout-fed mice developed adenocarcinoma [20].
Clinical Studies
Investigators carried out a randomized controlled clinical trial (NCT04046653) on 98 men scheduled for prostate biopsy from July 2011 to December 2015. The men were randomly assigned into two groups. One group was given 200 µmol per day of broccoli sprout extract for 4-5 weeks, while the other group received a placebo. At the end of the course of treatment, both groups' prostate tissues were analyzed for biomarkers and HDAC activity. The study found no significant positive changes in prostate cancer biomarkers. The researchers proposed that the reason for this surprising result could be because the intervention period was short, the dosage was low or insufficient and/or due to the rapid elimination of sulforaphane before it reached the target tissue [21].
A study aiming to evaluate the impact of consuming a glucoraphanin-rich broccoli soup on gene expression in prostate glands of men with localized prostate cancer recruited 49 men diagnosed with organ-confined prostate cancer, who were placed on surveillance to monitor progression of the cancer for a randomized double-blinded controlled trial. The study design randomly divided the 49 participants into a 3-arm intervention. The control arm was given 300 mL portion of broccoli soup made from a standard, commercially available broccoli. The second arm of the study was given the same volume of broccoli soup made from an experimental broccoli genotype enhanced to provide 3 times the glucoraphanin concentration of the control, while the third arm of the intervention received the same volume of broccoli soup that had been enhanced to a glucoraphanin concentration 7 times that of the control. In all arms of the intervention, participants drank 300 mL of these soups weekly for 12 months. Gene expression in the prostate tissues from each patient was quantified by RNA sequencing before and after the dietary intervention. The result of the study indicated an increased level of gene expression consistent with a risk of carcinogenesis in the tissues of participants from the control group. These changes were mildly reduced in the second group, and totally suppressed in the third group. Thus, the study concluded that consuming a glucoraphanin-rich broccoli soup reduces the risk of the progression of prostate cancer [22].
A different study tried to explain the mechanism by which sulforaphane affects prostate tissue by showing that sulforaphane and its associated metabolites accumulate in the human prostate gland. Forty-two men scheduled for prostate biopsy were recruited for the study. The study design consists of one placebo and two active interventions: a supplement that provided glucoraphanin (BroccoMax©) and another that provided alliin from garlic. Participants were placed in one of these three groups for 4 weeks. At the end of the intervention period, sulforaphane and alliin levels in biopsy samples from the prostate's periphery and transition zone were measured. The result of the study shows that the glucoraphanin supplement significantly increased the concentration of sulforaphane and sulforaphane-N-acetyl cysteine in both zones of the prostate gland. It is plausible that this accumulation of sulforaphane in the prostate gland may lead to suppression of prostate cancer progression through a variety of mechanisms [23].
AR Signaling
Androgen receptor (AR) signaling mediates the initial stages of prostate carcinogenesis [24]. AR is a hormone receptor and transcription factor. Binding of androgen (such as testosterone) to AR activates AR. The activated AR migrates to the nucleus where it upregulates the transcription of the genes of proteins such as B-cell lymphoma-extra-large (Bcl-XL) and Hypoxia-inducible factor (HIF-1α) [25,26]. Bcl-XL is a protein that suppresses apoptosis and thus promotes the survival and expansion of prostate cancer cells [27]. HIF-1α upregulates the transcription of hexokinase (HK) and pyruvate kinase (PK); over-expression of these enzymes reprograms the metabolism of cells to solely aerobic glycolysis [28]. This is a hallmark of cancer cells known as the Warburg effect [29]. Hence, HIF-1α promotes the development and multiplication of prostate cancer cells via glycolytic metabolism.
Sulforaphane has been proposed to prevent prostate carcinogenesis by disrupting the AR signaling pathway. Sulforaphane interacts with the promoter region of the AR gene, preventing the transcription of ARs. This significantly reduces the synthesis of ARs, with no ARs present in the cell surface, androgens cannot bind to ARs to initiate the AR signaling cascade [12] (Figure 2). More so, sulforaphane has been shown to suppress HIF-1α [30]. Sulforaphane binds to HIF-1α and distorts its structure; the distorted HIF-1α loses its function and it is subsequently degraded (Figure 2).
Induction of Apoptosis
Apoptosis is a natural occurrence through which the number of cells in tissues are regulated. Cancer develops when apoptosis fails; thus, cancerous tissues often suppress apoptosis in cells [31]. Apoptosis can be induced by the activity of the ubiquitin proteasome system (UPS). The UPS involves two processes: ubiquitination and 26S proteasomemediated degradation. Improperly folded or damaged proteins are marked by ubiquitin, and then recognized and degraded by 26S proteasome [32]. The 26S proteasome has two subunits: a 20S barrel-shaped catalytic core and a 19S regulatory particle. Deubiquitinating enzymes (DUBs) are attached to the 19S regulatory particle to prevent erroneous degradation of cellular proteins. DUBs remove ubiquitin from poly-ubiquitinated protein, preventing its degradation by the proteasome [33]. Tumor tissues often over-express DUBs such as USP14 and UCHL5, thus preventing degradation of proteins and apoptosis, ultimately resulting in the survival and proliferation of cancerous tissues.
Induction of Apoptosis
Apoptosis is a natural occurrence through which the number of cells in tissues are regulated. Cancer develops when apoptosis fails; thus, cancerous tissues often suppress apoptosis in cells [31]. Apoptosis can be induced by the activity of the ubiquitin proteasome system (UPS). The UPS involves two processes: ubiquitination and 26S proteasome-mediated degradation. Improperly folded or damaged proteins are marked by ubiquitin, and then recognized and degraded by 26S proteasome [32]. The 26S proteasome has two subunits: a 20S barrel-shaped catalytic core and a 19S regulatory particle. Deubiquitinating enzymes (DUBs) are attached to the 19S regulatory particle to prevent erroneous degradation of cellular proteins. DUBs remove ubiquitin from poly-ubiquitinated protein, preventing its degradation by the proteasome [33]. Tumor tissues often over-express DUBs such as USP14 and UCHL5, thus preventing degradation of proteins and apoptosis, ultimately resulting in the survival and proliferation of cancerous tissues.
Sulforaphane has been shown to inhibit the two proteasomal cysteine DUBs, USP14 and UCHL5, in prostate cancer cells [16]. Sulforaphane interacts with USP14 and UCHL5 and suppresses their activity. This promotes increased degradation and induces apoptosis of cells of prostate tumor tissue (Figure 3). Sulforaphane has been shown to inhibit the two proteasomal cysteine DUBs, USP14 and UCHL5, in prostate cancer cells [16]. Sulforaphane interacts with USP14 and UCHL5 and suppresses their activity. This promotes increased degradation and induces apoptosis of cells of prostate tumor tissue (Figure 3).
DNA Damage
It has been reported that sulforaphane causes double-stranded DNA breaks and then prevents the repair of these breaks in human prostate cancer cells [14,34]. When DNA damage occurs via a double-stranded break, the repair process involves a complex of var-
DNA Damage
It has been reported that sulforaphane causes double-stranded DNA breaks and then prevents the repair of these breaks in human prostate cancer cells [14,34]. When DNA damage occurs via a double-stranded break, the repair process involves a complex of various nucleotide excision repair proteins. The combined action of MRN and CtIP proteins holds each pair of single DNA strands in place. RPA, BRCA and XPA work together to form a Holliday junction and a primer at the point of repair. Eventually, DNA synthesis is initiated and the damage is repaired [35].
However, in prostate cancer cells, sulforaphane inhibits XPA protein, an important protein involved in nucleotide excision repair. This disrupts and prevents the repair process; multiple double-stranded DNA breaks accumulate in the cell until the cell is destroyed by apoptosis [34] (Figure 4).
DNA Damage
It has been reported that sulforaphane causes double-stranded DNA bre prevents the repair of these breaks in human prostate cancer cells [14,34]. damage occurs via a double-stranded break, the repair process involves a com ious nucleotide excision repair proteins. The combined action of MRN and C holds each pair of single DNA strands in place. RPA, BRCA and XPA wor form a Holliday junction and a primer at the point of repair. Eventually, DN is initiated and the damage is repaired [35].
However, in prostate cancer cells, sulforaphane inhibits XPA protein, a protein involved in nucleotide excision repair. This disrupts and prevents th cess; multiple double-stranded DNA breaks accumulate in the cell until th stroyed by apoptosis [34] (Figure 4).
Upregulation of Protective Enzymes
Sulforaphane protects against prostate carcinogenesis by upregulating the transcription of carcinogen-detoxifying enzymes (Phase 2 enzymes). Sulforaphane binds to Keap1 in the cytoplasm and disrupts its orientation. This disruption releases Nuclear factor erythroid 2 (Nrf2). Nrf2 is transported to the nucleus, where it binds to antioxidant response element (ARE); this leads to the increased transcription of Phase 2 detoxifying enzymes such as quinone 1, NAD(P)H dehydrogenase and heme oxygenase 1. These enzymes enhance cellular defenses and prevent the initiation of carcinogenesis [5,36] (Figure 5).
Autophagy
Autophagy is a process by which the cell maintains homeostasis by recycling old and damaged cytoplasmic components such as proteins and organelles [37]. The process involves the formation of membranous vacuoles (autophagosomes), which engulf the cytoplasmic components marked for recycling. Autophagosomes fuses with lysosomes, and the lysosomal enzymes degrade the content of the vacuoles [38]. Researchers have shown that autophagy plays a complex role in the development and progression of cancer cells [39].
Upregulation of Protective Enzymes
Sulforaphane protects against prostate carcinogenesis by upregulating the transcription of carcinogen-detoxifying enzymes (Phase 2 enzymes). Sulforaphane binds to Keap1 in the cytoplasm and disrupts its orientation. This disruption releases Nuclear factor erythroid 2 (Nrf2). Nrf2 is transported to the nucleus, where it binds to antioxidant response element (ARE); this leads to the increased transcription of Phase 2 detoxifying enzymes such as quinone 1, NAD(P)H dehydrogenase and heme oxygenase 1. These enzymes enhance cellular defenses and prevent the initiation of carcinogenesis [5,36] ( Figure 5).
Autophagy
Autophagy is a process by which the cell maintains homeostasis by recycling old and damaged cytoplasmic components such as proteins and organelles [37]. The process involves the formation of membranous vacuoles (autophagosomes), which engulf the cytoplasmic components marked for recycling. Autophagosomes fuses with lysosomes, and the lysosomal enzymes degrade the content of the vacuoles [38]. Researchers have shown that autophagy plays a complex role in the development and progression of cancer cells [39].
In the early stages of prostate cancer, sulforaphane induces autophagy by upregulating the transcription of microtubule-associated protein 1 light chain 3 (LC3), an essential protein for the formation of autophagosomes [40]. This induction of autophagy results in cytoprotective effects on prostate cells and the suppression of further progression of In the early stages of prostate cancer, sulforaphane induces autophagy by upregulating the transcription of microtubule-associated protein 1 light chain 3 (LC3), an essential protein for the formation of autophagosomes [40]. This induction of autophagy results in cytoprotective effects on prostate cells and the suppression of further progression of prostatic tumors, as damaged and abnormal cell organelles are rapidly degraded [41]. However, in the later stages of the carcinoma, autophagy promotes the survival of cancer cells by shielding them from the effects of stress and therapy. Thus, autophagy inhibitors (such as chloroquine) have been proposed as an adjuvant to sulforaphane for advanced cases of prostate cancer [42].
Limitations and Challenges
As this review has shown, sulforaphane as a natural therapeutic in preventing the progression of human prostate carcinogenesis is very promising and advantageous. However, a number of hurdles and challenges have to be surmounted before it can be used in clinical therapy.
Dosages
There is limited knowledge on the appropriate dosage of sulforaphane that can be administered to humans in a clinical setting. For example, there is a disconnect between doses administered in animal models and allowable doses in humans. Doses ranging from 5 to 100 mg/kg of sulforaphane reduce tumors in animal models [5,19]. For a 70 kg human, this translates to 350-7000 mg/kg, which is significantly above the upper threshold of tolerable doses. As reported by a recent study, administration of low doses of sulforaphane to human subjects shows no positive result [21].
Another limitation is that the therapeutic index of sulforaphane is not known; its range of effective doses and lethal doses has not been worked out. While sulforaphane has been shown to be safe and well-tolerated at low doses, high doses can lead to toxicity and adverse effects. Therefore, it is crucial to standardize the optimal therapeutic dose of sulforaphane.
In addition, there is an anomaly when the doses of sulforaphane or glucoraphanin used in clinical trials are converted to quantities of raw vegetables to be consumed. The reported average concentration of glucoraphanin in raw broccoli is 0.38 µmol/g [43]. The doses of glucoraphanin used in most clinical trials range from 25 to 800 µmol, which translates to about 65 to 2105 g of raw broccoli. This quantity of raw broccoli cannot be realistically consumed daily.
Bioavailability
There is a dearth of studies on the bioavailability of sulforaphane due to its highly unstable nature. Sulforaphane can be quickly metabolized and eliminated from the body; it becomes difficult to study the bioavailability and pharmacokinetics of the compound. Hence, most studies utilize its precursor glucoraphanin or other forms of its metabolites.
There is also variability in the way individual human subjects metabolize glucoraphanin into sulforaphane; this results in different ranges in the bioavailability of sulforaphane from one subject to the other. Fahey et. al. [44] studied the concentrations of the bioavailable sulforaphane metabolites after administering glucoraphanin to participants. They found a wide range of variability in the ability of individuals to convert glucoraphanin to sulforaphane using their gut myrosinase. The same group of researchers then administered glucoraphanin and myrosinase simultaneously to participants, yet the variability in the conversion and bioavailability of sulforaphane persisted [45].
Supplements
As it is practically impossible to match the daily doses of sulforaphane used in clinical trials by eating raw vegetables, supplementation with glucoraphanin or sulforaphane has been recommended. There have been a large number of glucoraphanin/sulforaphane supplements from various companies flooding the market since researchers showed that sulforaphane had anticancer properties and may protect against cancer. Very few of these supplements actually contain sulforaphane and/or glucoraphanin. A few of these supplements have been tested and found not to contain any traces of sulforaphane; some were not extracts of broccoli at all. The few supplements that do contain broccoli extracts faced the challenged of shelve-life and shelve stability, as sulforaphane and myrosinase are highly unstable [46,47].
Clinical Trials
The bulk of the studies on the effect of sulforaphane on prostate cancer have been in vitro studies and in vivo studies using animal models. Very few randomized controlled clinical trials have been conducted due to the complexity of conducting one. Some complex factors involved in the design of such a study include: source of sulforaphane (precursors, extracts, supplements or whole vegetables), standardization of efficacious dosage and number and availability of human subjects [46]. Since prostate cancer is a disease of an aged male population, it becomes difficult to recruit a sufficient number of subjects for clinical trials.
As the number of human clinical trials is limited, translation and comparison of the results obtained from animal models to human subjects is not feasible. Extensive knowledge of sulforaphanes' mechanisms of action and pathways in human subjects in clinical settings is lacking. With such gaps in knowledge, little is known about the long-term effect and off-target effects of chronic use of sulforaphane.
Conclusions and Future Directions
There is no doubt that sulforaphane has some anticancer and chemoprotective properties. As a natural product it is cheaper and safer than other synthetic anticancer agents. Its potential as a therapeutic agent will continue to spur research.
Future prospects in this area of research should focus on large-scale clinical trials conducted over long periods of time. A standard dosage and the development of a therapeutic index for the use of sulforaphane in clinical settings should also be an area of intense focus.
New systems designed to increase the bioavailability of sulforaphane and improve its absorption by cancer cells are currently being developed. For example, systems such as microencapsulation, microspheres, micelles and nanoparticles will be the direction of future research [48].
Another potential research area may involve human clinical trials designed to assess sulforaphane's effect on benign prostatic hyperplasia (BPH) and lower urinary tract symptoms (LUTS). As one of sulforaphane's proposed mechanism of action involves the disruption of AR signaling pathway, and the development of BPH is related to this pathway, sulforaphane can improve symptoms in men with BPH and LUTS.
In addition, a combination-therapy approach is increasingly being proposed in the treatment of prostate cancer. Combining sulforaphane with other agents, such as chemotherapy and radiation therapy, may enhance its efficacy and should become a staple in future study design. Based on the evidence presented in this review, we conclude that sulforaphane is a promising chemopreventive phytocompound capable of preventing the progression of prostate cancer. | 6,322.4 | 2023-04-01T00:00:00.000 | [
"Biology"
] |
Regulation of Dictyostelium Protein-tyrosine Phosphatase-3 (PTP3) through Osmotic Shock and Stress Stimulation and Identification of pp130 as a PTP3 Substrate*
Osmotic shock and growth-medium stimulation ofDictyostelium cells results in rapid cell rounding, a reduction in cell volume, and a rearrangement of the cytoskeleton that leads to resistance to osmotic shock. Osmotic shock induces the activation of guanylyl cyclase, a rise in cGMP mediating the phosphorylation of myosin II, and the tyrosine phosphorylation of actin and the ∼130-kDa protein (p130). We present data suggesting that signaling pathways leading to these different responses are, at least in part, independent. We show that a variety of stresses induce the Ser/Thr phosphorylation of the protein-tyrosine phosphatase-3 (PTP3). This modification does not alter PTP3 catalytic activity but correlates with its translocation from the cytosol to subcellular structures that co-localize to endosomal vesicles. This translocation is independent of PTP3 activity. Mutation of the catalytically essential Cys to a Ser results in inactive PTP3 that forms a stable complex with tyrosine-phosphorylated p130 (pp130) in vivo and in vitro, suggesting that PTP3 has a substrate specificity for pp130. The data suggest that stresses activate several interacting signaling pathways controlled by Ser/Thr and Tyr phosphorylation, which, along with the activation of guanylyl cyclase, mediate the ability of this organism to respond to adverse changes in the external environment.
In order to survive, cells need to adapt rapidly to environmental stresses. New environmental conditions are sensed by plasma membrane-associated proteins, activating signal transduction cascades that, in turn, regulate metabolism, cytoskeletal changes, secretion, or uptake of compounds, and gene expression (1,2). Recently, research has predominantly focused on the role of MAP 1 kinase pathways in stress response regu-lation. In mammalian cells, the MAP Jun N-terminal kinases (JNKs) or stress-activated protein kinases are activated by a diverse set of stimuli, leading to the phosphorylation and activation of transcription factors (1,3,4). UV irradiation and osmotic stress are believed to induce membrane perturbation or conformational changes in membrane proteins, which promote cell-surface receptor clustering, autophosphorylation, activation, and eventually, through a MAP kinase cascade, the activation of JNK (5). p38, another MAP kinase, is also activated by osmotic shock (6), but the signaling pathway seems to be at least partially different from the JNK pathway (3,7). In yeast Saccharomyces cerevisiae, the pathway induced by hyperosmotic condition is very well elucidated. As in Escherichia coli, in which a two-component system composed of a histidine kinase (EnvZ) and a response regulator (OmpR) is involved in osmoregulation (8), hyperosmolarity in yeast is sensed by a transmembrane histidine kinase (SLN1; Ref. 9). Under normal, low osmotic conditions, SLN1 is active and autophosphorylated on histidine. The phosphate is transferred in three steps via YPD1 to an aspartic acid residue of the response regulator SSK1 (10). Phosphorylated SSK1 prevents the activation of the HOG1 MAP kinase cascade, whereas under high osmotic conditions, SLN1 is inactive, SSK1 is not phosphorylated, and the HOG1 MAP kinase cascade is active, leading to gene expression and glycerol production (10). From the above mentioned components, only a histidine kinase (DokA, see below) has been found in Dictyostelium. Other signaling pathways, activated in Dictyostelium in response to stress stimulation, are summarized below.
In this study, we examine stress responses and osmotic shock stimulation in Dictyostelium and the potential role of a protein-tyrosine phosphatase in mediating these responses. Dictyostelium grows as single-celled amoebae, but upon starvation the cells aggregate, differentiate, and form a multicellular organism (11). Within 5-10 min after single Dictyostelium cells are exposed to high osmolarity or growth medium, the cells round up and shrink to ϳ50% of their original volume (2,(12)(13)(14). Phosphorylation of myosin II on three Thr residues, the subsequent disassembly of myosin filaments, the reduced myosin-actin interaction, and the relocalization of myosin play key roles in this process and are crucial for the cells to survive hyperosmotic stress (2). Exposure of the cells to 0.3 M glucose leads to an intracellular rise in cGMP (2,14) which is required for the phosphorylation of myosin II (2). This rise in cGMP is thought to activate a cGMP-dependent protein kinase, which in turn activates a myosin II heavy chain-specific protein kinase C (15,16). Similarly, extracellular cAMP induces guanylyl cyclase activity and myosin II phosphorylation during Dictyostelium aggregation, which mediates chemotaxis (15,17). However, the kinetics of intracellular cGMP accumulation and the signal transduction pathway leading to guanylyl cyclase stimulation are different than after osmotic shock stimulation (2,18).
Cellular stresses such as ATP depletion, as well as the exposure of cells previously starved in non-nutrient buffer to growth medium, lead to rapid cell rounding and transient tyrosine phosphorylation of certain proteins, including actin and p130 (12, 19 -22). Actin tyrosine phosphorylation, as with the activation of guanylyl cyclase, correlates with cell-shape change and a rearrangement of actin filaments and is affected by the level of the protein-tyrosine phosphatase PTP1 (12,20). The tyrosine phosphorylation of p130, however, is affected in strains overexpressing wild-type or mutant forms of proteintyrosine phosphatase PTP3 but not PTP1, suggesting it might be a substrate of PTP3 and play a different role in these response pathways (22). PTP3, determined to be a nonreceptor PTP by sequence analysis, was found to be transiently phosphorylated in response to growth medium stimulation, supporting the involvement of PTP3 in p130 regulation. PTP3 is expressed in growing cells, and its expression is induced to higher levels during multicellular development (22). Recently, a putative intracellular histidine kinase (DokA) was reported, and a dokA null strain appears to be less osmo-tolerant than wild-type cells, indicating a potential role of this enzyme in osmoregulation (13). Although it is likely that MAP kinase cascades are involved in Dictyostelium osmoregulation, no members of a stress-activated MAP kinase pathway have been identified.
In this report, we further investigate the role of PTP3. We find that PTP3 becomes phosphorylated on Ser and Thr residues after osmotic shock or other stress stimulations, which also lead to the tyrosine phosphorylation of actin and p130. However, by using different concentrations of osmotically active substances, we find that the signaling pathways mediating actin and p130 tyrosine phosphorylation, as well as guanylyl cyclase activation, seem to be distinct. We demonstrate that PTP3 specifically interacts with pp130 in vivo and in vitro, suggesting that pp130 is a PTP3 substrate. Another tyrosinephosphorylated protein (pp60) was found to interact with PTP3, but the interaction seems to be different than with pp130. In addition, we show that PTP3 phosphorylation does not alter PTP3 activity but correlates with a translocation of PTP3 from the cytoplasm to subcellular structures. Our results indicate that osmotic shock and other stresses result in the activation of multiple, interactive response pathways, including Tyr and Ser/Thr phosphorylation of multiple components in the pathway that permit Dictyostelium cells to respond to environmental changes.
EXPERIMENTAL PROCEDURES
Plasmid Constructions and Culturing of Dictyostelium Strains-Most plasmids have been described previously (22). In all PTP3 overexpression constructs, the PTP3 promoter is localized upstream of the wild-type or mutated PTP3 gene, and overexpression is achieved by multiple integrations of these plasmids into the chromosome. For the fusion of the FLAG tag (DYKDDDDK) to the C terminus of PTP3, an oligonucleotide was designed that contained the antisense sequence encoding the last 8 amino acids of PTP3 and the FLAG amino acids followed by an Asp718 restriction site (5Ј-GTT TGG TAC CTT TTT TTT ACT TGT CAT CGT CAT CTT TGT AAT CAA AAC ATT TAA TTG GTG TAA CTC T-3Ј). This oligonucleotide and an outside T7 primer were used for polymerase chain reaction amplification of the last ϳ500 base pairs of the PTP3 gene, and after the confirmation of the correct sequence, the BglII-Asp718 fragments of the PTP3(C649S) and the PTP3⌬1(C649S) overexpression constructs were replaced by the BglII-Asp718 fragment containing the FLAG tag (for the restriction sites see Fig. 5A). Similar to the FLAG-tagged construct, a C-terminal Myc tag (EQKLISEEDLN) fusion was made (Myc oligonucleotide, 5Ј-GTT TGG TAC CTT TTT TTT AAT TTA AAT CTT CTT CTG AAA TTA ATT TTT GTT CAA AAC ATT TAA TTG GTG TAA CTC T-3Ј), and the BglII-Asp718 fragment of the wild-type PTP3 overexpression construct was replaced by the BglII-Asp718 fragment containing the Myc tag. The GST-PTP3(C649S) construct pMG35 is essentially the same as the previously described pMG24 except for a single base pair change that converts the catalytic cysteine to a Ser (Fig. 5A).
In most of our studies, the wild-type strain KAx-3 was used, and if not specifically indicated overexpression plasmids were transformed into this strain. The partial ptp3 null strain, lacking one of the PTP3 genes, has been described (22). For mitochondrial localization studies, PTP3 overexpression constructs were transformed into the cluA null mutant (23). Transformation and clonal selection was carried out as described earlier (22). The growth medium was HL5 supplemented with 56 mM glucose as described by Franke and Kessin (24).
Growth Medium or Osmotic Shock Stimulation, Harvesting of Total Protein Samples-Prior to growth or osmotic shock stimulation, Dictyostelium cultures were grown for 2-3 days in shaking cultures and the cells were washed, in either 12 mM sodium/potassium phosphate buffer (pH 6.1) or phosphate-free MES-PDF buffer (25). The cells were resuspended in the same buffers at 1.0 ϫ 10 7 cells/ml and shaken for 2 to 4 h at room temperature at 150 rpm. Growth medium, osmotic shock, or other stress stimulations were performed as indicated in the figure legends. At different time points, total protein samples of 5.0 ϫ 10 6 cells were taken and boiled in 80 l of SDS sample buffer. Usually, 2-3 l were loaded per lane on an 8% SDS gel.
FITC-dextran was used to label endosomal compartments as described (29). Briefly, starved cells were placed on coverslips, placed in dishes, and flooded with either sodium/potassium phosphate buffer or HL5 growth medium containing FITC-dextran (2 mg/ml; Sigma) for 30 min.
Phosphoamino Acid Analysis-KAx-3 cells overexpressing PTP3⌬1-(C649S) were starved for 2-4 h in MES-PDF buffer in the presence of 1 mCi/ml [ 32 P]orthophosphate (35 mCi/70 l; ICN Biomedicals, Costa Mesa, CA). After starvation 2.0 ϫ 10 7 cells were withdrawn for the first IP, and sorbitol was added to the remaining cells at a final concentration of 200 mM. After an additional 15 min of shaking, 2.0 ϫ 10 7 cells were taken for the second IP. The anti-PTP3 IPs were boiled in SDS sample buffer and loaded on a preparative SDS gel (10% polyacrylamide gel; 14 ϫ 16 cm 2 ). After the gel run, the proteins were transferred to an Immobilon-P membrane (Millipore) for 1 h as described previously (22). The membrane was exposed to a Kodak XAR film to detect the 32 P incorporation and subjected to Western blot analysis by using the anti-PTP3 antibody. The hot spots were cut out, and after two washing steps in 100% methanol and H 2 O, the pieces of membrane were submerged in "constant boiling HCl" and incubated at 110°C for 1 h. Afterward, the hydrolysates were lyophilized and dissolved in H 2 O containing markers for Ser(P), Thr(P), and Tyr(P). The phosphoamino acids were separated by two-dimensional electrophoresis (pH 1.9 and pH 3.5) as described previously (30).
Guanylyl Cyclase Assays-Wild-type cells were starved in 12 mM sodium/potassium phosphate buffer as described above. After the addition of the osmotic active solution, the cells were kept in shaking culture. At the indicated time points, aliquots of 2.0 ϫ 10 6 cells (usually 100 l) were withdrawn (Fig. 4). The aliquots were diluted in 100 l (1 volume) of 3.5% perchloric acid and incubated on ice for 30 -60 min with periodic vigorous shaking. The solution was neutralized by the addition of 45 l of 50% saturated KHCO 3 and incubated for another 60 min on ice with occasional vigorous shaking. After a final spin for 10 min at 4°C, 100 l of the supernatant was analyzed with the cyclic GMP 3 H assay system (Amersham Pharmacia Biotech).
GST Fusion Protein Isolation and Adsorption of Cell Lysates-The isolation of GST fusion proteins from E. coli strain BL21(DE3) was done as previously reported (22) except that the proteins were not eluted from the glutathione-Sepharose beads after the washing steps. The in vitro adsorption of Dictyostelium proteins was performed as follows.
After starvation and 15 min of growth medium incubation, wild-type cells were lysed in lysis buffer (1ϫ PBS (pH 7.4), 50 mM NaF, 1% Nonidet P-40, 2 mM EDTA (pH 7.2), 1 mM sodium pyrophosphate, 1.6 g/ml leupeptin, 4 g/ml aprotinin). Sodium orthovanadate (Na 3 VO 4 ) was only added when indicated. After a cell lysis on ice for 5 min and a centrifugation at 4°C for 10 min, the lysate of 2.0 ϫ 10 7 cells in 1.1 ml of lysis buffer was added to ϳ80 l of glutathione-Sepharose beads carrying the GST fusion proteins. Following an incubation at 4°C with gentle rocking for 1 h, the beads were washed in lysis buffer and 1ϫ PBS (pH 7.4) and, only if indicated, in 0.5 M NaCl. The proteins were finally eluted from the beads by boiling in SDS sample buffer.
PTP3 Is Phosphorylated in Response to
Stress-When Dictyostelium cells were starved for 4 h in non-nutrient buffer and resuspended in growth medium, PTP3 became transiently phosphorylated. This modification was evident by anti-PTP3 Western blot analysis since it led to a slower migrating form of PTP3 on an SDS gel (22). We were interested in examining other conditions that might induce PTP3 phosphorylation. For this purpose, cells overexpressing an inactive form of PTP3 with an internal deletion of 116 amino acids (PTP3⌬1(C649S) (22)) were used. The truncated version of PTP3 was used instead of the full-length protein of 989 amino acids because it gave a higher level of expression in Dictyostelium and a greater mobility shift on SDS gels. Thus, PTP3⌬1(C649S) presumably contains the critical phosphorylation site(s) for the shift. Osmotically active small molecules, such as 0.3 M glucose, 0.2 M sorbitol, or 0.4 M sodium chloride, induced a mobility shift of PTP3⌬1(C649S) on an SDS gel (Fig. 1A). Within the resolution of this assay, the shift was identical for the three active compounds at these concentrations and is similar to observations when starved cells are shifted to growth medium. In addition, stresses such as ATP depletion, heavy metal ions, or heat shock induce tyrosine phosphorylation of actin and p130 (20), as previously shown for growth medium addition to starved cells (12). Since pp130 is a potential PTP3 substrate (22), we tested whether the exposure of cells to 1 mM sodium azide (to deplete ATP), 100 M cadmium chloride, or a heat shock at 33°C induced a mobility shift of PTP3 on an SDS gel. All of the stresses led to PTP3⌬1(C649S) phosphorylation (Fig. 1B). We observed mobility shifts to those shown in Fig. 1B when cells were taken from growth medium and exposed to the stresses mentioned above (data not shown). Our data suggest a possible general role for PTP3 in osmo-and stress regulation.
To determine the amino acids that were phosphorylated on PTP3, cells overexpressing PTP3⌬1(C649S) were labeled in vivo with [ 32 P]orthophosphate. In vivo 32 PO 4 -labeled cell lysates were made from cells starved for 4 h and stimulated or not stimulated with 0.2 M sorbitol. PTP3⌬1(C649S) was immunoprecipitated with anti-PTP3 antibodies, and the IPs were separated on an SDS gel and blotted onto a membrane. The membrane was exposed to a film and also subjected to Western blot analysis using anti-PTP3 antibodies (Fig. 2B). The Western blot confirmed that equal amounts of PTP3⌬1(C649S) were immunoprecipitated in the samples and that the PTP3⌬1-(C649S) exhibited a mobility shift after sorbitol stimulation. The autoradiogram indicated that PTP3⌬1(C649S) was phosphorylated before and after sorbitol treatment and the mobility shift correlated with an increase in the level of phosphorylation. Interestingly, both bands of the PTP3⌬1(C649S) doublet were labeled in starved cells and the sorbitol stimulation led to a very broad, fuzzy series of bands. The labeled PTP3⌬1(C649S) proteins were excised from the membrane and examined by phosphoamino acid analysis (30). In starved, unstimulated cells, only Ser(P) was detected, whereas after sorbitol induction the amount of label in the Ser(P) increased, and some Thr(P) was also detected (Fig. 2C). Since the 116-amino acid region that was deleted in PTP3⌬1(C649S) does not contain any tyrosines and since neither anti-Tyr(P) Western analysis nor in vivo 32 PO 4 -labeling detected any PTP3 tyrosine phosphorylation (22), we conclude that PTP3 is phosphorylated exclusively on serines and threonines.
Tyrosine Phosphorylation of Actin and p130 Is Induced at Different Concentrations of Osmotic Active Substances-When Dictyostelium cells were starved for 2-4 h in non-nutrient buffer and then incubated with growth medium, we observed several distinct changes in the tyrosine phosphorylation pattern of certain proteins (Fig. 3A) (12,22). p130 was fully phosphorylated within 5 min, whereas actin phosphorylation was first detected at 10 min and was maximal at 25 min after stimulation. When the cells were shifted back to low osmotic phosphate buffer, both proteins became dephosphorylated ( stimulated cells (Fig. 3C) is visible. Since this Tyr(P) protein never showed any interaction with PTP3 (in GST-PTP3(C649S) interaction assays (Fig. 5B) or co-immunoprecipitation assays with PTP3(C649S) (Fig. 6A), data not shown), it presumably is a protein other than pp130, or it is pp130 phosphorylated on another tyrosine that is not recognized by PTP3 (see Fig. 3B). The tyrosine phosphorylation of actin was regulated differently than that of p130; 0.10 M sorbitol produced only a low level of actin tyrosine phosphorylation (data not shown); intermediate osmotic concentrations (0.15 M (Fig. 3B) and 0.20 M) led to strong actin phosphorylation, and high osmolarity (0.30 M and above) had only a minor effect (Table I; Fig. 3C). Analysis of osmotically active substances showed that ionic and non-ionic molecules had equal responses with respect to differential p130 and actin tyrosine phosphorylation and were dependent on the osmotic concentration (Table I). As the osmolarity response curves of actin and p130 tyrosine phosphorylation are different, we suggest the responses may be regulated, at least in part, by different signaling pathways. Table I; see Refs. 2, 12, and 14). However, despite these similar slow activation kinetics, actin phosphorylation was maximal at osmolarities between 0.15 and 0.20 M (Table I). For guanylyl cyclase activation, maximal stimulation was observed at osmolarities of Ն0.30 M (Fig. 4A; Table I; see Ref. 14). Stimulation with 0.20 M glucose or growth medium produced only a small increase in cGMP, whereas stimulation with 0.20 M sorbitol had little effect (Fig. 4, A and B). These data suggest that a distinct signaling pathway is responsible for the strong guanylyl cyclase activation. Overexpression of PTP3(WT) or the deletion of one of the two chromosomal PTP3 genes in Dictyostelium did not affect guanylyl cyclase activation (data not shown). 0.20 and 0.15 M sorbitol stimulation led to cell rounding, with kinetics similar to growth medium stimulation. 0.10 M sorbitol produced cell rounding, but the initiation of the rounding was delayed by ϳ5-10 min (data not shown).
Specific Interaction of Tyrosine-phosphorylated p130 with a Catalytically Inactive Form of PTP3 in Vitro-Since our preliminary data suggested that pp130 might be a PTP3 substrate (22), we further investigated the potential interaction between the two proteins. For this purpose, two nearly identical ϳ100-kDa fusion proteins were designed in which the N-terminal 242 amino acids of PTP3 were replaced by GST. One protein had an active catalytic site (GST-PTP3(WT), pMG24; see Ref. 22), whereas in the other protein, a Ser was substituted for the Cys essential for catalysis (GST-PTP3(C649S), pMG35; Fig. 5A; see Ref. 31), resulting in a catalytically inactive enzyme. The catalytic Cys is localized in a highly conserved region of the ϳ230amino acid catalytic domain within the signature motif characteristic for PTPs, HCXXGXXRS(T) (31,32). These conserved amino acids bind the tyrosine phosphate, and in the initial step of the catalysis, the cysteine thiolate acts as a nucleophile yielding a covalent thiol phosphate intermediate (33). The Cysto-Ser mutation still allows substrate recognition and binding, but the inability to hydrolyze the phosphate is reported to give a prolonged and more stable interaction with the substrate (34).
The two GST-PTP3 fusion proteins and the GST protein alone were expressed in E. coli and isolated using glutathione- Anti-PTP3 IPs of in vivo labeled cell lysates taken before and after 0.2 M sorbitol addition were separated on an SDS gel. The proteins were transferred to a membrane and exposed to a film ( 32 PO 4 ). To verify the amounts of immunoprecipitated PTP3, the same membrane was subjected to anti-PTP3 Western blot hybridization (Western blot). The two arrows point to the PTP3 doublet visible before stimulation. C, phosphoamino acid analysis of PTP3 after starvation and after sorbitol stimulation. The origins where the samples were applied to the thin layer plate are indicated. The first electrophoresis was done at pH 1.9 (upward) and the second at pH 3.5 (to the left). The markers were visualized by ninhydrin staining, and the labeled phosphoamino acids were identified after exposure to a film. part. hydrol., partial hydrolysis.
Sepharose beads. As expected, GST-PTP3(WT) dephosphorylated p-nitrophenyl phosphate and a tyrosine-phosphorylated peptide; GST-PTP3(C649S) had no detectable activity toward these substrates (see Ref. 22; data not shown). To identify tyrosine-phosphorylated Dictyostelium proteins that interact with PTP3, wild-type Dictyostelium cells were lysed after star-vation in non-nutrient buffer or after a subsequent stimulation with growth medium, and the lysates were incubated with the GST fusion proteins coupled to glutathione-Sepharose beads. After washing the resin, the retained proteins were eluted with SDS sample buffer, separated by polyacrylamide gel electrophoresis, and blotted onto a membrane. Anti-Tyr(P) Western blot analysis revealed that one tyrosine-phosphorylated 130-kDa protein bound very specifically to GST-PTP3(C649S). Because this protein had the same mobility as pp130 and was only detectable after growth medium stimulation (Fig. 5B), it is very likely that the protein is pp130. The active GST-PTP3(WT) did not bind stably to pp130, presumably because it dephosphorylated and released this substrate. From these results, we can conclude that GST-PTP3(C649S) interacts with the tyrosinephosphorylated p130 specifically through the PTP3 catalytic domain. This interaction was quite strong, since the treatment of the adsorbed beads with 0.5 M NaCl did not decrease pp130 binding (Fig. 5C). In addition to pp130, two other bands were detected in the anti-Tyr(P) Western blots. The band at ϳ110 kDa corresponded to the very abundant GST-PTP3 protein that was bound to the glutathione-Sepharose (Fig. 5C) and results from a very weak binding of the antibody to this highly abundant protein on the blot. The band at ϳ60 kDa (pp60) is another tyrosine-phosphorylated protein that was present in lysates before and after medium stimulation and was not dephosphorylated by GST-PTP3(WT) (Fig. 5B). Since glutathione-Sepharose beads carrying GST alone did not bind pp60 (Fig. 5B), the interaction of pp60 with PTP3 is specific but most likely not mediated through the catalytic active site. Other , and subsequently, every 5 min, another total protein sample was taken. After 25 min, the cells were washed and resuspended in sodium/potassium phosphate buffer, and again, samples were taken every 5 min for 25 min. C, essentially the same experiments were performed as in A and B, but only two protein samples were taken, one after starvation (unstimul.) and the other 35 min after the different osmotic stimulations (as indicated). Note that 35 min after stimulation with growth medium or 0.15 M sorbitol, high levels of pp130 and actin phosphorylation were found (data not shown). The results shown in C were confirmed with full time courses as presented in A and B. In some gels, the ϳ130-kDa band migrates as two distinct bands as seen in B. B, the ϳ130-kDa phosphotyrosine band is seen as two bands, a faster mobility, lighter band observed in unstimulated cells that disappears with a stronger, slower mobility band (pp130) appearing within 5 min. After removal of the sorbitol, the slower mobility band disappears and the faster mobility band reappears. Ϫ Ϫ ND a Cell lysates taken 25 min after stimulation were analyzed by anti-Tyr(P) Western blot and Tyr(P)-levels of actin and p130 were compared. b cGMP levels of cell lysates taken 10 min after stimulation were compared. Ϫ not visible; (ϩ) very weak response; ϩ weak response; ϩϩ strong response; ϩϩϩ very strong response; ND, not determined.
c The calculated osmolarity of HL5 is ϳ0.10 M.
strongly tyrosine-phosphorylated proteins, among them actin and a protein of ϳ200 kDa, did not interact with the GST-PTP3 fusion proteins. Recent structural data for Yersinia PTP Yop51 indicates that sodium orthovanadate inhibits PTPs through a covalent bond between vanadate and the active site Cys (35). One mM vanadate did not inhibit the interaction between GST-PTP3(C649S) and pp130, presumably because the active site cysteine thiolate was absent. In fact, 1 mM vanadate increased the amount of bound pp130 to the GST-PTP3(C649S) resin, possibly because it inhibited endogenous PTP activities present in the cell lysate. A higher concentration (10 mM) of vanadate did prevent the interaction GST-PTP3(C649S) with pp130 (data not shown), as was also observed for the interaction of PTP-PEST(C231S) with its substrate p130 cas (36).
Specific Interaction of Tyrosine-phosphorylated pp130 with Asp718 (A). B and C, specific binding of pp130 to inactive GST-PTP3(C649S). Lysates of wild-type cells harvested after starvation (unstimul.) and subsequent growth medium addition (stimul.) were incubated with glutathione-Sepharose beads carrying active GST-PTP3(WT), inactive GST-PTP3(C-649S), or GST alone. To eliminate the unbound proteins, the beads were washed in lysis buffer and 1ϫ PBS and, only when indicated, in 0.5 M NaCl (C). The adsorbed tyrosine-phosphorylated proteins were detected by anti-Tyr(P) Western blots. As a control, the GST-PTP3(C649S) protein was also incubated in lysis buffer alone, without any Dictyostelium cell lysate (C, no extract).
PTP3(C649S) in Vivo-
To determine whether the observed in vitro interaction of PTP3 with pp130 is biologically relevant, we tried to co-immunoprecipitate these two proteins from Dictyostelium cell lysates. For this purpose, the FLAG tag (DYKD-DDDK) was fused in-frame at the C terminus to full-length PTP3(C649S) or the truncated version PTP3⌬1(C649S). Dictyostelium cells expressing the FLAG-tagged proteins were lysed before and after medium stimulation, and the lysates were precipitated with an anti-FLAG antibody. The IPs were first analyzed by an anti-Tyr(P) Western blot (Fig. 6A), and the filter was stripped and probed with an anti-PTP3 antibody (Fig. 6B). After medium stimulation, the full-length and truncated forms of PTP3(C649S) co-immunoprecipitated pp130 (Fig. 6A). No pp130 was immunoprecipitated in the wild-type control strain in which no FLAG-tagged protein was expressed (Fig. 6A). Since full-length PTP3 and pp130 migrated similarly on this SDS gel and since the tyrosine in the sequence of the FLAG tag could potentially be phosphorylated, we tested whether the tyrosine-phosphorylated band was not the FLAGtagged PTP3. As seen in Fig. 6A, the truncated PTP3⌬1(C649S), which migrates more rapidly than full-length PTP3 and pp130 (Fig. 6B), was not tyrosine-phosphorylated. As suggested previously (22), the internal deletion of 116 amino acids that contains the sequence between the first NsiI site and the SspI site (Fig. 5A) did not affect substrate interaction in vivo. No other tyrosine-phosphorylated proteins were visible. These data provide further evidence for the specificity of the PTP3 interaction with pp130.
Phosphorylation of PTP3 Correlates with an Intracellular Translocation-To examine the possible physiological significance of PTP3 phosphorylation and how this might affect its interaction with pp130, we performed two series of experiments. First, the PTP activity of anti-PTP3 IPs was determined before and after growth medium stimulation. IPs of wild-type cells and wild-type cells overexpressing full-length PTP3(WT) were analyzed for enzymatic activity before and after growth medium stimulation against a Tyr(P)-containing Cdc2 peptide (37,38). Samples were taken from the reaction mixture, and the free phosphate was measured by scintillation counting (38). In the presence of 1 mM dithiothreitol in the IP buffer (1ϫ PBS (pH 7.4), 50 mM NaF, 1% Nonidet P-40, 2 mM EDTA (pH 7.2), 1 mM sodium pyrophosphate, 1.6 g/ml leupeptin, 4 g/ml aprotinin) to keep the catalytic Cys of PTP3 reduced and active (39), similar PTP3 activities were found before and after stimulation (data not shown). In the absence of dithiothreitol, the PTP3 activity after starvation was significantly higher (ϳ5-fold for the PTP3(WT) overexpressor strain; ϳ2.5-fold for the wild-type strain) than after subsequent growth medium addition (data not shown). These results suggest that Ser/Thr phosphorylation does not affect PTP3 activity but possibly results in a conformational change of PTP3 that makes the active center more accessible to oxidation during protein isolation. In vivo, this conformational change could lead to altered substrate interaction or subcellular localization.
Second, we examined the intracellular localization of PTP3 before and after growth medium stimulation of wild-type cells overexpressing PTP3(WT) and PTP3(C649S). For these immunostaining experiments, two antibodies were used, the monoclonal anti-Myc antibody, directed against a C-terminal Myctagged PTP3(WT), and the polyclonal anti-PTP3 antibody, directed against PTP3(WT) and PTP3(C649S). After starvation, staining was visible throughout the cell for both forms, and cytoplasmic membranes remained unstained (Fig. 7A). In some experiments, nuclei whose localizations were determined by DNA (Hoechst dye) staining appeared as dark spots in the immunofluorescence experiments using the anti-PTP3 or Myc antibodies (data not shown). After growth medium addition, we observed a dramatic change in the PTP3-staining pattern. With both antibodies and the PTP3(WT) and PTP3(C649S) overexpressor strains, we found a scattered, dot-like staining throughout the cell after 15 min of stimulation (Fig. 7B, data for PTP3(C649S)). After a more extended period, PTP3 accumulates in larger domains (Fig. 7, Ca and Da).
The staining pattern suggested that PTP3 may be associated with an organelle. We excluded the possibility that these dotlike structures are mitochondria by transforming the Myctagged PTP3(WT) into the cluA null strain (23). In this strain, all mitochondria are clustered near the cell center (23). After 30 min stimulation with growth medium, the mitochondria, as visualized by immunostaining the mitochondrial protein F 1 B, were found localized near the center of the cell (Fig. 7Cb), whereas PTP3 accumulated in domains that excluded the mitochondria (Fig. 7Ca). We examined whether the PTP3 may associate with an endosomal compartment. Cells were starved for 4 h and stimulated with growth medium containing FITClabeled dextran to label endosomal compartments. As shown in Fig. 7D, there was a direct correlation between the distribution of dextran-containing compartments and PTP3 staining after stimulation. Non-stimulated cells show a random distribution of dextran (data not shown).
FIG. 6. Co-immunoprecipitation of pp130 with PTP3(C649S). Lysates from wild-type cells or wild-type cells overexpressing the FLAG-tagged full-length PTP3(C649S) or the truncated PTP3⌬1-(C649S) were immunoprecipitated with the anti-FLAG antibody. A, co-immunoprecipitated tyrosine-phosphorylated proteins were detected by an anti-Tyr(P) Western blot. B, the presence of fulllength or truncated PTP3 was verified by an anti-PTP3 Western blot of the same membrane. As discussed previously for Fig. 3, B and C (see "Results"), the strong Tyr(P) signal at 130 kDa in unstimulated cells presumably belongs to a protein other than pp130. starv., starved; med., medium.
Multiple, Discrete Pathways Are Activated in Response to
Stress-In this study, we analyzed stress responses in Dictyostelium in general and the regulation and role of PTP3 in these pathways in particular. We have shown that different osmolarities lead to different intracellular responses, suggesting that subtle regulatory mechanisms exist for the adaptation of cells to small changes in the extracellular environment. Considering the changes in the natural environment that Dictyostelium cells may experience, such mechanisms guarantee the ability of the cells to respond appropriately and to survive. Since p130 phosphorylation, actin phosphorylation, and the maximum activation of guanylyl cyclase are induced by different osmotic conditions, we suggest that the pathways leading to these events are, at least in part, different. A knock-out of the histidine kinase DokA or a mutation that reduces guanylyl cyclase activity leads to an osmosensitive phenotype (2,13). However, cGMP accumulation is not affected in dokA null strains, indicating that DokA acts downstream of guanylyl cyclase or in another pathway (13). We have not observed an altered osmosensitivity for any PTP3 mutant, including the partial ptp3 null strain lacking one copy of PTP3 or the wild-type strain overexpressing active or inactive PTP3. Moreover, PTP3⌬1-(C649S) expressed in the dokA null background was phosphorylated in response to stress. 2 This most likely excludes the possibility that DokA lies upstream of PTP3 in a signaling cascade. A MAP kinase kinase (DdMEK1; see Ref. 18) is hyperphosphorylated in response to stress, and interestingly, this occurs with kinetics similar to those of PTP3 and p130 phosphorylation. 3 DdMEK1 does not appear to be upstream of PTP3 because overexpressed PTP3⌬1(C649S) is hyperphosphorylated in the ddmek1 null background as well. 2 One possibility is that both DdMEK1 and PTP3 are phosphorylated by a common, stress-activated kinase.
Previously, we and others (12,21,22) investigated responses of starved cells to growth medium stimulation. The data presented here cannot exclude the possibility that the observed cellular events were, fully or partially, a consequence of the osmolarity of the growth medium. Stimulation with 0.15 M sorbitol mimics the protein tyrosine phosphorylation pattern induced by growth medium, which has a calculated osmolarity of ϳ0.16 M. The results of cells stimulated with HL5 lacking the 0.056 M glucose support this possibility, as the tyrosine phosphorylation is similar to that of 0.10 M sorbitol induction (Table I).
Stress-induced Phosphorylation of PTP3 Correlates with a Translocation of PTP3-In response to high osmolarity, we found PTP3 to be hyperphosphorylated on Ser and Thr. PTP3 is a large protein (989 amino acids) with 153 (15.4%) Ser and 64 (6.5%) Thr residues. The broad fuzzy band that is observed after sorbitol stimulation (Fig. 2B) can be explained by differential Ser/Thr phosphorylation at multiple sites. Analysis of the PTP3 sequence by eye or by the ppsearch program (EMBL Data Library) identifies the following potential PTP3 phosphorylation sites for known protein kinases: MAP kinase, 14 minimal proline-directed recognition sites (Ser/Thr-Pro; see Ref. 40); protein kinase A and cGMP-dependent protein kinase, 1 recognition site (Lys-Arg-Arg-Ser); protein kinase C, 16 recognition sites (Ser/Thr-Xaa(hydrophobic)-Arg/Lys); and casein kinase II, 12 recognition sites (Ser/Thr-Xaa-Xaa-Asp/Glu).
The Ser/Thr phosphorylation of PTP3 correlated with a translocation of PTP3 from the cytoplasm to subcellular structures, but it did not affect PTP3 activity toward a phosphopeptide substrate. Since both wild-type PTP3 and the catalytically inactive PTP3(C649S) translocated in response to osmotic stress, the translocation is independent of PTP3 activity. We suggest that PTP3 translocation is regulated through Ser/Thr phosphorylation. Our data suggest that PTP3 translocates to an endosomal compartment, although our analysis cannot distinguish between the compartments. As the response is transient when cells are placed in growth medium and can also be readily reversed by placing the cells in starvation medium, 2 we suggest that the association with endosomal vesicles is probably on the outside of the structures. The functional reason for this translocation is not known, although we note that PTP3 is more resistant to oxidation under these conditions. Whereas this property is observed upon cell lysis and may not be an in vivo property of PTP3 in osmotically stressed cells, it is an indication of a change in the property of PTP3 that is associated with its phosphorylation and/or subcellular localization and thus suggests some change in the in vivo properties of PTP3. There are other examples of intracellular translocation of PTPs upon stimulation as follows: phorbol 12-myristate 13acetate induces the differentiation of human HL-60 cells to macrophages. In this process, the activity and expression level of PTP1C increase 2-3 times; PTP1C is Ser-phosphorylated and translocates from the cytoplasm to the plasma membrane (41). In thrombin-activated platelets, SH-PTP1 translocates to the cytoskeleton (42).
pp130 Is a Substrate of PTP3-The catalytically inactive PTP3(C649S) binds tyrosine-phosphorylated pp130 in vivo and in vitro. These results show that PTP3 per se has a substrate specificity for pp130. Because pp130 did not associate with active PTP3(WT) in the in vitro binding experiments and because high vanadate concentrations inhibited PTP3(C649S) association with pp130 in vitro, the interaction between PTP3 and pp130 is presumably mediated through the catalytic site of PTP3 and the Tyr(P) and surrounding residues of pp130. Similarly, inactive PTP-PEST(C231S) selectively binds tyrosinephosphorylated p130 cas in vitro and in vivo, whereas inactive PTP1B has no substrate specificity in in vitro binding assays and binds practically any tyrosine-phosphorylated protein present in the cell lysate (36).
It is possible that PTP3 substrates in addition to pp130 exist. Such substrates could be present only in low amounts or they may not be efficiently recognized by our anti-Tyr(P) antibody. Since PTP3 is also expressed during Dictyostelium multicellular development with a maximal expression at 8 h (22) as well as during growth, it is probable that during the multicellular stages, PTP3 interacts with proteins other than pp130 and functions in different pathways. At the moment, the molecular identity of p130 is unknown. Preliminary data from pp130 adsorbed in vitro to GST-PTP3(C649S) did not reveal any obvious autokinase activity under the conditions used. 2 Possible Association of PTP3 with Stress-response Pathways-We have no direct proof that p130 or PTP3 plays a regulatory role in stress response, but from the data presented in this paper it is intriguing to speculate that they do. We observed a correlation between the phosphorylation of PTP3 and an intracellular translocation of PTP3 after stress stimulation, as well as an interaction of PTP3 with pp130. PTP3 isolated from growing cells migrated with a mobility on SDS gels that was similar to its migration in starved cells before stimulation. Similarly, PTP3 staining in growing cells looked like PTP3 staining in starved cells (data not shown). Assuming that pp130 is also cytoplasmically localized, our accumulated data could lead to the following hypothetical model (Fig. 8A). Under normal, non-hyperosmotic conditions during growth and development, PTP3 is in the cytoplasm and acts to keep pp130 in the unphosphorylated state. Stress induction stimulates PTP3 phosphorylation and may directly stimulate pp130 tyrosine phosphorylation. We propose that PTP3 phosphorylation leads to a conformational change exposing a site for endosomal docking and a subsequent translocation from the cytoplasm, which allows tyrosine-phosphorylated pp130 to accumulate in the cytoplasm. Although p130 could be a structural protein, it is intriguing to speculate that tyrosine phosphorylation of p130 has a positive or activating effect on stress-induced signal transduction pathways, and PTP3 plays a negative role in modulating these pathways. The co-immunoprecipitation experiments (Fig. 6) do not necessarily contradict this model. Because of the high overexpression of PTP3(C649S) it is likely that, although the translocation from the cytoplasm is apparent (Fig. 7), some PTP3(C649S) remains in the cytoplasm and associates with pp130. The model in Fig. 8B summarizes the known pathways outlining Dictyostelium stress regulation. Fast stress responses are observed within minutes after stimulation and include the phosphorylation of PTP3, p130, and DdMEK1. Slow responses are detected 10 -20 min after the stress signal in wild-type cells and result in the phosphorylation of actin and myosin, the rearrangement of the cytoskeleton, and cell rounding.
Other PTPs are known to negatively regulate pathways induced by hyperosmolarity or other stresses. In S. cerevisiae, a defect in the osmosensor SLN1 histidine kinase resulted in a non-phosphorylated downstream SSK1 response regulator, which is responsible for the lethal, constitutive activation of the HOG1 MAP kinase cascade. Overexpression of PTP2 rescued this lethal phenotype, and it was proposed that PTP2 directly dephosphorylates and inactivates HOG1 (9). In fission yeast Schizosaccharomyces pombe, the Spc1 MAP kinase pathway is activated by various cytotoxic stresses such as high osmolarity, oxidative stress, and high temperature. spc1 null cells are unable to grow in high osmolarity medium (43,44). Spc1 is also required for the initiation of mitosis, meiosis, and mating (44 -46). Two PTPs, PYP1 and PYP2, negatively regulate this pathway by dephosphorylating Spc1 (43,44). Furthermore, PYP2 is a target gene of the Spc1-stimulated transcription factor Atf1, indicating a negative feedback mechanism (45,46). In mammalian cells, arsenite ions (As 3ϩ ) are toxic and highly carcinogenic. As 3ϩ is thought to directly inhibit a phosphatase containing an essential Cys. In the absence of cellular stresses, this phosphatase activity is believed to maintain low JNK and p38 MAP kinase activities (47). Recently, PTP1B has been reported to be phosphorylated on Ser in response to stress and osmotic shock, but neither the function of the phosphorylation nor the upstream kinase have been identified (48). Because no members of a stress-regulated MAP kinase pathway have been identified in Dictyostelium, we cannot test whether PTP3 is phosphorylated by such a pathway or acts as a negative regulator of a MAP kinase as discussed in the examples above. FIG. 8. Stress regulation in Dictyostelium. A, model of PTP3 and pp130 interaction in response to stress. Stress induces the Ser and Thr phosphorylation of PTP3 and, simultaneously, the translocation of PTP3 from the cytoplasm. Our data strongly suggest that pp130 is a substrate for PTP3. We speculate the following regulatory mechanism. The tyrosine phosphorylation of pp130 could be induced directly by the stress signal, but assuming a cytosolic localization of p130, it is facilitated by the translocation of PTP3 to another compartment. Cytosolic PTP3 keeps pp130 in the non-phosphorylated, possibly inactive form. Supposing that a phosphorylated active pp130 is required in stress response signaling pathways, the function of PTP3 in normal, non-stress-stimulated cells could be to negatively regulate a stress-stimulated signaling cascade through the inhibition of p130 activation. B, summary of stress response pathways in Dictyostelium. Upon stimulation of cells, signaling pathways are induced in Ͻ5 min (fast responses) or 10 min (slow responses). See "Discussion" for details.
Purification and sequence analysis of p130 are likely to provide the data necessary to define its function and the function of PTP3 in regulating stress response pathways.
In Dictyostelium, osmotic and stress response regulation appears to be complex. The data presented here indicate different pathways control different aspects of the overall response. The identification of pp130 as a specific PTP3 substrate characterizes PTP3 as a highly selective PTP. The concomitant PTP3 phosphorylation and translocation in response to stress suggest that PTP3, perhaps through its inhibition of p130 activation, may function to negatively regulates stress response pathways. | 9,744 | 1999-04-23T00:00:00.000 | [
"Biology",
"Chemistry"
] |
Speech-Driven Facial Animations Improve Speech-in-Noise Comprehension of Humans
Understanding speech becomes a demanding task when the environment is noisy. Comprehension of speech in noise can be substantially improved by looking at the speaker’s face, and this audiovisual benefit is even more pronounced in people with hearing impairment. Recent advances in AI have allowed to synthesize photorealistic talking faces from a speech recording and a still image of a person’s face in an end-to-end manner. However, it has remained unknown whether such facial animations improve speech-in-noise comprehension. Here we consider facial animations produced by a recently introduced generative adversarial network (GAN), and show that humans cannot distinguish between the synthesized and the natural videos. Importantly, we then show that the end-to-end synthesized videos significantly aid humans in understanding speech in noise, although the natural facial motions yield a yet higher audiovisual benefit. We further find that an audiovisual speech recognizer (AVSR) benefits from the synthesized facial animations as well. Our results suggest that synthesizing facial motions from speech can be used to aid speech comprehension in difficult listening environments.
INTRODUCTION
Real-world listening environments are often noisy: many people talk simultaneously in a busy pub or restaurant, background music plays frequently, and traffic noise is omnipresent in cities. Seeing a speaker's face makes it considerably easier to understand them (Sumby and Pollack, 1954;Ross et al., 2007), and this is particularly true for people with hearing impairments (Puschmann et al., 2019) or who are listening in background noise. This phenomenon, termed inverse effectiveness, is characterized by a more pronounced audiovisual comprehension gain in challenging hearing conditions (Meredith and Stein, 1986;Stevenson and James, 2009;Crosse et al., 2016).
This audiovisual (AV) gain is linked to the temporal and categorical cues carried by the movement of the head, lips, teeth, and tongue of the speaker (Munhall et al., 2004;Chandrasekaran et al., 2009;O'Sullivan et al., 2017) and likely emerges from multi-stage, hierarchical predictive coupling and feedback between the visual and the auditory cortices (Hickok and Poeppel, 2007;Kayser et al., 2007Kayser et al., , 2012Schroeder et al., 2008;Peelle and Sommers, 2015;Crosse et al., 2016;O'Sullivan et al., 2021).
However, the visual component of audiovisual speech is often not available, such as when talking on the phone or to someone wearing a mask, when listening to the radio or when watching video content where the audio narrates nonspeech video content. A system that automatically synthesizes talking faces from speech and presents them to a listener could potentially aid comprehension both for normal hearing people and those living with hearing loss in such situations.
Early efforts to synthesize talking faces from speech were based on pre-recorded kinematic and parametrized models Kuratate et al., 1998). These early models yielded animations capable of augmenting speech comprehension in background noise Le Goff et al., 1997;Munhall et al., 2004) but required the previous or simultaneous recording of a human speaker wearing facial markers or myographic electrodes (Bailly et al., 2003).
Later works proposed a modular framework for pre-trained text-to-AV-speech synthesizers (MASSY) which included both animated and photorealistic face generation sub-modules (Fagel and Sendlmeier, 2003;Fagel, 2004). Talking heads synthesized with such models increased comprehension performance as much as their natural counterparts in consonant-recognition paradigms but word and sentence identification was about twice as high for the natural videos (Lidestam and Beskow, 2006;Aller and Meister, 2016).
Synface, a project dedicated to synthesizing talking faces for enhancing speech comprehension, also utilized phonetic analysis of speech and showed that stimuli generated in such a way can improve speech comprehension in people with hearing impairments as well as in healthy volunteers listening in background noise (Beskow et al., 2002;Agelfors et al., 2006).
Recent advances in speech-driven animation methods have made it possible to produce photorealistic talking heads with synchronized lip movements using only a still image and an audio clip. State-of-the-art solutions are trained in an end-to-end manner using self-supervision and do not require intermediate linguistic features such as phonemes, or visual features such as facial landmarks and visemes. Most are based on generative adversarial networks (GANs) and can produce high quality visual signals that can even reflect the speaker's emotion (Chung et al., 2017;Chen et al., 2019;Vougioukas et al., 2020).
Employing such facial animations to improve speech-in-noise comprehension would represent a significant step forward in the development of audiovisual hearing aids. However, it has not yet been investigated whether such end-to-end synthetic facial animations can aid a listener to better understand speech in noisy backgrounds. In this study we set out to investigate this issue.
MATERIALS AND METHODS
To investigate the impact of different types of AV speech on speech-in-noise comprehension in humans, we first synthesized realistic facial animations from speech. We then assessed how these facial animations benefitted humans in understanding speech in noise, compared to no visual signal and to the actual video of a speaker. We finally compared the human level of AV speech comprehension to that of an AV automatic speech recognizer.
Audiovisual Material
We employed sentences from the GRID corpus, which consists of 33 speakers each uttering 1,000 three-second-long sentences (Cooke et al., 2006). The videos in the GRID corpus are recorded at 25 frames per second, and the speech signals are sampled at 50 kHz. Four speakers, of which two were female, were selected for their lack of a strong accent (speakers 12, 19, 24, and 29).
Sentences of the GRID corpus are semantically unpredictable but meaningful commands composed of six words taken from a limited dictionary ( Table 1). As intended for this corpus, participants were only scored on the color, letter, and digit in each sentence (i.e., the keywords marked with an asterisk in Table 1), with the remaining words acting as contextual cues.
Audio
The audio files of the chosen speakers were down-sampled to 48 kHz using FFMPEG to match the sampling frequency of the available speech-shaped noise (SSN) files. The latter, also known as speech-weighted noise, was generated from the spectral properties of multiple concatenated clean speech files from different speech corpora and audiobooks by randomizing the phase of all spectral components before extracting the real part of the inverse Fourier transform.
The root mean square amplitudes of both the voiced part of the GRID sentence and the SSN were then measured. The two signals were scaled and combined such that the signalto-noise ratio (SNR) was -8.82 dB. This value was found during pilot testing to reduce comprehension of normal-hearing participants to 50%.
Synthesized Video
We used the GAN 1 model proposed by Vougioukas et al. (2020) to generate talking head videos from single still images and speech signals at 25 frames per second (Figure 1). The GAN is trained using multiple discriminators to enforce different aspects of realism on the generated videos, including a synchronization discriminator for audiovisual synchrony. The offset between the audio and the visual component in the synthesized videos is below 1 frame (below 40 ms, Table 6, Vougioukas et al., 2020). This method is also capable of generating videos that exhibit spontaneous facial expressions such as blinks, which contribute to the realism of the sequences.
The LipNet pretrained automated lipreading model, which obtains a word error rate (WER) of 21.76% on the natural images, achieves a WER performance of 23.1% when evaluated on synthetic videos from unseen subjects of the GRID dataset, indicating that the produced movements correspond to the correct words (Assael et al., 2016;Vougioukas et al., 2020).
Natural Video
For direct comparability with the synthesized videos, the natural videos presented to the volunteers were formatted in the same way as the natural videos used to train the GAN. The faces in the high-resolution GRID videos were aligned to the canonical face, cropped, and downscaled to a resolution of 96 × 128 pixels using FFMPEG. The points at the edges of the eyes and tip of the nose were used for the alignment of the face. The process used to obtain videos focused on the face is outlined in Figure 2.
Turing Realism Test
The realism of the synthesized videos was assessed through an online Turing test. Users were shown 24 randomly selected videos from the GRID, TIMIT (Garofolo et al., 1993), and CREMA (Cao et al., 2014) datasets, half of which were synthesized, and were asked to label them as real or fake in a two-alternative forced choice (2AFC) procedure. The experiment was performed by 50 students and staff members from Imperial College London before the Turing test was made available online. 2 The results from the first 750 respondents were reported in Vougioukas et al. (2020) and we present updated results from 1,217 participants. Figure 3 shows a side-by-side comparison between a fake and generated video.
An unstructured assessment of the videos' realism was also performed on the 18 participants of the speech-comprehension experiment (see below). Following the speech comprehension task, the subjects were asked to comment on anything interesting or strange they had noticed in the videos during the experiment. Their verbal responses were recorded anonymously. 2 The Turing test was made available online at "https://forms.gle/ vjFzS4QDU9UzFjDJ9"
Assessment of Speech-in-Noise Comprehension
Participants Eighteen native English speakers, eleven of them female, with self-reported normal hearing and normal or corrected-to-normal vision participated in the experiment. The participants were between 18 and 36 years of age, with a mean age of 23 years. All participants were right-handed and had no history of mental health problems, severe head injury or neurological disorders. Before starting the experiment, participants gave informed consent. The experimental protocol was approved by the Imperial College Research Ethics Committee.
Stimuli Presentation
We considered three types of AV stimuli. All three types had speech in a constant level of background noise, that is, with the same SNR. The type of the video, however, varied between the three types of AV signals. During one type of stimulation, subjects heard noisy speech while the monitor remained blank ("audioonly"). In another type, we presented subjects with noisy speech together with the synthesized facial animations ("synthetic AV"). Finally, subjects were also presented with the speech signals while watching the genuine corresponding videos of the talking faces ("natural AV").
The experiment consisted of six rounds of three blocks, where each block corresponded to one of the AV conditions. Six sentences were presented in each block. The order in which the three conditions were presented was randomized within rounds and across rounds. Each sentence was chosen randomly from a pool of all 1,000 sentences from each of the four speakers, and the order of speakers was randomized.
Each subject therefore listened to 36 sentences for each of the three AV types. The participants took a brief rest for one minute after every round.
Data and Analysis
Between each sentence, the participants were asked to select the keywords they had heard, from a list on the screen. The list allowed participants to select all possible GRID sentence combinations while non-keyword terms were pre-selected and displayed for them in each trial. The selection of the keywords by the participants on the monitor allowed to compute their comprehension score automatically.
The data was therefore collected and analyzed in a doubleblind fashion: neither the experimenter nor the participant knew which type of video or what specific sentence was presented. Importantly, the participants were not informed of the synthesized nature of part of the videos.
The scoring was expressed as the percentage of keywords correctly identified in each trial. The scores for each type of AV signal were extracted by averaging across trials and rounds for each participant. The responses for each keyword were also recorded, paired with the corresponding presented keyword.
Hardware and Software
The experiment took place in an acoustically and electrically insulated room (IAC Acoustics, United Kingdom). A computer running Windows 10 placed outside the room controlled the audiovisual presentation and data acquisition. The audio component of the stimulus was delivered diotically at a level of 70 dB(A) SPL using ER-3C insert earphones (Etymotic, United States) through a high-performance sound card (Xonar Essence STX, Asus, United States). The sound level was calibrated with a Type 4157 ear simulator (Brüel&Kjaer, DK). The videos were delivered through a fast 144 Hz, 24-inch monitor (24GM79G, LG, South Korea) set at a refresh rate of 119.88 Hz. The monitor was mounted at a distance of one meter from the participants. The videos were played in full screen such that the dimensions of the talking heads appeared life-sized.
To ensure that the audio and video components of the stimuli were presented in synchrony, the audiovisual latency of the presentation system was characterized. A photodiode (Photo Sensor, BrainProducts, Germany) attached to the display and an acoustic adaptor (StimTrak, BrainProducts, Germany) attached to the audio cable that was connected to the ear phones were employed to record the output of a prototypical audiovisual stimulus. The latency difference between the two stimuli modalities was found to be below 8 ms.
Audiovisual Automated Speech Recognition
The same 36 sentences that were randomly selected and presented to each participant for each condition were also analyzed with an audiovisual speech recognizer (AVSR). We finetuned the pre-trained model from Ma et al. (2021) for ten epochs on the 29 GRID speakers which were not used in the behavioral study. The AVSR employed ResNets to extract features directly from the mouth region coupled with a hybrid connectionist temporal classification (CTC) objective/attention architecture. The output of the model was then analyzed in the same way as the human data.
RESULTS
To assess the realism of our facial animations, we first investigated whether humans could discriminate between the synthesized videos and the natural ones. In a large online Turing test on 1,217 subjects, we found that the median of the correct responses was exactly at the chance level of 50% (Figure 4), as was the result on a more controlled Turing test performed on 50 subjects. Moreover, the 18 participants of the speech-innoise comprehension experiment were not told of the nature of half of the videos, and none reported finding anything unusual regarding the videos in a questionnaire completed following the experiment. To the average human observer, the synthesized videos were thus indistinguishable from the natural ones.
We then proceeded to assess the potential benefits of the synthesized talking faces on speech-in-noise comprehension. We found that both the synthesized and the natural videos significantly improved comprehension in our participants when compared to the audio signal alone (Figure 5).
The comprehension for the audio-only type was 50.8 ± 7% (mean and standard error of the mean). The synthesized and natural videos improved speech comprehension to 61.8 ± 7% and 71.2 ± 6%, respectively. The relative improvement between FIGURE 4 | Histogram of the percentage of correct responses in the Turing test on discriminating between the synthetic and the natural videos. The median was exactly at the chance level of 50%. the audio-only and synthetic AV signals was about 22% (p = 2.3 × 10 −5 , w = 1, two-sided Wilcoxon signed-rank test for dependent data with Benjamini-Hochberg FDR correction). The relative improvement of the natural AV signals as compared to the audio signal alone was about twice as large, about 40% (p = 2.3 × 10 −5 , w = 0). The relative difference between the synthetic AV signals and the natural ones was statistically significant as well, at 15% (p = 7.6 × 10 −5 , w = 5).
We further analyzed the differences between the AV gain in speech comprehension provided by the synthetic and the natural AV signals. In particular, we computed confusion matrices between the different key words of the sentences that the volunteers were asked to understand. The confusion matrices were normalized such that, for each presented keyword, the probability to select any other keyword was one. We then subtracted the answer-response pair frequency of the confusion matrix of the synthesized AV signals from that of the natural AV signals ( Figure 6A). As indicated by the presence of mostly positive differences on the leading diagonal of the resulting matrix, the natural videos outperformed the synthesized videos in terms of providing categorically unequivocable cues. The differences in the remaining sectors of the matrix shed some light into the reason the natural videos performed better. For example, matrix elements highlighted by the green rectangle in Figure 6A demonstrate that the synthesized videos encouraged participants to mistakenly select the letter "a" when presented with the keywords "o" and "n." Similarly, the yellow arrows highlight that participants were more likely to mistake the letter "t" for the letter "g" and the digit "two" for the digit "seven" when presented with synthesized videos relatively to the natural videos.
We also subtracted the answer-response pair frequency of the confusion matrix of the synthetic AV signals from that obtained from audio-only signals ( Figure 6B). The mostly negative differences on the leading diagonal of the resulting matrix show that the synthetic videos improved the subjects' ability to discriminate between keywords compared to the audio-only condition. The green arrow in Figure 6B highlights one exception: the synthetic videos encouraged participants to mistakenly select the keyword "a" when presented with "o, " congruently with the results shown in panel A. The yellow annotations indicates that the confusion of the keywords "t" and "two" also persists.
Nonetheless, the synthesized videos were found to disambiguate the keyword "b," notable for being hard to distinguish from other consonants pronounced in combination with the phoneme /i:/ such as the keywords "g" and "d." We then determined whether an AVSR could benefit from the synthetic facial animations as well. We found that the scores of the AVSR improved by about 13% for the synthetic AV material as compared to the audio-only signals (Figure 7). However, this improvement was significantly lower than the corresponding improvement of 22% in human speech-in-noise comprehension (p = 0.007, t = 3.04, two-sided single-value t-test). Also, the natural AV signals improved the scores of the AVSR by 40% when compared to the audio signal alone, which was comparable to our result on the gain in human speech comprehension. FIGURE 7 | The scores of an AVSR were improved by the synthetic videos, although the gain was less than that experienced by humans. The natural AV signals led to a higher gain, similar to that of our human volunteers.
We also analyzed the confusion matrices for the AVSR data (Figure 8), which were calculated in the same way as those for the human behavioral data. The natural videos outperformed the synthetic videos across most keywords, in particular allowing the AVSR to disambiguate "t" from "g," a finding that mirrored those made for human listeners. The letter "t" is also more frequently mislabeled when the AVSR has access to the synthetic videos than when no visual signal is available (yellow annotations in Figures 8A,B). The green rectangles visible in Figures 8A,B highlight that the synthetic visual representation for the keywords "n", "m" and "o" were a source of confusion for the AVSR, much like for humans.
Nonetheless, the black annotations in Figure 8 highlight that the synthetic videos had a significantly lower chance to induce the AVSR to label a "b" as a "p" than their natural counterparts, and that they significantly decreased the chance that the AVSR labeled "i" as "y" when compared to the audio-only condition.
DISCUSSION
To the best of our knowledge, our results provide the first demonstration that end-to-end synthetic facial animations can improve speech-in-noise comprehension in humans. Our finding therefore suggest that facial animations generated from deep neural networks can be employed to aid with communication in noisy environments. A next step toward such a practical application will be to investigate the benefit of the facial animations in people with hearing impairment, such as patients with mild-to-moderate sensorineural hearing loss as well as patients with cochlear implants.
However, our results also showed that the speech-in-noise comprehension is yet higher when listeners see the natural videos. This result contrasts with our other finding that humans cannot distinguish between the real and the synthesized videos, neither when explicitly instructed to do so in an online Turing test nor as a spontaneous judgment while carefully and procedurally attending to the videos in a speech-in-noise task using short sentences. We note, however, that the standardized nature of the sentences in the GRID corpus might have hindered the differentiation between the natural and synthetic videos. On the other hand, the Turing test also employed audiovisual material from the TIMIT and CREMA datasets that offer more realistic speech content, such that the standardized nature of the GRID corpus alone cannot explain the observed lack of differentiation in the Turing test. It therefore appears that the synthetic videos lack certain aspects of the speech information, although the lack of this information is not obvious to human observers. One clue as to why that may be lies in the choice of discriminators employed in the synthesizer GAN architecture: the GAN was optimized for realism and audiovisual synchrony rather than for speech comprehension. Certain keywords pronounced in combination with alveolar and bilabial nasal consonants such as "n" and "m" or others pronounced in combination with (palato)alveolar affricates and plosives such as "g", "t" and "two" were poorly disambiguated by the synthetic videos. This finding suggests that the GAN may have avoided the issue of synthesizing labial and coronal visemes featuring complex interactions of tongue, teeth, and lip movements to some extent, for the sake of realism and at the expense of comprehension. Still, the result that these videos disambiguated consonants pronounced in combination with the phoneme /i:/ (letter keywords "b" and "p") and vowels pronounced in combination with the diphthong /aI/ (letter keywords "i" and "y") signifies that their effectiveness at improving speech comprehension cannot be due to temporal cues alone but must include categorical cues.
From a different perspective, the synthesized audiovisual signals may aid speech comprehension in two ways. First, access to the visual signal may improve the availability of information to human listeners, allowing the brain to perform internal denoising through multimodal integration. This may be aided by the fact that the visual signals were synthesized from clean speech signals without background noise. Second, the synthesizer may be increasing the signal-to-noise ratio externally by adding information regarding the dynamics of visual speech. Such information would be learned by the GAN during training and can be beneficial in speech-in-noise tasks. The latter conclusion is supported by the results presented by Hegde et al. (2021), who recently showed that hallucinating a visual stream by generating it from the audio input can aid to reduce background noise and increase speech intelligibility. Importantly, they also showed that humans scores on subjective scales such as quality and intelligibility were higher for speech denoised in such a way. Moreover, our finding that an AVSR performs better when it has access to a synthetic facial motion than when it relies on the speech signal alone also suggests that our synthesized facial animations contain useful speech information. We caution, however, that there exist many unknowns regarding the interaction of the AVSR and the GAN-generated stimuli, limiting the further interpretation of the ASVR's performance on these stimuli.
As a limitation of our experiment, we did not investigate the effects of different temporal lags between the auditory and the visual signals. In a realistic audiovisual hearing aid scenario, the synthetic video signal would be delayed with respect to the audio, due to the sampling and processing time required. Because the auditory signal is often slightly delayed with respect to the visual signal, this inverse temporal latency could influence the AV benefit. Moreover, we did not investigate the effects of different levels and types of background noise on the ability of the synthesizer to accurately reproduce visual speech. In addition, the highly standardized sentences of the GRID corpus, in which the different keywords occurred at the same timing, meant that dynamic prediction was not required for their comprehension. Our study could therefore not assess the influence of the synthetic facial animations on this important aspect of natural speech-innoise comprehension.
Therefore, a natural progression of this work will be to perform on-line experiments with noise-hardened versions of the synthesizer, such as that proposed by Eskimez et al. (2020). Further studies will also look at improving the synthesizer model through the implementation of targeted loss models, informed by the findings of the confusion matrix analysis presented here.
Taken together, our results suggest that training a GANbased model in a self-supervised manner and without the use of phonetic annotations is an effective method to capture the lip dynamics relevant to human audiovisual speech perception in noise. This research paves the way for further understanding of the way speech is processed by humans and for applications in devices such as audiovisual hearing aids.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by Imperial College Research Ethics Committee. The patients/participants provided their written informed consent to participate in this study.
AUTHOR CONTRIBUTIONS
EV and TR designed the research. KV, PM, SP, and MP developed the synthetic facial animations. EV obtained the behavioral data on speech comprehension and analyzed the data. All authors contributed to the writing of the manuscript. | 5,874.4 | 2021-12-18T00:00:00.000 | [
"Computer Science",
"Psychology"
] |
Alumina-Supported NiMo Hydrotreating Catalysts—Aspects of 3D Structure, Synthesis, and Activity
Preparation conditions have a vital effect on the structure of alumina-supported hydrodesulfurization (HDS) catalysts. To explore this effect, we prepared two NiMoS/Al2O3 catalyst samples with the same target composition using different chemical sources and characterizing the oxidic NiMo precursors and sulfided and spent catalysts to understand the influence of catalyst structure on performance. The sample prepared from ammonium heptamolybdate and nickel nitrate (sample A) contains Mo in the oxidic precursor predominantly in tetrahedral coordination in the form of crystalline domains, which show low reducibility and strong metal–support interactions. This property influences the sulfidation process such that the sulfidation processes of Ni and Mo occur tendentially separately with a decreased efficiency to form active Ni–Mo–S particles. Moreover, inactive unsupported MoS2 particles or isolated NiSx species are formed, which are either washed off during catalytic reaction or aggregated to larger particles as seen in scanning transmission electron microscopy/energy-dispersive X-ray spectroscopy (STEM/EDX). The oxidic precursor of the sample synthesized using nickel carbonate and molybdenum trioxide as metal sources (sample B), however, contains Mo in octahedral coordination and shows higher reducibility of the metal species as well as weaker metal–support interactions than that of sample A; these properties allow an efficient sulfidation of Mo and Ni such that formation of active Ni–Mo–S particles is the main product. Ptychographic X-ray computed tomography (PXCT) and STEM and EDX measurements show that the structure formed during sulfidation is stable under operation conditions. The structural differences explain the HDS activity difference between these two samples and explain why sample B is much active than sample A.
INTRODUCTION
Catalytic hydrodesulfurization (HDS) is a mature technology used to remove sulfur from crude oil to produce ultraclean fuels. Typically, HDS catalysts consist of sulfides of Mo and Ni or Mo and Co, supported on a high-surface-area alumina (γ-Al 2 O 3 ) 1 carrier. These catalysts are usually prepared in the form of oxidic precursors by impregnating the carrier with the metals followed by drying and calcination. Catalyst activation is done by sulfidation in a H 2 S/H 2 gas flow at elevated temperatures (around 400°C), yielding NiMo or CoMo sulfide phases. The structure of these sulfide phases is described in the so-called Co−Mo−S model; 2 Co or Ni promotor atoms are located at the edges of nanosized MoS 2 particles. With sharpening global environmental regulations, a strong demand for fuels containing only traces of sulfur (10 ppm or less) was created. To satisfy market needs, highly active catalysts of increased lifetime and stability are needed, prompting the exploration of several synthesis processes. 3,4 The traditional workflow of producing HDS catalysts consists of impregnation, drying, calcination, and activation (sulfida-tion). In the early 2000s, a modified workflow gained traction since it yielded catalysts of higher activity. In this workflow, organic additives, such as glycols, ethylenediaminetetraacetic acid (EDTA), or nitrilotriacetic acid (NTA), were added to the catalyst formulation. 5−8 Since these additives would decompose or oxidize under typical calcination conditions (400°C in air), the calcination step was omitted. The effect of organic additives on catalytic performance has been explored in numerous studies, linking changes in structure, density, and stability of the active phase to performance. 6,9−21 These uncalcined catalyst types show a low tendency to form Mo−O−Al bonds, which advances the formation of type II catalysts. Compared to the organic additive-free and calcined type I catalysts, which are characterized by a strong interaction with the support, type II catalysts are generally more active. 22 Besides organic additives, phosphorus is another important component in the formation of highly active hydrotreating catalysts; 23,24 experimental observations have shown that the presence of phosphorus promotes the stacking of MoS 2 -type particles by lowering metal−support interactions and enhancing the formation of type II phases. 25−27 A critical variable in catalyst preparation is the metal formulation. Since the series of oxomolybdates with the monomeric MoO 4 2− anion and the neutral solid MoO 3 as end members are accessible via pH-dependent aggregation processes, 28,29 there are several ways to design the impregnation solution. The most commonly used recipe to prepare Ni-, Mo-, and P-containing hydrotreating catalysts utilizes aqueous solutions of ammonium heptamolybdate ((NH 4 ) 6 Mo 7 O 24 ), nickel nitrate (Ni(NO 3 ) 2 ), and phosphoric acid. 25,30 Moving toward the end members of the oxymolybdate series, molybdenum trioxide (MoO 3 ), nickel carbonate (NiCO 3 ) and phosphoric acid, 31 or sodium molybdate (Na 2 MoO 4 ·2H 2 O) and nickel nitrate (Ni(NO 3 ) 2 ) 32 are other feasible sources of Mo and Ni. Some studies utilized highly condensed and symmetric starting materials such as Keggin 33,34 and Anderson complexes. 35,36 Overall, there is no clear answer as to which type of impregnation solution yields the best catalysts.
Characterization studies conducted over the years have a focus on describing the active NiMoS phase at atomic or nanometer scale, e.g., utilizing EXAFS to explore the coordination number of Mo 37−40 to determine particle size and staking degree of NiMoS particles, or the degree of edge decoration through promotor atoms. Active site(s) and molecular reaction routes of Co−Mo−S structures synthesized on Au(111) were monitored by means of scanning tunneling microscopy (STM), 41 and first-principles calculations were used to describe particle structure 42,43 as well as mechanisms of reactions taking place at the active sites located on the edges of Co−Mo−S and Ni−Mo−S nanocrystals. 42,44,45 To a much lesser extent, information regarding metal dispersion on alumina or the influence and change of the alumina pore structure on and during metal deposition can be found.
Here, we investigate two hydrotreating catalysts, where malic acid (MA) and phosphorus were added to the NiMo catalyst formulation by co-impregnation. 6,7 These two catalysts were prepared on the same alumina carrier with identical metal and phosphorus loadings and contain the same organic additive. The only difference is that they were prepared using two different impregnation solutions. Catalyst A was prepared with (NH 4 ) 6 Mo 7 O 24 and Ni(NO 3 ) 2 as Mo and Ni sources (route A), while catalyst B was prepared from MoO 3 and NiCO 3 (route B). Thus, the difference lies in the impregnation chemistry brought about by these impregnation solutions, which will be shown to lead to substantial structural differences of the active phases. For a more comprehensive understanding of the structure difference, we present a detailed structural characterization of these two samples and discuss their performance levels on structural grounds. Catalyst characterization is based on physical methods, spectroscopy, diffraction, tomography, and catalytic activity measurements. In particular, we used N 2 physisorption and temperature-programmed reduction (TPR) as physical methods, and X-ray photoelectron spectroscopy (XPS), Raman, and X-ray diffraction (XRD) as standard spectroscopy and diffraction methods. 2D and 3D imaging methods including scanning transmission electron microscopy (STEM), energy-dispersive X-ray spectroscopy (EDX), and ptychographic X-ray computed tomography (PXCT) were used for advanced structural characterization. The catalytic activity of our samples was evaluated in thiophene, dibenzothiophene, and gas oil hydrodesulfurization tests. Impregnation solution (16.4 mL) was then added to each 20 g of alumina extrudes (diameter: 1 mm, length: 3−13 mm). The extrudates were kept for a period of 2 h under slow movement on a roller bank and subsequently dried overnight at 120°C yielding the catalyst precursors (oxide). To preserve the malic acid for the subsequent sulfidation process, the prepared samples were not calcined. 50 Freshly sulfided catalysts (A and B) and spent samples recovered from the gas oil HDS test are referred to as (sulfide) and (spent), respectively. The final metal loading of these catalysts was determined by inductively coupled plasma optical emission spectroscopy (ICP-OES), and the results are compiled in Table 1.
Catalyst Activation.
To prepare the sulfided catalyst samples, ∼50 mg of oxidic precursor (75−125 μm) were loaded in the middle of a valve-sealed stainless-steel reactor with an inner diameter of 4 mm and heated up to 350°C with a 2°C/ min ramp rate in 1 bar H 2 /H 2 S (10% v/v) at a flow rate of 50 mL/min. The sample was kept under these conditions for 2 h and subsequently cooled to room temperature in He atmosphere. The sulfided catalyst samples were then transferred to and stored in a N 2 -filled glovebox for further characterization. During sulfidation, malic acid is decomposed and the decomposition products are released. The final active catalyst does not contain any organic compounds.
2.4. Catalytic Activity Measurements. 2.4.1. Thiophene HDS Activity. To avoid contact with air, catalyst activation and activity tests were performed sequentially in a stainless-steel reactor with an inner diameter of 4 mm at ambient pressure. In detail, ∼76 mg of oxidic precursor (75−125 μm) was mixed with 200 mg of SiC and loaded in the middle of the reactor. The samples were then sulfided under a constant flow of H 2 /H 2 S (10% v/v) at a flow rate of 50 mL/min at 350°C for 2 h. The temperature was then increased to 400°C and the inflow switched to a reaction or testing feed composed of 4% (v/v) thiophene in H 2 (100 mL/min) for 13 h. The steady-state catalyst activity was measured by gas chromatography (GC) equipped with a flame ionization detector (FID) and an RTX-1 column with 0.32 mm ID. The normalized reaction rate (r Thio ) was calculated according to where F Thio is the molar flow of thiophene in mol Thio h −1 , m cat is the catalyst mass in g, ω Mo is the fraction of metal in mol Mo g cat −1 , and X is the conversion.
2.4.2. Dibenzothiophene (DBT) HDS Activity. DBT HDS activity was measured in a fixed-bed high-pressure reactor under gas and liquid feed trickle flow conditions. The reactor (I.D.: 4 mm) was packed with 200 mg of oxidic precursor (75−125 μm) diluted with 1 g of SiC. To activate the catalyst, we heated the reactor to 350°C at a heating rate of 2°C/min in a 50 mL/min H 2 /H 2 S (10% v/v) and kept there for 2 h. After that, the oven temperature was adjusted to 270°C, the pressure was increased to 20 bar, and the feed was switched to the reaction feed containing 4 wt % DBT and 2 wt % adamantane in nhexadecane, with a liquid hourly space velocity (LHSV) of 9.2 h −1 and a H 2 /feed ratio of 200 L kg −1 . Adamantane is used as the internal reference compound for GC analysis. The steadystate activity was determined after 12 h reaction. Products were analyzed by online GC-FID equipped with RTX-1 column with 0.32 mm I.D. and 30 m in length. The reaction rate constant was calculated via i k j j j j j y where WHSV is the weight hourly space velocity, ω DBT is the fraction of DBT, ω Mo is the fraction of Mo and X is the conversion.
Gas-Oil HDS Activity.
Gas-oil activity tests were conducted in a fixed-bed high-pressure reactor with a trickle flow of gas and liquid feed (60 bar, H 2 /feed = 350 NL/kg, liquid hourly space velocity (LHSV): 1.2 h −1 ). The reactor, 4 mm ID, was packed with 0.75 mL of catalyst extrudates and sandwiched between two 10 cm layers of Zirblast. Before reaction, the catalysts were pretreated with a sulfidation feed (gas oil feed spiked with 2.69 wt % Sulfurzol) at 200°C for 5 h, followed by heating to 280°C for 5 h and finally to 315°C for 5 h. Afterward, the temperature was lowered to 200°C, the sulfidation feed switched to the testing gas oil feed (1.28 wt % S, 234 ppm N) and the temperature subsequently increased to the target run temperature, under which the S content in the product stream is ≤10 ppm. The desired run temperature was analyzed 470 h after the start of the test. The sulfur content in the products was analyzed by atomic emission spectroscopy-inductively coupled plasma-mass spectroscopy (AES-ICP-MS). After the run, the samples were extracted from the reactors in a nitrogen-operated glovebox and stored in sealed vessels for further characterization.
2.5. Catalyst Characterization. 2.5.1. N 2 Physisorption. Textural properties, i.e., specific surface area, pore volume, and average pore size, were determined by means of N 2 physisorption in a Micromeritics AutoChem apparatus. Prior to physisorption measurements, samples were pretreated overnight with N 2 at 120°C.
XRD.
Crystalline phases were determined with a Bruker D2 Phase powder diffraction system using Cu Kα radiation. The acquired X-ray diffraction (XRD) patterns covered an angular range of 2θ = 10−70°. The patterns were recorded with a step size of 0.01°and a step acquisition time of 0.5 s. The average crystallite size, L, was calculated using the Scherrer equation where K is a dimensionless shape factor, with a value close to unity; λ is the wavelength of the X-ray radiation; β is the line broadening at half the maximum intensity (FWHM), and θ is the Bragg angle. Instrument broadening was taken into account. 2.5.3. TPR. TPR measurements were performed using a Micromeritics AutoChem II 2920 equipped with a fixed-bed reactor and a thermal conductivity detector (TCD). Around 50 mg of the respective sample materials were loaded into a glass reactor and pretreated at 200°C for 1 h in a 50 mL/min helium flow. The temperature was then increased to 900°C at 5°C/min in a H 2 /He (5% v/v) mixture and kept for 1 h, while the hydrogen consumption was monitored by TCD.
The Journal of Physical Chemistry C pubs.acs.org/JPCC Article 2.5.4. Raman Spectroscopy. Raman spectra were acquired using a confocal Witec α 300 R microscope equipped with a 532 nm diode excitation source, a 1200 lines/mm grating (BLZ = 500 nm), and a CCD detector. A Zeiss LD EC Epiplan-Neofluar Dic 50×/0.55 objective was used. The presented spectra are the result of 30 accumulations with an acquisition time of 10 s per accumulation.
2.5.5. XPS. XPS spectra were recorded at room temperature using a Thermo Scientific K-Alpha spectrometer, equipped with a monochromatic small-spot X-ray source (Al Kα = 1486.6 eV) operating at 72 W and a spot size of 400 μm. Sulfided and spent samples were prepared in a N 2 -operated glovebox. The mechanically ground samples were dispersed on a carbon tape-covered alumina holder, which was then transferred into the XPS apparatus via an airtight transport vessel. The whole process was done without exposing the sample to air. For the measurement, the background pressure was 2 × 10 −9 mbar. The survey scan and region scan were measured using a constant pass energy of 160 and 40 eV respectively. Data analysis was performed using the CasaXPS software with a Shirley background subtraction and Gaussian−Lorentzian fitting procedure, where the binding energy (B.E.) was calibrated using the C 1s peak at 284.8 eV as reference.
2.5.6. PXCT. 3D structural measurements were carried out by means of PXCT. 47,51,52 PXCT is a lensless quantitative imaging technique in which each tomographic projection is calculated by means of ptychographic phase-retrieval algorithms. 53,54 Tomographic reconstruction retrieves the complex-valued refractive index of the examined sample, providing tomograms of both phase and amplitude contrast. 54 Away from sample-relevant absorption edges, the retrieved refractive index decrement values can be converted to electron density as described in Diaz et al. 55 Measurements were carried out at the cSAXS beamline of the Swiss Light Source at 6.2 keV photon energy at room temperature in an inert atmosphere. A series of sample cylinders, ≈25 μm in diameter, extracted central, from a catalyst pellet of samples A (sulfide), B (sulfide), A (spent), and B (spent) were examined. The sample cylinders or pillars were prepared using a micro-lath and focus-ion-beam (FIB) milling ( Figure S2). 56 The obtained quantitative electron density tomograms possess, on average, a half-period spatial resolution of 40 nm. The resolution was evaluated through Fourier shell correlation (FSC) ( Figure S3). 57 Details regarding tomogram acquisition and analysis can be found in the Supporting Information.
2.5.7. STEM and EDX. The local morphology and elemental composition of the sulfide and spent catalysts were determined by STEM and EDX mapping using a probe-corrected JEOL ARM 200F transmission electron microscope operating at an acceleration voltage of 200 kV. The preparation of the sulfide and spent samples was conducted in a glovebox, specifically, around 5 mg of catalyst sample were dispersed in n-hexane to make a suspension, a few droplets of which was then placed on a Cu grid. The grid was then transported to STEM. The mean length of individual MoS 2 platelets and the average number of layers per particle were calculated from acquired STEM images using ImageJ. The mean length was determined by fitting a lognormal function to the platelets size distribution. The degree of stacking (N) was calculated according to eq 4. where N i is the number of MoS 2 layers within a particle and n i is the amount of individual MoS 2 platelets counted for a given number of layers N i . The local elemental distribution in the sulfide and spent catalysts was determined by STEM-EDX mapping using the same JEOL ARM 200F by means of a 100 mm 2 (1 srad) Centurio SDD EDX detector. The correlation between Mo and Ni was calculated via MATLAB, as shown in Figure S8, detailed information can be found in the Supporting Information. Table 2, where it can be noted that the main products are biphenyl (BP) and cyclohexylbenzene (CHB), no bicyclohexyl (BCH) was observed under our reaction conditions.
Physiochemical Bulk Characterization of the Oxidic Catalyst Precursor.
To investigate the origin of these activity differences we first acquired powder XRD ( Figure 1a), TPR (Figure 1b), BET data (Table 1 and Figure S1), and Raman spectra (Figure 2) of the supported catalysts in their oxidic precursor form. Table 1).
The reducibility of samples A (oxide) and B (oxide) was explored by means of TPR. Overall, their reduction profiles are rather similar. In Figure 1b, both catalyst samples exhibit dominant reduction peaks around 400 and 800°C. The lowtemperature reduction peaks can be attributed to a reduction of Ni oxide species, as well as a partial reduction of polymolybdates that have a weak interaction with the support (Mo 6+ to Mo 4+ ). 59,60 The broad peak at the higher temperature is attributed to the deep reduction of all Mo species, including tetrahedrally coordinated Mo 4+ species. The differences in TPR profiles between samples are as follows. In the low-temperature reduction region, sample A (oxide) shows three reduction peaks at 410, 452, and 516°C, while only one peak occurs at 390°C in sample B (oxide). Differences in Mo−support interactions due to distinctly different oxomolybdate species likely account for the existence of three reduction peaks in the low-temperature reduction area in A (oxide). In sample B (oxide), differences in the precursor structures are small, so that Mo−support interactions are more uniform and result in one (broad) reduction peak. Consistently, the high-temperature reduction peak in sample B (oxide) is centered around 770°C and is ∼40°C lower than the corresponding peak in A (oxide), again indicating a weaker metal−support interaction in B (oxide). This type of interaction for sample B also leads to higher MoS 2 crystallites stacking upon sulfidation and is commonly agreed as a requirement for the formation of type II Ni−Mo−S phases. 25,61,62 Following, we used Raman spectroscopy to determine the nature of Mo and Ni oxide species present on the alumina surface ( Figure 2). 11 The Raman spectra of sample B (oxide) show a broad band at 950 cm −1 together with a shoulder at 860 cm −1 and two less intense bands at 360 and 225 cm −1 , which are considered to be the vibrational signature of octahedrally coordinated polymolybdate species. 63−65 The spectrum of sample A (oxide), exhibits in addition a band at 1047 cm −1 , which is assigned to the ν s (NO 3 ) stretching vibration of nitrate anions (originating from the nickel nitrate source). 66 The main bands at 980, 880, 600, 380, and 240 cm −1 are due to [PMo 12 O 40 ] 3− species. Detailed band assignments can be found in refs 67, 68. In addition, to the band position implied differences in coordination state, the shape of the bands themselves is worth noting. The widths of Raman bands are frequently positively correlated with the degree of crystallinity of the probed material, as such the sharp bands in the spectrum of sample A (oxide) point to mainly crystalline species, while the broad bands in the spectrum of sample B (oxide) indicate a higher degree of structural disorder. The Mo coordination difference observed above already exists in the respective impregnation solutions (Figure S4), and the reason for this difference is that Mo aggregation is a pH-dependent process yielding different oxomolybdates, e.g., MoO 4 2− and MoO 3 , as explained in Section 1.
Changes in Catalyst
Composition during Activation/Sulfidation. Temperature-dependent changes during 70,71 The spectra of samples A (oxide) and B (oxide) were fitted in the same way with oxidic contributions only. Figure 4 shows the sulfidation profile of samples A (sulfide) and B (sulfide). While the final sulfidation degree of Mo and Ni is comparable in both samples, the evolution toward this end value differs. For sample A (sulfide), the sulfidation rate of Ni is faster than that of Mo, from which we conclude that a substantial portion of Ni is sulfided before the formation of MoS 2 has started. This could mean that this part of Ni remains as isolated NiS x particles and will thus not participate in the formation of the catalytically active Ni−Mo−S phase. 12,46,72 In sample B (sulfide), the situation is reversed, i.e., Mo sulfidation precedes that of Ni, meaning MoS 2 particles already exist when Ni sulfidation starts, providing the right environment for Ni−Mo− S formation. 73 The concentration of NiMoS species in samples A (sulfide) and B (sulfide) as extracted from XPS (Table 3) verified this assumption.
Spatial Resolved Analysis of Sulfided and Spent Catalysts.
To investigate how the different preparation routes affect the catalyst structure and composition locally, sulfided and spent catalysts of types A and B, were characterized by means of PXCT and STEM/EDX. This combination of techniques provides information from the micron scale (PXCT) to the nanoscale (STEM/EDX). Figure 5 shows volume renderings and sagittal cuts through the PXCT-acquired electron density tomograms of catalyst A (sulfide and spent) and catalyst B (sulfide and spent). As the half-period spatial resolution of these tomograms is on average 40 nm ( Figure S3), local or direct compositional analysis is restricted to larger mesopores (and above) and or reliant on partial volume analysis utilizing a priori compositional knowledge. Partial volume effects refer to the occupancy of a single voxel by multiple, spatially unresolved, components, leading to a fractional occupancy-related electron density. 47 The theoretical electron densities of the main sample or catalyst components are 0.785 n e Å −3 for amorphous Al 2 O 3 and 1.1 n e Å −3 for NiMoO 2 . Further information can be found in Table S1. Based on these values, we consider that an electron density below that of amorphous Al 2 O 3 can be directly related to the degree of internal porosity. Similarly, an electron density above that of amorphous Al 2 O 3 provides information about the amount of MoS 2 clusters present within a selected voxel (see the Supporting Information for detailed tomogram analysis information). Adhering to this interpretation we can identify four compositionally distinct domains in the acquired electron density tomograms, which are spatially resolved pores, lowdensity (high internal porosity) alumina, high-density (low internal porosity) alumina, and areas rich in MoS 2 species, >30 vol %.
A metal deposition difference can be observed between samples A (sulfide) and B (sulfide) (Figure 5a−d). Visible in sample A (sulfide), are isolated areas or clusters rich in MoS 2 species (dark orange dots) and continuous circular domains rich in MoS 2 species, potentially caused by a diffusion-limited drying process and reflective of metal aggregation. These domains are absent in sample B (sulfide), displaying a more homogeneous distribution of MoS 2 . Further evidence of this can be found in the fact that the smaller clusters present in sample B have a lower This metal deposition behavior is also found in the tomogram corresponding electron density histograms ( Figure 6). Here, we see a shoulder in the high-electron-density region (bottom right area) for sample A (sulfide); while in sample B (sulfide), no such high-electron-density area is observed. From the PXCT data, we can infer that the metal dispersion in sample B (sulfide) is much higher than in sample A (sulfide). Looking at the sagittal cuts through the tomograms of these two samples, it can also be noted that the above-mentioned high-electron-density and MoS 2 -rich regions did not uniformly form throughout the alumina domain. Visible in the sagittal cuts is a diminishing electron density gradient from the larger pore space into highdensity alumina domains. This observed electron density gradient can result from particle aggregation at the boundary of high-and low-density alumina due to metal deposition limitation encountered upon entering these domains, i.e., a result of pore transport limitations. 47 These limitations appear to be much stronger in sample A (sulfide). No such gradient is observable in the low-density or high-porosity alumina domains.
Focusing on the spatially resolved pore structure we observe further differences between these two samples ( Figure S5). While the pores in sample B (sulfide) have a diameter between 60 and 420 nm, the pores found within sample A (sulfide) are slightly larger with a diameter distributed in the range of 60−760 nm. As the compositions of catalyst and support material are the same, we infer that this difference was caused by specific interactions of the impregnation solution with the alumina carrier, resulting in alterations of the pore structure. The unchanged pore size distribution between the sulfide and their respective spent samples ( Figure S5) provides further evidence that structural differences of the carrier between samples A (sulfide) and B (sulfide) were introduced during catalyst preparation. Moreover, it demonstrates that liquid phase sulfidation used in the gas oil test does not change the hierarchical structure of the catalysts, which makes a comparison between sulfide samples (gas phase sulfidation) and spent samples (liquid phase sulfidation) used for stability study, reasonable.
As shown in Figures 5 and 6, the electron densities of samples B (sulfide) and B (spent) are comparable, indicating that activity test conditions did not cause much difference to the structure of sample B (sulfide). For sample A we do observe a significant change following the activity test. Visible is a decrease of the high-electron-density components, the histogram peak previously associated with MoS 2 cluster changes in position from 0.9 to ∼0.8 n e /Å 3 and decreases in relative intensity. To understand why the high-electron-density region decreases, elemental analysis was conducted ( Table 1). The results show that the metal contents in sample A (spent) (9.16 wt % Mo, 2.57 wt % Ni) is lower than in A (sulfide) (10.42 wt % Mo, 2.81 wt % Ni), from which we conclude that metal, presumably in the form of MoS 2 species, was washed off under the gas oil activity test conditions.
As the here acquired PXCT data are limited in spatial resolution and chemical element insensitive, we mechanically fractured the catalyst pellets to obtain an electron-microscopycompatible specimen. Subsequent examination using STEM and EDX mapping allowed us to probe the effect preparation, activation and use had on the structure of the active MoS 2 platelets as well as the catalyst's local elemental composition. Figure 7a shows STEM images of samples A and B and provides information on the length distribution of MoS 2 platelets as well as their layers distribution. The mean length of MoS 2 platelets in sample A (sulfide) is 3.7 nm with an average 1.7 layers per particle; particles in sample B (sulfide) are smaller (3.2 nm) with a higher number of layers (2.0). The feature of small particle size and high degree of stacking per particle is often considered a prerequisite to the formation of type II NiMoS phases. 46 While the activity test had no effect on the mean length of MoS 2 platelets in sample A (spent), a slight increase in mean length can be observed in sample B (spent). Additionally, a notable de-stacking was observed: the average number of layers per particle decreased from 1.7 (A (sulfide)) to 1.3 (A (spent)) and from 2.0 (B (sulfide)) to 1.6 (B (spent)), Figure 7b. De la Rosa et al. 74 assign this de-stacking behavior during catalyst operation to the pressure applied in the HDS activity test, since at high pressure, the formation of multilayered stacks through van der Waals forces seems counterbalanced by the strong interaction of adsorbed substances favoring stabilization of single MoS 2 layers.
To probe the elemental distribution in the prepared catalysts on the nanoscale we referred to a combination of HAADF STEM and EXD mapping. Figure 8a shows the HAADF STEM image revealing the presence of two morphologically distinctly different phases in sample A (sulfide). EDX-derived compositional maps of Mo, Ni, and Al (Figure 8b,c) suggest these phases to correspond to unsupported NiMoS particles and isolated NiS x species. As suggested by XPS, this could be the result of the faster sulfidation rate of Ni under the chosen preparation conditions. In sample B (sulfide), the metals are dominantly homogeneously dispersed as expected for Ni−Mo−S particles. The spatial correlation of Ni and Mo, i.e., their dispersion behavior, can be quantified using a normalized Ni−Mo correlation degree extracted from EDX mapping data, as shown in Figure S8. Here it is shown that the average correlation degree in sample A (sulfide) is 0.7, while it is 0.9 in sample B (sulfide). After the gas-oil test, the average Ni−Mo correlation degree, for samples A (spent) and B (spent), decreased to 0.7 and 0.5, respectively. The origin of this decrease could be owed to an aggregation of MoS 2 and NiS x , during the gas oil activity test (see EDX data of samples A (spent) and B (spent) in Figures S6 and S7). Under the testing conditions, NiS x will have sufficiently high mobility to enable aggregation. 75
DISCUSSION
The comparative analysis of two "compositionally" identical alumina-supported NiMo hydrotreating catalysts, originating from different preparation methodologies, revealed pronounced differences in both catalytic activity and stability. Examination of the supported catalysts in the oxidic precursor state, the active sulfide, and post-use or spent state revealed these differences to be a result of compositional and structural modifications of both the support and the catalyst.
The oxidic precursors were characterized by means of TPR, XRD, and Raman spectroscopy. The results indicate that in samples A and B, Mo and Ni species exist on the alumina carrier surface in different coordination states. As a consequence, samples A and B exhibit varying degrees of metal−support interactions (Figures 1 and 2) and ultimately a distinct response to hydrogen reduction, as well as sulfidation, i.e., to the transformation of Ni−Mo−O into Ni−Mo−S phases. XPS results (Figures 3 and 4) show that both samples arrive at the same degree of Mo and Ni sulfidation at the end of the sulfidation process; however, the evolution toward the state is different. In sample A (sulfide), most part of Ni is sulfided before the pronounced formation of MoS 2 has started, so it is likely this Figure S8), extracted from the EDX mapping data, since a higher average degree of Ni−Mo correlation detected in sample B (sulfide) is the expected appearance for a sample consisting predominantly of NiMoS particles. This difference in sulfidation chemistry indicates that sample A (oxide) has a higher metal−support interaction than sample B (oxide), and this is reflected as a lower degree of stacking and larger MoS 2 particles in sample A (sulfide) as observed by STEM (Figure 7). Different metal coordination and metal−support interactions in the oxidic precursors are not the only changes caused by preparation conditions; changes in the (pore)structure of the alumina carrier are also observed. Pore diameter distribution values extracted from PXCT data ( Figure S4) show that sample A (sulfide) has larger pores compared to sample B (sulfide), most likely formed as the result of partial alumina dissolution during the preparation (impregnation) process. The pore diameter data derived from BET measurements underpin this assumption (Table 1); sample A (oxide) has a larger pore diameter than sample B (oxide). The alumina dissolution process can be considered to be a consequence of the formation of heteropolymolybdate, as initially proposed by Carrier et al. 76 They conclude that alumina dissolution occurs in the presence of molybdates, and especially in the pH range between 4 and 6. As mentioned in Section 2.2, the pH value of impregnation solution A is around 5 and does thus fall into this pH range, whereas the pH of solution B is <1. However, the pore size distribution of the sulfide samples with their respective spent samples is comparable.
Industrial hydrotreating catalysts are prepared and delivered as oxidic precursors, which makes it necessary to understand their structure. However, true understanding of catalytic performance lies in characterizing the structure of the active sulfides and relating their structures to performance data. To understand the 3D structure of our two sulfided samples, we used PXCT measurements to reveal metal dispersion and pore structure at the micrometer scale. PXCT is a technology that provides information on electron density differences in the measured samples. By comparing volume rendering and sagittal cuts of samples A (sulfide) and B (sulfide) (Figure 5), major differences can be seen; the uniform distributed electron density indicates that metals are well dispersed in sample B (sulfide), while for sample A (sulfide), besides the part of the welldistributed metals, two remarkable circular areas with high electron density, likely regions of increased metal (MoS 2 ) concentration can also be observed. However, based on the PXCT data, it cannot be decided if metal aggregation processes cause this high electron density. To get a better understanding of that, we performed STEM/EDX, where large unsupported MoS 2 and isolated NiS x patches were detected in sample A (sulfide) (Figure 8a). Hence, it is reasonable to consider that this part of metals, which according to our XPS data did not participate in the formation of Ni−Mo−S particles, exists as isolated MoS 2 and NiS x species. Likely, these species aggregate under operation conditions; similar types of larger, isolated MoS 2 and NiS x patches are observed in samples A (spent) and B (spent) (Figures S6 and S7). Furthermore, an obvious difference in electron density between samples A (sulfide) and A (spent) observed in the extracted PXCT data (Figures 5a,b,e,f and 6) indicates that metal domains in sample A (sulfide) are not stable under operation conditions. To verify if this change is due to a change of metal concentration during the catalytic activity test, we determined metal concentrations in the fresh sulfided and spent samples by means of ICP measurements (Table 1). Sample A (spent) contains 12.1% less Mo and 9% less Ni than sample A (sulfide), which means that this portion was washed off during the activity test. This was different for sample B; ICP data show similar metal contents in B (sulfide) (10.70 wt % Mo, 2.56 wt % Ni) and B (spent) (10.69 wt % Mo, 2.57 wt % Ni). Here, PXCT and STEM/EDX results both consistently show a Ni and Mo distribution pattern in line with the existence of NiMoS particles and confirm our interpretation of the XPS data that the metals are largely leveraged into forming active and sufficiently stable NiMoS particles. A summary of sample properties and differences is given in Scheme 1.
CONCLUSIONS
We have investigated two HDS catalysts (A and B) that are compositionally identical, same alumina carrier, organic additive, metal, and phosphorus loading, but that are prepared under different conditions, i.e., different sources of Mo and Ni were used for the impregnation solutions. While the impregnation solution of sample A was made from (NH 4 ) 6 Mo 7 O 24 and Ni(NO 3 ) 2 as metal sources, MoO 3 and NiCO 3 were used in the case of sample B. The structural differences between these two catalyst samples, caused by changes in preparation conditions, were studied in detail by
Scheme 1. Summary of Samples Properties
The Journal of Physical Chemistry C pubs.acs.org/JPCC Article characterizing the oxidic precursors, the sulfided and the spent state of these catalysts by means of TPR, XRD, Raman spectroscopy, PXCT, STEM, and EDX. Our results indicate that the alumina pore structure is quite different between these two samples after impregnation and larger pores are formed in sample A (oxide) during preparation. Further, sample B (oxide) has a higher reducibility of the metal species and a weaker metal−support interaction. After sulfidation, the metals are well dispersed on the alumina carrier surface and primarily occur as active NiMoS particles, which are stable and do not visibly change during operation. Sample A (sulfide), however, is very different. Only part of the metal converts into active NiMoS particles, while the remaining metal occurs as unsupported MoS 2 and isolated NiS x clusters, which are either washed off during operation or aggregate into larger, unsupported NiMo domains. This structural description explains the enormous activity differences between these two samples and explains why sample B is much more active than sample.
In this work, we revisited a problem of catalyst preparation and identified a critical performance driver that was largely overlooked in many studies, i.e., the chemicals used to design an impregnation solution. Not only could we show that impregnation solutions based on (NH 4 ) 6 Mo 7 O 24 �a source of molybdenum used in many studies�yield catalysts of low performance, but we were also able to explain this finding by structural properties and visualize those in the 3D space. Pushing catalyst innovation in a mature technology like hydrodesulfurization is challenging and requires in-depth understanding of catalyst activity relations across different length scales of catalyst structure. ■ ASSOCIATED CONTENT
* sı Supporting Information
The Supporting Information is available free of charge at https://pubs.acs.org/doi/10.1021/acs.jpcc.2c05927. Volume rendering ptychographic tomograms; shown are volume reconstructions of and cut slices through the retrieved electron density tomograms of sample A (sulfide) (AVI) Volume rendering ptychographic tomograms; shown are volume reconstructions of and cut slices through the retrieved electron density tomograms of sample B (sulfide) (AVI) Volume rendering ptychographic tomograms; shown are volume reconstructions of and cut slices through the retrieved electron density tomograms of sample A (spent) (AVI) Volume rendering ptychographic tomograms; shown are volume reconstructions of and cut slices through the retrieved electron density tomograms of sample B (spent) (AVI) Extended methods section regarding PXCT sample preparation, data acquisition, data reconstruction, the methods for resolution and dose evaluation as well as tomograms and EDX analysis; physisorption results of two oxides; SEM of one of the examined NiMo catalyst pillars; Fourier shell correlation (FSC) line plots of the electron density tomograms of examined catalyst pillars; Raman spectra of two respective impregnation solutions; normalized frequency pore size distributions for samples A (sulfide), B (sulfide), A (spent), and B (spent); highangle annular dark-field (HAADF) STEM image and energy-dispersive X-ray spectroscopy (EDX) maps of molybdenum (Mo), nickel (Ni) and aluminum (Al) corresponding to the STEM image of samples A (spent) and B (spent); and normalized Ni−Mo correlation degree of samples A (sulfide), B (sulfide), A (spent) and B (spent) extracted from EDX mapping data (PDF) ■ AUTHOR INFORMATION | 9,216.2 | 2022-10-24T00:00:00.000 | [
"Chemistry"
] |
Safe trajectory planning for autonomous intersection management by using vehicle to infrastructure communication
The development of autonomous vehicle or self-driving car integrates with the wireless communication technology which would be a forward step for road transportation in the near future. The autonomous crossing of an intersection with an autonomous vehicle will play a crucial role in the future of intelligent transportation system (ITS). The fundamental objectives of this work are to manage autonomous vehicles crossing an intersection with no collisions, maintaining that a vehicle drives continuously, and to decrease the waiting time at an intersection. In this paper, a discrete model of the one-way single intersection is designed. The vehicle-to-infrastructure (V2I) communication is implemented to exchange information between a vehicle and an intersection manager which is the roadside infrastructure. The safe trajectory of autonomous vehicles for the autonomous intersection management is determined and presented by using discrete mathematics.
Introduction
The vehicle technology has grown rapidly in the past decade. Several systems have been installed into commercial vehicles to assist the driver to provide a more comfortable drive, including improving of the safety of the driver, passenger, and the pedestrian or cyclist. Recently, there has been a highly increased activity in the development of the autonomous vehicle research, which was initiated in 2005 by the first competition of autonomous vehicles. The Defense Advanced Research Projects Agency (DARPA) Grand Challenge was organized. In 2007, the DARPA Urban Challenge showed the progression of the autonomous vehicle. Several teams successfully developed a vehicle that has the ability to drive itself and achieve the assigned task. As a result, the self-driving car or autonomous vehicle is now successfully developed by many research groups either in universities or more recently by private companies [1][2][3]. They proved the performance of driving in a real-traffic environment, autonomously with the capability of safety. It showed that the use of this technology is possible in the near future. Already, some cities in the USA currently allow the autonomous vehicle to drive on the same street as other vehicles, legally. Furthermore, the increased use of wireless communication technology is making a huge contribution to applications with the cooperation of multiple robots. Many multiple robots and cooperation researches were focusing on the mobile robot application, e.g., robot soccer, task allocation, area exploration, robot formation, and swarm robotics [4][5][6][7]. With the existing technology of wireless communication, the application of autonomous intersection management (AIM) (Additional file 1) is possible. The safety of driving is the first priority of road transportation. An intersection, especially, is considered one of the high-risk places for accidents to occur. In addition, traffic congestion is also very important and serves as the second objective of the traffic management.
There are two different approaches from previous research studies that developed the collision avoidance at an intersection: with and without communication. Without communication, the stand-alone autonomous vehicle is equipped with several sensors to measure its state, e.g., GPS, position, orientation, and velocity, which is now in general use. Also used, an environment sensor to measure the vehicle's surrounding, e.g., the laser range finder, radar, and camera, were used to locate the static and dynamic obstacles around a vehicle and then plan a collision free path by using the stop and go technique [8,9]. With communication, [10] presented the latest wireless communication standard for vehicle communication. The IEEE 802.11p with the spectrum of 5.9-GHz band, dedicated short-range communications (DSRC) is the standard that was developed for use only with vehicle communication. The time scheduling method by means of intelligent agents was introduced in [11]. It determined the arrival time of a vehicle and the time that a vehicle would stay at an intersection by sharing the state information and then passing it back to the following vehicle by using mobile ad hoc networks. Hafner et al. [12] presented the automated collision avoidance at an intersection between two vehicles, using vehicle-to-vehicle communication (V2V). It used V2V to share the state information of two vehicles to find the potential collision area and then control the longitudinal velocities of both vehicles to prevent them from reaching the collision area at the same time. Similarly, [13] proposed the method of sharing the state of vehicle inertia to create the navigation function that creates a safe and smooth path without fully stopping at the intersection. Sheng et al. [14] propose the method of intersection collision groups: each vehicle broadcasts its collision situation based on the path selection. When a vehicle reaches the communication range, the collision free path was determined by comparing the initial member of collision group and of another incoming vehicle. The driving speed was cooperatively calculated for the safe crossing of the intersection. The proposed concept in [15] consists of intersection geometry, to map the collision region by using the first-come first-serve (FCFS) principle to manage a vehicle crossing an intersection. Another method is using vehicle-to-infrastructure communication (V2I). Bruns and Trächtler [16] and Bruns [17] used the concept of network flow to model the intersection. The intersection was separated into small equal connected sections. The incoming vehicle has to reserve nodes based on the selected route, and the safe trajectory is determined by using dynamic programming. This resulted in the centralized control principle. Moreover, the extended study in [18] was considering optimization of multiple objectives to improve driving efficiency. The fuel consumption and duration of a journey were minimized by using the technique of dynamic programming.
Autonomous intersection management with the 'call ahead' concept was presented by [19][20][21]. Every car must send a reservation message to the intersection manager, and it will check the availability of the requested space. If the requested message is not in conflict with the intersection policy, a car is allowed to pass through the intersection. Otherwise, the car has to generate and send new request messages until it gets the permission from the intersection manager or, in the worst case, stop before entering the intersection. From the sample research works, the common parameter information that is shared is mostly the state of the car. Then, the possible collision event is computed and the velocity is controlled, speeding up or slowing down, to avoid the collision scenario.
In this paper, the authors propose the methodology of planning the safe trajectory for crossing an intersection, including improving the capacity of the intersection. Aforementioned works proposed the methodology of autonomous intersection management with the main focus of using artificial intelligent, supervised rulebased or machine-learning technique. On the other hand, we presented the different concept for managing vehicles crossing an intersection. The discrete time event was implemented in order to determine the safe trajectory so that the trajectory of each vehicle can be computed deterministically, the position of a vehicle can be controlled exactly to a particular time. The discrete model of a single intersection was presented. In addition, the idea of green wave, where a vehicle is able to continuously drive through an intersection, was investigated in the area of adaptive traffic light research. We have applied this concept for our autonomous intersection management. However, the traditional traffic light will be replaced by the intersection manager. Our approach relies on the exchange of information between an incoming car and the intersection manager. To coordinate a car to the intersection manager, the following message protocol has been designed: a vehicle sends a message to the intersection manager to request the state of intersection. The intersection manager will check its state and whether it is occupied by a previous vehicle. Then, it will update the time index and reserve it for the incoming vehicle. It will then return the message of the time index back to the incoming vehicle. The management mechanism is able to express the information as a personal, virtual traffic signal. Each vehicle will get an individual identity time index in which to occupy the intersection in order to then plan the safe trajectory to reach each node within the given time. The simulation of autonomous intersection management for a single intersection is developed. The results show the improvement in velocity capacity and the traffic flow rate of an intersection compared to the traffic flow model.
Intersection model
In this work, the symmetrical four-way intersection with a single lane for each incoming and outgoing street is represented. The physical shape of this intersection is composed of four connected streets. Every street shares the common characteristic of driving direction. There are three possible choices of driving routes for each street that are composed of left, straight, and right directions. The model of the street and intersection is illustrated in Figure 1: where r x is the number of possible routes of the fourway intersection, x is the driving direction, r 1 is left direction, r 2 is straight direction, and r 3 is right direction.
The assumption is that every vehicle on each of the four connected streets is able to select the route independently. For this reason, there are 12 combination patterns in total. The total combinations of route choices of this intersection model can be expressed by the product of route choices and the total number of streets: s m ¼ s 1 ; s 2 ; s 3 ; s 4 1; 2; 3; 4∈m; m∈ℝ þ g j f ð 2Þ n f ¼ 12 s m ⋅r x g j f ð3Þ where s m is the total number of street of an intersection, m is street members of an intersection (North, East, West, South), and n f is the total number of possible routes.
For the simulation purpose, the vehicle's dynamics are configured based on the geometry of the intersection. The maximum allowance of driving velocity of a vehicle is limited at 120 km/h, and the presumed average velocity is at 80 km/h and the minimum is set at 0. The maximum acceleration is set at 2 m/s 2 . It assumed based on the changing velocity from average to maximum in 5-s time interval. For deceleration, in the same way, the decreasing of velocity from average to 0 is able to set the value of deceleration at 4 m/s 2 . In addition, the flow input to the intersection from each side of the street is limited at the maximum of 2,000 vehicles/h, according to the traffic flow model in [22,23].
Autonomous intersection management
The traditional traffic light system works on the principle of centralized control, i.e., a vehicle must stop when the light is red and it can go ahead when the light is green. Drivers will plan trajectory based on their visual data. On the other hand, autonomous intersection management is a fully autonomous system. Technically, autonomous intersection management relies on the communication between vehicles and the intersection manager. It will replace the traffic light with the intersection manager as well as replacing the typical vehicle with the autonomous vehicle. The intersection manager has the ability to communicate wirelessly with every incoming vehicle. Likewise, the vehicle also has the same feature, in order to transmit and receive information to intersection manager. The responsibility of the intersection manager is that it will prioritize the timing index, corresponding to the occupied space and tell a vehicle when it can pass through the intersection, based on the incoming, requested message from vehicles. In the same way, an autonomous vehicle will follow the policy from the intersection manager strictly and accurately. The trajectory will be planned based on the returned, available timing index from the intersection manager. The management mechanism is similar to the personal, virtual traffic signal. Every vehicle will get the personal timing index from the intersection manager and drive according to the received policy. The V2I communication is the tool in which the requested message from vehicle to the intersection manager is delivered and vice versa. The message protocol is defined in the section below.
Crossing intersection problem
In order to cross an intersection, the nature of the problem is the resource sharing. In this case, the resource is concerned as a space and how vehicles use the limited space together. Therefore, the problem will be dealing with space and time. In practice, a vehicle is allowed to drive over the intersection area following the traffic signal. That is, the method to manage several vehicles to use the intersection area at different points of time. For the sample scenario, there are two vehicles on the different streets. The red vehicle (no. 1.) drives on the West Street and plans to go to the North Street. On the other side, the green vehicle (no. 2.) drives on the East street and the destination is the West street. The trajectory of both vehicles is clearly crossed over. Therefore, the collision can occur while vehicle no. 1. is turning left and vehicle no. 2. goes straight and both vehicles arrive at the confliction point at the same time. The general scenario of crossing an intersection is illustrated in Figure 2a: where P * is the coordinate of the confliction point, x * is the position in x direction, y * is the position in y direction, x 1 and x 2 are the x positions of vehicles no. 1 and no. 2, respectively, and y 1 and y 2 are the positions of vehicles no. 1 and no. 2, respectively.
Vehicle-to-infrastructure communication protocol
There are several works in vehicle inter-communication by using wireless communication. In this work, we implemented wireless communication between a vehicle and an infrastructure by using the normal standard of wireless local area network (WLAN, IEEE 802.11) for computer communication. User datagram protocol (UDP) together with a broadcasting technique is used to communicate between a vehicle and the intersection manager.
In order to limit the number of vehicles communicating with the intersection manager, a vehicle will start sending a message when it reaches the designated range of communication. The communication region is set at the radius of 100 m away from the center of the intersection as illustrated in Figure 2b. The intersection manager will be polling the message and updating its state every 0.1 s or with in10-Hz frequency. Then, it returns the computed timing index, with the maximum and minimum acceleration allowance back to the requested vehicle. The requested message package from a vehicle contains the following six information: Vehicle identification code (vehicle ID): it is used to identify that a vehicle is present and to prevent the wrong determining vehicle, the Internet protocol (IP) address or the media access control (MAC) address can be used to represent a vehicle. Location: the position information integrates with the digital map of the local street containing the current street where a vehicle is located. It is used to determine the approaching vector of a vehicle to an intersection. Destination: the information containing the expected street where a vehicle will drive to. The direction of travel can be computed through corresponding the current location. The intersection manager extracts the message information into the proposed parameters. The arrival time to the intersection and the leaving time are determined based on the provided information of the successor vehicle and the predecessor vehicle. The intersection will be updated and the new time slot will be transmitted to the requested vehicle. The V2I communication mechanism is illustrated in Figure 3.
Discretizing intersection
Following the crossing-an-intersection problem, a way to manage vehicles crossing intersection without using traffic light control is to manage the time interval of using an intersection space for incoming vehicles. This problem is expressed with the discrete time event, where the space and time can be solved deterministically. The reason is that the space of an intersection is constant, and the required output is time of possession corresponding to the specific reservation space. The exact position of a vehicle can be calculated deterministically at every time step by the given, inputted velocity. Then, it is able to guarantee that the intersection space will be reserved by only one vehicle at a time.
As mentioned earlier, the nature of crossing an intersection is generally to manage multiple vehicles not driving over the same area at the same moment of time. If we can then calculate the exact time that a vehicle can drive through the conflict area and a vehicle is able to follow that policy, the collision will not occur. Then, the problem of crossing an intersection can be modeled as a discrete problem. With the proposed intersection model, there are two processes of discretization, composed of distance discretization and time discretization.
Distance discretization: the intersection is firstly discretized into a section of distance. And every section is represented by each node of a discretized distance. It contains position coordinates of each discretized distance. In addition, each node is connected with an edge that is a discretized distance and the whole travelling distance is equal to the summation of the total discretized distances: where i is the index of a vehicle, k is the discretized step, and f is the final step of the discretization. s i,k is the discretized distance of step k of the trajectory of a vehicle.
S i,f is the total travelling distance of a vehicle. P i,k is the discrete position of a vehicle, and with respect to the Cartesian coordinate, it is the function of each discretized distance. The desired trajectory of both vehicles in the proposed scenario is able to discretize into a set of connected nodes. The problem of the space reservation will transfer the location to the network of nodes. The illustration of the intersection distance discretization is shown in the Figure 4. Concerning time discretization, if there were a timing problem, an accident is able to occur if, and only if, the vehicles meet each other at a specific point of the intersection at the same time. To prevent that situation, time is discretized. The discretizing time step is constant. Time will be discretized into small steps, and the summation of the total discretized time will equal to total travelling time. Time discretization criteria can be written as the following equations: where t k is the time step, T f is the total travelling time, and Δk is the discretizing time. The time discretizing model corresponding with the distance discretizing model for crossing an intersection of both vehicles is illustrated in the Figure 5.
The trajectory of each vehicle is planned by the vehicle itself based on the returned timing index from the intersection manager. The possession time of each node is calculated and accumulated from one discretized section to the next discretized section. The process is so that when a vehicle is reaching the designated communication region of the target intersection, it has to send a requested message to the intersection manager by wireless communication. From the V2I message protocol, the proposed information is extracted. This information, along with the current state of intersection, is then used to generate the trajectory of a vehicle. In Figure 6, the discrete trajectory of two vehicles is shown while they are crossing an intersection. The vehicle's trajectory was plotted together with the lateral and longitudinal distance and time discretization.
The process of reservation is made through iteratively calculating the nodes parameters, coordinate data of distance, and time of the incoming vehicle. The illustration of the node reservation is shown in Figure 7. The horizontal axis is the discretized distance. The vertical axis is the discretized time. The occupied nodes are shown by red, cross circles, and the blue circles represent the free nodes. In this case, there is no reservation of the current node from the predecessor, it means that the state of intersection is free to reserve for that period of time. The successor has the right to occupy the required nodes by setting the time of possession to the nodes based on its desired velocity. The distance discretization process will divide the total travelling distance (s f ) into small connected sections (s k , s k + 1 , …, s k + i | ∀ s k ∈ s f ), and the required set of nodes will be determined based on the received information from a requesting vehicle. The reservation nodes are obtained with the information of time and distance (N[s i, k , t i,k ]). Integrating with the information of the current velocity of a vehicle, the average travelling time (t f ) to the destination can be determined by the relationship of the linear motion principle. In addition, the discretization of time into equally small timing steps (Δk) is applied to assign the timing index of each node to the corresponding discrete distance. The size of a vehicle is taken into account, when computing the number of nodes to be used and reserved.
On the other hand, the incoming vehicle is not allowed to reserve a node which has already been reserved for the previous vehicle. In order to make a successful reservation, the timing index for the specific node for the successor is shifted. The time is increased with respect to the predecessor timing index by the discretized time step, until the node is free to reserve. The cost of node is defined as the function of the accumulated time given by a specific node and the relative time between successor and predecessor. Generally speaking, the cost of node indicates the absolute value of time until the node will be released or it means until a vehicle has already left the intersection.
The time of possession of the successor is dependant on the situation of the predecessor. In the same way, the system determines the successor state based on the predecessor state. Then, it is able to consider the system as the first order system. According to the first order system property, the forward Euler method is used to update the timing index of the successor node. The term of the prior time is defined by the progression of the possession time which depends on the velocity of the predecessor itself. The time of possession of the predecessor is counting down along the increasing of accumulated distance until it is vanished when a vehicle has already passed the occupied node and the state of the node will be changed from an occupied node to a free node. Comparing to the predecessor state, the relative time between the predecessor and the successor is determined at the same reference distance after the message of the successor was received. Therefore, the absolute possession time of each node is iteratively updated until the predecessor vehicle has left the intersection. The time update can be written as the following equations: where i is the predecessor node, j is the successor node, t À k is the prior timing index of the successor, t k is the posterior timing index of the successor, Δk i is the accumulative time step of the predecessor node, s k is the discretized distance, v is the average velocity, and T d is relative time between the predecessor and the successor determined from a reference distance.
The recursive determination is required to find a solution of this discrete problem. The tool that we used to implement for the node reservation is dynamic programming (DP). Dynamic programming is frequently used for solving complex problems by breaking them down into several sub problems. It then solves each sub problem, part by part, and combines those solutions. Similarly, dynamic programming can deliver the optimal solution. It looks into all possible solutions of the problem and will select the best solution, e.g., finding the shortest path between two points is the most popular application of DP.
Therefore, dynamic programming is appropriate for solving the proposed discrete problem. It is used to find the trajectory of a vehicle at every discretized time step. Shown in the presented scenario, is the classic problem of crossing an intersection. The node reservation method is able to provide the safe trajectory of vehicles while crossing an intersection. The pseudo algorithm of node reservation for intersection management is calculated by using dynamic programming which is provided in the Algorithm 3.1. The management mechanism relies on the communication between vehicles and the intersection manager. To prevent the message crashing, the principle of FCFS is implemented for ordering the message queue. The intersection manager will make a service based on the sequence of the received message from vehicle.
The discretization of distance and time combined with the node reservation allows the possession time of required nodes to be calculated and then reserved. The state of an intersection is updated and waits for the next iteration from the next vehicle's request. The intersection state was reserved by the predecessor vehicle, and the time of possession was registered. The following vehicle is allowed to reserve the node after the possession time of the predecessor vehicle has expired. For this reason, both vehicles are at different places or nodes at the same time. The result of the vehicle's trajectories while crossing an intersection which are computed by using DP is illustrated in Figure 8. It is a time-distance plot. The horizontal axis indicates the discretized distance, and the vertical axis indicates the discretized time. The node reservation of a vehicle is illustrated. The trajectory of the first vehicle is showed in red and the following vehicle is showed in green. Gray represents the predecessor vehicle that has already left the intersection.
Simulation results
According to the focus of this work, the cooperative trajectory planning algorithm, the simulation is implemented based on the proposed method, regardless the technique of communication. Since the communication mediums are considered as tools to exchange the information between vehicle and infrastructure, any communication standard, which provides the fit qualification, can be applied to autonomous intersection management. In this work, the Internet protocol has been used for communicating between vehicle and infrastructure. The wireless local area network with UDP protocol is implemented. In general, a computer is set as the vehicle server in order to generate the requested messages and send to the intersection manager over the WiFi, IP address. Meanwhile, another computer is the intersection Update state k++ Return manager for simulating the autonomous crossing. However, the option of the local host IP address 127.0.0.1 with different broadcast communication ports is used for communication in this simulation to run a simulation on a stand-alone computer, four communication ports for vehicles from each single street and another port for the intersection manager. The communication will be updated every 0.1-s time interval or in 10-Hz frequency.
The four-way intersection with a single lane of incoming and outgoing traffic is used as the reference model in the simulation scenario. The traffic flow rate is able to configure from the minimum 1 vehicle/h up to the maximum 3,000 vehicles/h. The maximum velocity is allowed at 100 km/h, and the minimum is set at 0 km/h. The range of communication is set at 100 m, radius from the center of the intersection. The parameters in this simulation are set through the following configuration. The traffic flow rate in this simulation is assumed to be homogeneous. The balance of the traffic flow is presumed by setting the same amount of flow rate to every incoming street. The inputted flow rate from each incoming street is configured at 1,500 vehicles/h. Therefore, the estimated gross flow rate, number of vehicles that will cross an intersection, will be equal to 6,000 vehicles/h. However, there is no fixed configuration of the route plan. Every vehicle on each street is able to select its own route randomly. The intersection manager only determines the priority of crossing an intersection without forcing to change the original route plan from the vehicles.
On the assumption that the communication between vehicles and intersection manager is performed by using host IP address, for this reason, the communication time between them is considered very small and it can be neglected. In addition, assuming that there is no incomplete data in transmitting and receiving, thus, the package loss throughout the communication is not investigated. In this simulation, the safe trajectory of vehicles while crossing an intersection is the main interest. The resulted trajectories are collected from vehicles on each street drive throughout an intersection. The communication radius is the initial distance of the vehicle trajectory. The intersection border is represented by the red line. The plot shows that the safe trajectory is guaranteed, even though vehicles are entering the intersection at the different times. The resulted plot of trajectories, of vehicles, while crossing an intersection is shown in Figure 9.
The second observed parameter is the average driving velocity of vehicles. The data was collected from a total of 40 vehicles that crossed intersection. Ten vehicles from each street were sampled, and their velocity data was collected. The average velocity is determined by arithmetic mean of the whole data from start to the end of travelling. From the data, the average velocity of vehicles on the North street is 82.9 km/h, on East street is 82.3 km/h, on West street is 76.4 km/h, and on the South street is 85.6 km/h. Then, the average velocity for crossing the intersection with four incoming streets is equal to average of the average velocity of each street, which is 81.8 km/h.
From the result, vehicles on each street can drive with nearly the same level of average velocity. On the other hand, it can be interpreted in the terms of total travelling time. When the total travelling distance is equal and vehicle drive is close to the average velocity, the average total travelling time will be nearly the same. The average velocity of vehicles on four streets, North, East, West, and South, are plotted in Figure 10. The third observed parameter is the crossing intersection time. The crossing time is defined as the time the vehicle took to drive from the initial distance until finished crossing an intersection. The data was collected by the intersection manager. From the data, the maximum crossing time is 6.76 s, minimum is 5.26 s, and the average crossing time is 6 s. The result showed that vehicles spent almost the same time in crossing the intersection. The number of stops at the intersection was not found. All vehicles crossed the intersection continuously without stop at the configured level of traffic flow. The average crossing time at the intersection is shown in Figure 11.
The relationship between traffic flow rate and average velocity is shown in Figure 12. The data was plotted between flow rate in horizontal axis and average velocity in vertical axis. The flow rate was determined based on the traffic flow theory using traffic density data that was collected by counting the number of incoming messages of the requested vehicles. According to the traffic flow model [22,23], the average velocity will decrease when the flow rate is increasing before the density will reach the critical value. In general, after the flow rate of 3,000 vehicles/h, average velocity will gradually decrease. However, the result showed that all vehicles still can maintain the higher velocity in the higher flow rate zone. In short, it can be expressed that the throughput of the system is increased because vehicles can use the higher velocity compared to the traffic flow model.
Conclusions
The fully autonomous intersection management system is not widely implemented due to several factors. The first obvious factor is that the autonomous vehicle itself is not ready for operation on real roads. However, the development of autonomous vehicles is progressing very well and it has recently been approved for use on public roads. In addition, the wireless communication for vehicles is not currently installed in the commercial vehicle. Most of the research in traffic management has been working on intelligent traffic signal control because the traffic light infrastructure system already exists. It focuses on increasing the performance of the traffic light system by adapting the timing of light signal. The period of red and green light timing is adapted based on the current traffic. Another approach has been working on the improvement for traffic safety, collision avoidance system, for example.
On the other hand, in this work, we try to develop a completely autonomous system for the concept of future, intelligent transportation. The primary objective of this work is to build a system that guarantees the collision-free crossing of an intersection and, as a secondary purpose, alleviates the traffic congestion. The standard of wireless communication for a vehicle has been recently introduced [8]. We implemented the methodology for an autonomous intersection management through the use of V2I communication. The communication protocol is designed, and the node reservation algorithm is implemented. The concept of virtual personal traffic signal is introduced. Each vehicle will get an individual, given time from the intersection manager. The discrete mathematics is applied to model the crossing intersection problem, and dynamic programming is used to calculate the trajectory of a vehicle. The simulation program for a single intersection is developed based on the proposed methodology. The result shows the successful cross of an intersection, without a collision. Furthermore, all vehicles are driven continuously. It can be expressed that the waiting time at an intersection is decreased compared to the traditional traffic light. The limitation of this work is that the simulation is able to simulate only a single intersection. We will extend this work to multiple intersections in the future work.
Future work
In the real environment of the road traffic, there is not only a single intersection. A lot of connected intersections cause the road networks to be very complex. The traffic management for multiple intersections is necessary for studying the traffic behavior at the microscopic level. To manage the traffic flow of multiple intersections, the coordination between neighborhood intersections, infrastructure-to-infrastructure communication (I2I) will be implemented in future work. Furthermore, the traffic flow theory will be investigated to observe the macroscopic traffic behavior.
Additional file
Additional file 1: Autonomous intersection management. | 7,905.8 | 2015-02-19T00:00:00.000 | [
"Computer Science"
] |
Induction of Tolerance and Immunity by Dendritic Cells: Mechanisms and Clinical Applications
Dendritic cells (DCs) are key regulators of immune responses that operate at the interface between innate and adaptive immunity, and defects in DC functions contribute to the pathogenesis of a variety of disorders. For instance, cancer evolves in the context of limited DC activity, and some autoimmune diseases are initiated by DC-dependent antigen presentation. Thus, correcting aberrant DC functions stands out as a promising therapeutic paradigm for a variety of diseases, as demonstrated by an abundant preclinical and clinical literature accumulating over the past two decades. However, the therapeutic potential of DC-targeting approaches remains to be fully exploited in the clinic. Here, we discuss the unique features of DCs that underlie the high therapeutic potential of DC-targeting strategies and critically analyze the obstacles that have prevented the full realization of this promising paradigm.
INTRODUCTION
Immune responses result from a complex interplay between the innate and adaptive immune system. Dendritic cells (DCs) are an important subset of antigen-presenting cells (APCs) that specialize in priming different types of effector T cells and, thus, tailor the outcome of an immune response and having a central role in the immune system with a unique ability to control both immunity and tolerance. Compared to other APCs such as macrophages and B cells, DCs are considered the most efficient APCs capable of efficiently processing and presenting exogenous antigens on both MHCII and MHC I molecules to naïve CD4 + and CD8 + T cells, respectively, thus initiating the adaptive immune response. DCs were first discovered in 1973 by Ralph Steinman, who was awarded a Nobel Prize in 2011 for that discovery. DCs comprise a heterogeneous population of bone-marrow-derived cells that are seeded in all tissues. Five major types of DCs can be distinguished: plasmacytoid DC (pDCs), type 1 conventional DCs (cDC1), type 2 cDCs (cDC2) also referred to as myeloid DCs (mDCs), Langerhans cells and monocyte-derived DCs (MoDCs) (1)(2)(3)(4), which differ in their phenotype, localization, and function as summarized in Table 1 (2,(5)(6)(7). In peripheral tissues, DCs capture antigens using different mechanisms. DCs loaded with antigens subsequently migrate into the draining lymph nodes via afferent lymphatics, where peptides loaded on DCs histocompatibility complex (MHC) class I and II molecules to be recognized by T-cell receptor (TCR) on T lymphocytes (8). Immature DCs (iDCs) can present self-antigens to T cells to maintain immunological tolerance either through T cell deletion, induction of T cell anergy or the differentiation of regulatory CD4 + CD25 + FoxP3 + T cells (Tregs) (9). After encountering appropriate stimuli, DCs differentiate into mature DCs, which are characterized by a decrease in endocytic activity, upregulation of MHC class I and II molecules and costimulatory molecules and responsiveness to inflammatory chemokines (10). Mature, antigen-loaded DCs promote the differentiation and activation of T cells into effector T cells with unique functions and cytokines profiles by providing immunomodulatory signals through cell-cell contacts and cytokines (8,11). As a result of the progress made by research studies worldwide, there is now evidence of a central role for DCs in initiating antigen-specific immunity and tolerance, which has been widely translated into different approaches for vaccine design in preclinical and clinical programs (12)(13)(14)(15).
DCs SUBSETS
DCs comprise two major classes: plasmacytoid DCs (pDCs) and conventional or classical DCs (cDC) ( Table 1) (11,16). pDCs represent a small subset of DCs which accumulate mainly in the blood and lymphoid tissues and enter the lymph nodes through the blood circulation. For maturation, pDCs selectively express activating FcR as well as Toll-like receptor 7 and 9 (TLR7 and TLR9). On the contrary, they express low levels of MHC class II and costimulatory molecules in the steady state. Upon recognition of foreign nucleic acids, they start to produce type I interferon (1,11). pDC-derived IFNα can also induce the activation of other DC subsets or B cells into plasma cells via cytokines and surface signaling (17). cDCs form a small subset of tissue hematopoietic cells present in most of lymphoid and nonlymphoid tissues, where they constantly acquire tissue and blood antigens. cDCs excel in priming naïve T cells due to their superior ability to migrate loaded with antigens to T cell one of lymph node and to process and present antigens. Moreover, cDC1 have a unique potential to induce cellular immunity against intracellular pathogens and malignant cells due to the processing and crosspresentation of exogenous antigens on MHC class I molecules to activate CD8 + T cells and T H 1 cells. On contrary, cDC2 are known potent inducers of CD4 + T cell response (1,11). MoDCs mainly differentiate from monocytes in peripheral tissues during inflammation and induce context dependent differentiation of CD4 + T cells into T helper 1 (T H 1), T helper 2 (T H 2) or T helper 17 (T H 17) cells (7).
DC ACTIVATION
DCs in the resting state are considered to be immature but primed to acquire pathogen-associated molecular patterns (PAMPs) and damage-associated molecular patterns (DAMPs) in situ through a variety of surface and intracellular receptors, namely (1) cell surface C-type lectins, (2) surface and intracellular TLRs, and (3) intracellular helicases that recognize nucleic acids, such as retinoic acid-inducible gene I (RIGI) (18) ( Table 1). iDCs are potentially tolerogenic due to their capacity to facilitate the suppression of autoreactive T cells and the clonal expansion of Tregs, which might be addressed in the manufacturing of DC-based vaccines for autoimmune disease treatment (19) (Figure 1). DCs undergo a series of phenotypic and functional changes upon exposure to activation signals, leading to their maturation (10). This process is associated with the following events: (1) downregulated antigen-capture activity, (2) increased expression of surface MHC class II molecules and enhanced antigen processing and presentation, (3) increased levels of chemokine receptors, e.g., CCR7, which allows migration of the DC to lymphoid tissues; (4) increased expression of costimulatory molecules associated with the capacity to stimulate or suppress T cells through different signaling axes: CD80/CD86-CD28, CD40-CD40L, OX40L-OX40, ICOSL-ICOS and galectin (GAL)9-TIM3, CD80-CTLA4, PDL1-PD1, PDL2-PD1, respectively ( Figure 2); and (5) enhanced secretion of cytokines and chemokines, leading to the development of an immune response T cell subtypes, e.g., CD4 + T cells such as T H 1, T H 2 and Tregs (8, 20) (Figure 1).
INDUCTION OF T CELL TOLERANCE vs. ACTIVATION BY DCs
Different DCs subsets are specialized to capture and process antigens that are presented on MHC molecules and recognized by T cells, resulting in final clonal T cell selection leading to a wide T cell repertoire as summarized in Table 1 (21). Among DC subsets, pDCs show relatively limited priming of naïve T cells, unless stimulated to induce CD8 + T cells (22). Conversely, cDC1 provide efficient processing and cross-presentation of exogenous antigens on MHC I molecules to activate CD8 + T cells and T H 1 cell responses as a response to tumor cells or intracellular pathogens (23,24) and cDC2 are known to be inducers of CD4 + T cell responses (25,26). Importantly, MoDCs can be generated to promote context-dependent differentiation of CD4 + T cells toward a T H 1, T H 2, or T H 17 phenotype (27). This variety of T cells represents an infinite tool for specific therapies that increase or decrease T-cell function. The efficient activation of naïve T cells requires the following: (1) binding of the TCR to the peptide-MHC complex on DCs, (2) the interaction of costimulatory molecules at the interface between DCs and T cells, and (3) additional signals from the local environment (28). The presence of these three signals is crucial for full T cell activation (Figure 2). Under inflammatory conditions, large numbers of mature DCs accumulate in T cell areas of the draining lymph nodes for a sustained period of time (29). Mature DCs presenting high levels of antigen/MHC complexes allow strong and sustained TCR occupancy, delivering T cells the main stimulatory signal (30). Simultaneously, high levels of costimulatory and adhesion molecules expressed on mature DCs are required for amplification of the signal initiated by the TCR and for increased adhesion between the DC and the T cell, thus increasing the strength and duration of the interaction, respectively (10). Subsequent strong activation of signaling pathways downstream of the TCR and the costimulatory receptors in the presence of cytokines or factors eliciting immunostimulation and the effector T cell phenotype results in full T cell activation, proliferation, and differentiation into effector and memory cells (Figures 1, 2) (31).
In contrast, DCs that engulf the antigen in the absence of a local inflammatory signal remain in the immature, tolerogenic In the presence of a maturation signal (proinflammatory cytokines and Toll-like receptor ligands), DCs become activated and transition to a stimulatory phenotype, which subsequently leads to the induction of effector/cytotoxic T cell responses. In contrast, incubation of iDCs with different mediators or genetic modification of DCs in the absence of maturation factors can lead to the generation of tolerogenic DCs, which induce anergy, apoptosis or activation of Tregs.
state with low expression of MHC molecules and costimulatory molecules, such as CD80 and CD86 (9,32,33). Presentation of antigen to T cells in the absence of sufficient CD80/CD86 stimulation of CD28 molecules on T cells leads to the activation of anergy-associated genes under the control of nuclear factor of activated T cells (NFAT) and induction of T cell anergy (34,35) (Figures 1, 2). Moreover, low or no signal through the CD28 receptor is a prerequisite for the induction of Treg differentiation (36). Thus, tolerogenic DCs (tolDCs), DCs with regulatory properties, play a pivotal role in immune tolerance (37).
The tolDC population consists of naïve iDCs or alternatively activated semimature DCs induced by apoptotic cells or regulatory cytokine milieu, such as IL-10 and transforming growth factor β (TGF-β) (20). Immunosuppressive DCs can also be generated under tumor microenvironment-derived factors, such as β-catenin, indoleamine 2,3-dioxygenase (IDO), endoplasmic reticulum (ER) stress, lactate, vascular endothelial growth factor (VEGF), IL-10, TGF-β, prostaglandins, accumulation of adenosine, increased levels of lactate and hypoxia (38)(39)(40)(41)(42). TolDCs contribute significantly to the induction and maintenance of immune tolerance through various mechanisms. They promote effector T cell anergy and elimination of autoreactive T cells, participate in the generation and maintenance of a population of naturally occurring Tregs, allow the generation of IL-10-producing T H 1 and T H 3 regulatory cells, and allow the conversion of differentiated T H 1 cells into Frontiers in Immunology | www.frontiersin.org T H 2 cells (43,44). These processes are mainly due to the high production of the regulatory cytokine IL-10, which promotes the generation of Tregs and T H 2 cells and inhibits DC maturation in a paracrine manner (45). Furthermore, regulatory DCs express various immunomodulatory molecules and immunosuppressive molecules that inhibit proinflammatory immune responses and induce immune tolerance. Indeed, the expression of PD-L, ICOS-L, thrombospondin, prostaglandins, and adenosine was documented to participate in the induction of T cell anergy. A number of mechanisms contribute to the clonal deletion of T cells including the interaction between FasL on DCs and Fas molecules on T cells, the expression of GAL-3 that binds to TIM3 on T cells or the production of IDO that leads to subsequent tryptophan depletion. TolDCs were also reported to induce Tregs or B regulatory cells (Bregs) by the expression of PD-L molecules, Ig-like inhibitory receptors IL-T3 and IL-T4, human leukocyte antigen G (HLA-G), anti-inflammatory cytokines IL-10, TGF-β, IL-27 and IL-35, retinoic acid, heme-oxygenase and IDO (9,46). Finally, the functionality of tolDCs is connected with their metabolic activity, such as lipid accumulation, enhanced oxidative phosphorylation, fatty acid oxidation, and modulation of glycolysis (39,47).
THE ROLE OF DCs IN CANCER
The immune system plays a critical role in the control of tumorigenesis based on experimental and clinical observations in both mice and humans, as formulated by the cancer immunosurveillance and immunoediting hypothesis (48). The plasticity of malignant cells resulting from their genetic instability may eventually give rise to new phenotypes with reduced immunogenicity and various mechanisms for the evasion of tumor cells from immunosurveillance, leading to malignant proliferation (49). Malignant cells escape immunosurveillance by different mechanisms, some of which are: (1) reduced immune recognition (including loss of tumor antigen expression and MHC class I and costimulatory molecule expression), (2) increased resistance to apoptosis (through STAT3 signaling), or (3) development of an immunosuppressive tumor microenvironment (including the production of cytokines, e.g., VEGF, TGF-β, IL-10 and increased expression of immunoregulatory molecules, e.g., PD-1/PD-L1, TIM-3, LAG-3), which lead to the development of malignant diseases (48,50). Different DC subsets can be found in the majority of human tumors and play a crucial role in cancer immunosurveillance, as tumor-infiltrating DCs can migrate to regional lymph nodes to present tumor antigens to naïve tumor-specific T cells (51). However, naïve antigen-specific CD8 + T cells cannot directly eliminate malignant cells. Thus, to become effector cytotoxic T cells, they need to be activated by professional APCs. Cross-presentation is an essential mechanism that allows DCs to present exogenous antigens on MHC I molecules to CD8 + T cells, which become the main mediators of anti-tumor immunity (52). Importantly, the contribution of the different DCs subtypes to cross-presentation a cross-priming (in induction of effector CD8 + T cells in vivo) varies depending on the experimental setting. cDC1 are mainly associated with superior cross-presentation of tumor antigens to CD8 + T cells and polarization of CD4 + T cells into T H 1 phenotype resulting in induction of anti-tumor immunity (53)(54)(55). cDC2 and MoDCs may also cross-present tumor antigens and cDC2 are known to be essential for priming of anti-tumor CD4 + T cell response (56). Moreover, the effector activity of T cells depends on DC-derived cytokines, including IL-12 and type-I IFN. Both cDC1s and cDC2s produce IL-12 following TLR stimulation. Tumor infiltrating cDC1s are also the main producers of different chemokines, including CXCL9 and CXCL10, which help to promote the recruitment of CD8 + T cells into the tumor microenvironment (TME) (57). Therefore, elevated levels of tumor-infiltrating DCs inversely correlate with tumor grade and stage and have a robust prognostic value in multiple cancers, including non-small-cell lung carcinoma (NSCLC), melanoma, renal cell carcinoma, breast cancer, ovarian, and colorectal carcinoma (58)(59)(60)(61)(62)(63).
However, the tumor microenvironment employs various mechanisms that lead to the functional impairment of DCs (7). First, in the TME, iDCs differentiate from hematopoietic progenitors following an encounter with an antigen/danger signal (64). However, the differentiation of DCs in the TME is often mediated by the interplay between IL-6 and macrophage colony-stimulating factor (M-CSF), resulting in the recruitment and accumulation of functionally deficient and frequently iDCs unable to induce the proliferation of tumor-specific CD4 + and CD8 + T cells (65,66). Second, DCs in their function as APCs are sampling tumor antigens through the capture of dying tumor cells and initiating the anti-tumor immune response. Dying tumor cells provide three different signals to DCs and other phagocytes: "find-me, " "eat-me "and "do not eat me" (67). A number of find-me signals have been characterized that act in a context-dependent manner, including lipid lysophosphatidylcholine (LPC), sphingosine 1-phosphate (S1P), CX3CL1 and the nucleotides adenosine triphosphate (ATP) and uridine triphosphate (UTP) (67). Immunogenic phagocytosis is mediated by eat-me signals, namely, ectocalreticulin (CALR), surface-heat shock protein (HSP) 90, and phosphatidylserine (68,69). The "do not eat me" signals serve as negative regulators of phagocytosis, mainly including CD47 and lactoferrin (70). Therefore, homeostatic clearance of dying cancer cells could be accelerated or impaired by the different molecules provided by tumor cells, which results in enhanced or impaired phagocytosis of malignant cells (71). Third, the functional capacity of DCs in the TME is negatively impacted through different mechanisms, including the activation of STAT3 signaling in DCs via different cytokines frequently expressed in tumors (IL-6, VEGF and IL-10) (72). Moreover, tumors may condition local DCs to form suppressive T cells, such as Tregs, IL-13-producing CD4 + T cells and natural killer T cells (NKT cells), leading to a tumor-induced functional deficiency of DCs that results in decreased expression of costimulatory molecules, decreased production of IL-12, suppressed endocytic activity, inhibited antigen-processing machinery, and poor viability (73)(74)(75)(76)(77). Altogether, these and other findings suggest that malignant cells can exploit DCs to evade immunity. However, the majority of clinical protocols harnessing patient DCs do not consider the fact that DCs once administered back to patients might quickly lose their activity.
THE ROLE OF DCs IN AUTOIMMUNITY
Previous studies have described the link between peptide presentation by HLA class II molecules expressed on APCs and autoimmune diseases. In different autoimmune diseases, DCs are bearing certain autoimmune risk-conferring HLA class II molecules with the distinct hotspots in the peptide-binding groove that favor the presentation of particular self-antigens that will ultimately be recognized by self-reactive TCR. In the case of type 1 diabetes (DM1), the presence of specific amino acid in the binding groove of HLA-DQ8 alleles favors the binding of insulin-derived peptides. Similarly, in the case of rheumatoid arthritis (RA), HLA-DR4 molecules bearing the conserved amino acid motif (shared epitope) favor the presentation of citrullinated self-peptides leading to activation of citrulline-specific CD4 + T cells and subsequent production of anti-citrulline antibodies that foster RA but prevent natural ligands bearing arginine instead of citrulline (78).
Aberrant cDC and pDC phenotypes and functions due to underlying genetic defects or a chronic inflammatory environment were shown to be associated with the development of various autoimmune diseases, such as RA, systemic lupus erythematosus (SLE), multiple sclerosis (MS) or DM1 (45,(79)(80)(81). DCs can either induce or suppress the autoreactive T cell response, and their effect depends on the DC subset, the degree of maturity, signals obtained from the local microenvironment and crosstalk with other immune and stroma cells. Under noninflammatory conditions, lymphoid-resident immature cDCs or specialized types of tolDCs bearing self-antigens suboptimally activate naïve CD4 + and CD8 + T cells, thus maintaining immune tolerance and affecting the regulation of autoimmune diseases. Aberrant intrinsic tolDC function, such as impaired IL-10 secretion, defective ability to remove apoptotic cells, defective antigen processing machinery or absent negative regulators of inflammation, can contribute to DC hyperactivation and trigger autoimmunity (82)(83)(84)(85). DC hyperactivation might also result from environmental triggers such as an inflammatory cytokine milieu induced by bacteria (86,87), excessive IFN production in response to viral infection as observed in DM1 (88), oxidative stress induced by noxious agents as observed in RA (89) or danger signals released under cell stress or from necrotic and late apoptotic cells as documented in SLE and DM1 (90,91). Activated cDCs accumulate in lymphoid and non-lymphoid tissues during autoimmune disease progression. Hyperactivated cDCs present self-antigens, prime naïve autoreactive CD4 + T cells including follicular helper T cells, promote cross-priming of CD8 + T cells and orchestrate the maturation of B cells leading to the subsequent expansion of autoantibodies and immune complex formation (81). Furthermore, mature cDCs generate an inflammatory environment by producing high levels of pro-inflammatory cytokines such as IL-1β, IL-6, IL-12, and IL-23 that induce a deleterious imbalance between T H 1, T H 2, and T H 17 cells and contribute to local inflammation and tissue destruction. Although partially regulated, the autoimmune response persists due to ongoing stimulation of autoreactive T cell clones and B cell clones. pDCs play a central role in the pathogenesis of IFN-driven autoimmune diseases such as SLE and psoriasis. In SLE, pDCs are activated by immune complexes formed by the aggregation of autoantibodies, stress proteins, such as high mobility group box 1 (HMGB1), and self-DNA released from apoptotic cells that have not been cleared or by nucleic acid-containing nets released from activated neutrophils. These complexes are delivered to endolysosomes to activate TLR7 or intracellular DNA sensors, such as cGAS-STING, to further activate pDCs and IFN-α secretion (92)(93)(94). On the other hand, pDCs can also reduce autoimmune responses by secreting IDO and inducing Tregs depending on the disease stage and signals from local tissues (95,96).
DC-BASED CANCER IMMUNOTHERAPY
Immunotherapy strategies harnessing DCs have been developed based on their unique capacity to coordinate innate and adaptive immune responses (10). The main aim of DC-based cancer vaccination is to induce tumor-specific cellular and humoral immunity resulting in the reduction of tumor mass and induction of immunological memory, which will control cancer relapse. Therefore, a critical step in cancer vaccine preparation is to provide mature DCs with specific tumor antigens. This can be achieved by the following: (1) culturing ex vivo DCs derived from patients with tumor antigens and activation stimuli and subsequently transferring the activated DCs back into patients or (2) inducing tumor antigen uptake by DCs directly in vivo (7, 97) The first proof-of-principle studies exploring DC immunotherapy were performed in the early 1990s based on the discovery that DCs can be obtained from CD14 + monocytes or CD34 + progenitors from leukapheresis products by culturing the cells in vitro in the presence of IL-4 and GM-CSF for 5-6 days (98). The first clinical study of a DC anti-cancer vaccine in B-cell lymphoma patients was reported by Hsu and colleagues in Nature Medicine in 1996 (99). Since then, ∼200 clinical studies have been performed of single treatments using mostly monocytederived DCs and measuring the immune response, which have been comprehensively reviewed elsewhere (12,13,97,100). These studies concluded that DC-based vaccines are safe and potent for inducing the expansion of circulating tumor-specific CD4 + and CD8 + T cells (101)(102)(103). Although an anti-tumor immune response is frequently observed, objective clinical responses remain low, with a classic objective tumor response rate rarely exceeding 15%, as currently concluded in the meta-analysis provided by Anguille and colleagues (13,14,21). Although considerable progress has been made over the years, most of the studies have, unfortunately, been performed in late-stage patients with strong immunosuppression mechanisms already in place (104)(105)(106). To date, limited phase II and III trials ( Table 2) have been performed with DC-based immunotherapy and, therefore, more clinical studies evaluating early stage patients or patients with preneoplasia are strongly needed.
EX-VIVO DC-BASED VACCINES
Different ex vivo DC-based immunotherapy clinical trials have recently been concluded with encouraging clinical outcomes (100). Completed clinical studies have analyzed the following: (1) different protocols for DC preparation, (2) different DC activation stimuli, (3) different forms of antigen preparations from short peptides to complex whole-tumor-cell hybrids, and (4) different types of DC vaccine applications. First, the FDAapproved cell-based therapy for the treatment of hormonerefractory prostate cancer Provenge (Sipuleucel-T) is a vaccine consisting of autologous peripheral blood mononuclear cells (PBMCs) obtained by leukapheresis, including DCs activated with a fusion protein of a prostate antigen (prostatic acid phosphatase; PAP) and GM-CSF. Treatment with Sipuleucel-T resulted in a 4.1-month-prolonged median survival compared with placebo (25.8 vs. 21.7 months). The impact of this first FDA-approved cancer vaccine has been significant, however this product is not readily available for different reasons, including logistic and financial problems (107). More phase II and III clinical trials using autologous MoDCs obtained from patientderived CD14 + blood monocytes or from the CD34 + progenitors are shown to be effective against different cancer types and are summarized in Table 2. Phase III clinical trials using Mo-DCbased cancer vaccination are ongoing in metastatic colorectal cancer (NCT02503150, autologous tumor lysate), castrationresistant prostate cancer, which is combined with first-line chemotherapy (NCT02111577; VIABLE, MoDC vaccine loaded with antigens from an allogeneic apoptotic tumor cell line) and melanoma (NCT01983748, autologous tumor RNA antigen). In addition to colorectal, prostate cancer and melanoma cancer, DCs are intensively studied in glioma and renal and ovarian carcinoma ( Table 2) (108, 109).
IN VIVO DC TARGETING
Another approach to recruit natural DCs for cancer immunotherapy is to target DC subsets in vivo via specific receptors, e.g., DEC205, CLEC9A, and langerin to target cDC1s; CLEC4A4 to target cDC2; CLEC7A (dectin 1) to target cDC2 and MoDCs; CD209 (DC-SIGN), mannose receptor and macrophage galactose-type lectin to target macrophages, using antibodies to deliver antigens and activating agents (110)(111)(112). Compared to ex vivo DC generation protocols, in vivo targeting allows vaccines to be produced on a larger scale and, most importantly, allows direct activation of natural DC subsets in the patient's body. Importantly, in the absence of adjuvants, targeting antigens to DCs might induce tolerance rather than anti-tumor immunity, which would have substantial value in the context of autoimmunity. Currently, numerous in vitro and in vivo studies in humans are focused on DC-targeting vaccine development. In a phase I trial, a DC-based vaccine consisting of a fully human anti-DEC205 monoclonal antibody fused to the tumor antigen NY-ESO-1 and accompanied by a topical or subcutaneous application of TLR agonists (resiquimod) showed the efficient generation of NY-ESO-1-specific cellular and humoral responses and led to partial clinical responses without toxicity (113). Nevertheless, the correlation with clinical responses remains unclear, and larger studies will be needed to evaluate the efficacy of this therapy. Clinical trials of anti-DEC205-NY-ESO-1 are currently ongoing in acute myeloid leukemia (NCT01834248), ovarian cancer (NCT02166905) and melanoma (NCT02129075).
The advantage of such an approach is that maturation stimuli activate only DCs targeted by the antibodies, thereby preventing any toxicity or undesirable systemic activation (13). A different approach of targeting DCs in vivo, called GVAX, involved engineering irradiated gene-transfected tumor cells to secrete GM-CSF to stimulate the recruitment and activation of APCs (114). One phase II trial testing an allogeneic pancreatic cell line that secretes GM-CSF in combination with/without recombinant live attenuated L. monocytogenes engineered to secrete mesothelin (CRS-207) and low dose cyclophosphamide resulted in the recruitment of T cells into the TME and improved overall survival in patients with advanced pancreatic cancer (115,116). However, a phase IIB study failed to show improved overall survival in patients treated with the combination or CRS-207 alone compared with the survival of patients on chemotherapy. Importantly, two different phase III clinical trials to evaluate the therapeutic efficacy of GVAX in prostate cancer patients were conducted. The VITAL-1 trial comparing GVAX to docetaxel plus prednisone in castration-resistant prostate cancer was terminated after showing low efficacy by interim analysis. VITAL-2 comparing GVAX in combination with docetaxel vs. docetaxel in combination with prednisone was also terminated based on interim results showing an increased risk of death in the GVAX arm compared to the control group (117). In this line, promising results showing that FMS-like tyrosine kinase 3 ligand (FLT3L) administration enhanced anti-tumor immunity and limited the tumor cell growth in mouse models (118), are currently followed by clinical trials (NCT01811992, NCT01976585, NCT02129075, and NCT02839265).
DC-BASED THERAPY OF AUTOIMMUNE DISEASES
The current treatment of most autoimmune diseases involves lifelong administration of systemic immunosuppression drugs coupled to anti-inflammatory therapies and hormone replacement. In addition, systemic immunosuppression is inevitably associated with undesirable side effects. Thus, the main goal of autoimmune disease treatment would be the longterm reinduction of self-tolerance. With respect to autoimmune disorders, cell therapy based on autologous tolDCs generated from peripheral blood monocytes following ex vivo generation in GM-CSF and IL-4 cell culture medium might be beneficial over standard immunosuppressive treatment in terms of its complex effect on the immune system and the possibility to restore long-term antigen-specific tolerance while avoiding generalized immunosuppression.
In order to achieve the best in vivo tolDC efficacy, all the parameters of tolDC therapy, namely, optimal dose, administration route, and frequency of tolDC administration, have to be properly defined as we believe they could dictate what kinds of immune responses are activated to modulate autoreactive T-cells and induce immune tolerance. To date, the best route of tolDC administration is still not known and several challenges remain to allow tolDCs to migrate into draining lymph nodes for T cell encounter or to reach the site of inflammation. In most clinical trials, tolDCs have been administered subcutaneously or intradermally proximal to the inflammatory site to increase tolDC migration to draining lymph nodes where autoreactive T cells predominate and to reach the site of inflammation (119). At the same time, intranodal application and direct administration into the intestinal lesions of tolDCs has also been tested in phase I clinical trials in patients with MS and Crohn's disease, respectively (120). In MS, however, tolDC shuttle across the blood brain barrier seems to be required for the efficient treatment of MS. Recent data suggest that the introduction of de novo CCR5 expression using mRNA electroporation into tolDCs might facilitate migration od tolDCs into the inflamed central nervous system and improve the treatment outcome in MS (121). Moreover, the ability of tolDCs to modulate T cell responses might be influenced by the current clinical status of the patient. Indeed, we documented in our studies that hyperglycemia reduces the ability of tolDCs to induce stable Tregs from naive T lymphocytes that can suppress antigen-specific T-cell anergy (122,123). In that case, metabolic control, we believe, might be relevant for refining the inclusion criteria for clinical trials involving patients with DM1 and the maintenance of a tight metabolic control seem to be beneficial in patients considered for tolDC therapy.
Similar to DC-based cancer vaccines, a number of in vivo studies have documented that tolDCs require pulsing with relevant antigens to reach efficient clinical responsiveness following tolDC therapy (124). However, in some instances, antigen loading tolDCs leads to a worse condition and a higher incidence of autoimmune disease (125,126). In contrast, different in vivo studies have suggested that the presence of autoantigen is not necessary for tolDC preparation as tolDCs may upload relevant autoantigens once injected in vivo and induce antigenspecific tolerance (127). Moreover, autoimmune diseases are not commonly defined by one universal autoantigen. Suitable disease-specific autoantigens such as insulin and glutamic acid decarboxylase 65 (GAD65) or transgenic myelin oligodendrocyte glycoprotein (MOG) or myelin basic protein have been defined in DM1 and MS, respectively (128,129). However, in some autoimmune disorders, the specific autoantigen remains unidentified despite significant effort. In addition, not all patients display a uniform autoantigen pattern as antigen spreading, posttranslational modification, and development of neoantigens usually occur during the progression of the disease and complicate the search for the target antigens of the autoimmune response (128). A possible strategy seems to be to use a surrogate "universal" antigen, e.g., HSPs that are ubiquitously expressed in different types of inflammatory tissues (130).
EX-VIVO DC-BASED VACCINES
The ex vivo generation of stable, maturation-resistant tolDCs followed by their adoptive transfer represents novel immunotherapy for the antigen-specific treatment of autoimmune disorders. TolDCs can be established from monocytes from a patient's blood cultured using various pharmacological agents Vitamin D (VitD) and its analogs, dexamethasone, rapamycin, salicylates, and NF-κB inhibitors, a cocktail of immunomodulatory cytokines (IL-10, TGF-β), growth factors (GM-CSF, M-CSF), and pathogen products and with the use of apoptotic cells or genetic engineering (131). All of these approaches generally suppress the maturation or activation of DCs and reduce the ability of DCs to produce IL-12p70 through different mechanisms (131). Additional activation of tolDCs by lipopolysaccharide (LPS) or its non-toxic analog monophosphoryl lipid A (MPLA) has been shown to improve the antigen-presenting capacity and migratory ability of tolDCs (132). Common features of tolDCs include low antigen presentation capacity combined with the loss or reduction of costimulatory signals, expression of inhibitory molecules, and an anti-inflammatory cytokine profile. Generated tolDCs can be loaded with one or more antigens to confer specificity. To do so, suitable disease-associated antigens such as preproinsulin peptides or GAD65 for DM1, basic myelin proteins for MS or thyreoglobulin for autoimmune thyroiditis are necessary. Once injected in vivo, tolDCs are expected to induce antigen-specific tolerance through various mechanisms, such as induction of autoreactive T cell anergy, induction of apoptosis, and induction of various types of Tregs and Bregs (133).
The first clinical study on tolDC therapy was conducted in 2011 in adult patients suffering from autoimmune DM1. TolDC therapy was safe, and some patients exhibited increased blood levels of B220 + CD11c + B cells together with evidence for Cpeptide reactivation posttreatment (134). To date, further phase I/II clinical studies have been completed or are currently in progress in DM1, RA, MS, and Crohn's disease ( Table 3) (135). A Rheumavax study on tolDCs from RA patients established by NF-κB inhibitor and pulsed with citrullinated peptides documented decreased numbers of effector T cells, decreased levels of proinflammatory cytokines and chemokines and reduced DAS 28 score ( Table 3) (136). Another study tested the safety, feasibility, and acceptability of dex-VitD3-treated tolDCs pulsed with autologous synovial fluid as a source of autoantigens (AutoDecRa study) or tolDCs generated in the presence of TNF-α and relevant disease peptides (CreaVax study) in patients with RA. Both studies indicated tolDC therapy to be safe and showed signs of clinical improvement (137). Intraperitoneal administration of Dex/VitD-treated tolDCs in Crohn's disease revealed clinical improvement in some patients associated with an increase in Tregs and reduction in IFN-γ levels (138). Recently, Zubizarreta and colleagues reported the safety, feasibility, and signs of efficacy of tolDC therapy in patients suffering from MS and neuromyelitis optica. Indeed, i.v. administration of peptideloaded tolDCs led to a significant increase in the production of IL-10 in PBMCs stimulated with the peptides as well as an increase in the frequency of regulatory IL-10-producing Tregs (139,140). Additionally, follow-up studies testing the safety of VitD3 or dexamethasone-treated tolDCs loaded with relevant disease peptides are currently recruiting patients with MS (135).
IN VIVO DC TARGETING
Ex vivo-generated tolDCs have certain disadvantages, such as laborious, patient-specific, tailored-made preparation and high cost. To overcome these limitations, new approaches are being conducted to establish tolDCs in vivo. One possibility is the selective antigen-specific targeting of the DC-restricted endocytic receptor DEC205 with monoclonal antibodies in the absence of maturation stimuli to promote immunological tolerance (141). Another approach exploits coadministration of free autoantigens or autoantigens encapsulated with nanoparticles, microparticles, or liposomes bearing tolerogenic factors that are delivered specifically to DCs (142) or infusion of early-stage apoptotic cells that possess immunomodulatory properties and should prevent autoimmunity or even treat ongoing inflammatory processes (143). Another strategy is based on the non-inflammatory natural process of clearance of red blood cells by splenic APCs. Indeed, transfusion of engineered erythrocytes with covalently attached autoantigenic peptides was documented to induce antigenspecific immune tolerance via the uptake and processing of apoptotic cellular carriers for tolerogenic presentation by host splenic APCs in DM1 and SLE (144).
DCVAC, AN IMMUNOTHERAPY APPROACH HARNESSING DCs TO TREAT BOTH CANCER AND AUTOIMMUNITY
DCVAC, an investigational immunotherapy treatment based on a new active cellular immunotherapy platform, aims to treat cancer or autoimmune diseases by inducing or suppressing patients DCs, respectively. The unique capacity of DCs to induce both immune activation and tolerance under distinct circumstances is widely used for the preparation of several immunotherapy products currently tested in multiple phase I clinical trials in patients with autoimmune diseases and phase II and III clinical trials in cancer patients. The most advanced immunotherapy treatment in the oncology field is designed for prostate (DCVAC/PCa), ovarian (DCVAC/OvCa) and lung (DCVAC/LuCa) cancer patients. Based on theoretical assumptions and experimental data, cancer immunotherapy has the greatest potential when applied at the early stages of the disease or to patients following a radical surgical intervention after a removal of a large amount of tumor tissue (145). In contrast, in advanced stages of the disease, cancer immunotherapy might have a limited impact on malignant cell eradication due to the establishment of tumor-induced immunosuppression (68,146). Moreover, preclinical and clinical testing supports the fact that the goal of immunotherapy in the late disease stages is not necessarily complete eradication of the tumor but rather the establishment of an equilibrium state between the host immune system and malignant cells (147). Therefore, it is beneficial to combine immunotherapy with other treatment options, for instance, chemotherapy or radiotherapy (145,148). The concept of combined chemoimmunotherapy explores the fact that cytostatic treatment might not only eradicate the tumor mass but also neutralize tumorinduced immunosuppression, thus facilitating the effect of the concurrent immunotherapy, as discussed previously in detail elsewhere (146,(149)(150)(151). Therefore, numerous phase II clinical trials are ongoing to evaluate the potential of DCVAC in patients at various stages of disease ( Table 2). The DCVAC technology in cancer therapy has been focused on a number of principles. First, DCVAC technology using high hydrostatic pressure (HHP)treated allogenic tumor cell lines is used to activate patients' DCs by a broad range of tumor antigens to induce a complex anti-tumor immune response. The major advantages of this method are that (A) multiple epitopes can be presented on MHC molecules of different haplotypes, thus having the potential to induce both CD4 + and CD8 + T cell responses to a wide spectrum of antigens and (B) for the time it takes for antigen processing results in prolonged antigen presentation. Second, the concept of combination therapy is also being investigated in patients with advanced cancer in combination with multiple treatment modalities, including chemotherapy and hormone therapy, to produce synergistic effects and to improve the clinical outcome. Third, long-term activation of the immune response is achieved. DCVAC is applied in several doses over a prolonged period, which leads to enhanced stimulation of the anti-tumor immune response in the patient. DCVAC/PCa, DCVAC/OvCa, and DCVAC/LuCa immunotherapy is manufactured from monocytes harvested from patient leukapheresis (Figure 3). Monocytes are differentiated ex vivo into iDCs in the presence of IL-4 and GM-CSF for 6 days (152)(153)(154). iDCs are subsequently loaded with tumor cell lines of the appropriate origin based on overlap with the expression profiles of tumor-associated antigens (Figure 3) (155,156). A particular way to enhance the immunogenicity of tumor cells used in the protocol is to induce immunogenic cell death (ICD) and increase the exposure/release of DAMPs to enhance DC maturation. HHP is a potent inducer of ICD, as documented both in vitro and in vivo (157)(158)(159)(160)(161)(162). Moreover, HHP treatment of tumor cells can be easily standardized and performed in good manufacturing practices (GMP) conditions to allow its incorporation into the manufacturing protocol. The patient's own DCs engulf the dying tumor cells and, once activated using TLR3 ligand polyI:C, present tumor antigens on their surface (152). The resulting product is frozen, stored in liquid nitrogen and shipped to the treatment site. The first dose is administered to the patient ∼4 weeks after leukapheresis. A single leukapheresis yields up to 15 doses of DCVAC, which is sufficient to treat the patient for more than 1 year. After being thawed and diluted, DCVAC is administered subcutaneously at various treatment intervals, depending on the trial design. After administration, mature DCs migrate to the draining lymph nodes and activate a tumor-specific immune response (163,164). Similar to boosting the immune system in cancer patients, DCVAC technology might be exploited to regulate unwanted autoimmune processes and induce long-term antigen-specific tolerance in patients suffering from autoimmune disease, such as DM1. DCVAC aimed at the immunotherapy of patients with DM1 consists of tolDCs generated in vitro from peripheral monocytes isolated from patient leukapheresis (Figure 3). First, iDCs are generated from monocytes in the presence of GM-CSF and IL-4, similar to DCVAC for cancer patients. Then, in contrast to DCVAC, tolerogenic factors (dexamethasone and VitD2) are introduced to the culture at the indicated days to induce the tolerogenic phenotype of DCs. As antigen loading might decrease the disease-protective effect of tolDCs in animal models of DM1, diabetogenic antigens are not introduced into the manufacturing protocol (125,165). Finally, tolDCs are activated with the MPLA to improve tolerogenic properties as reported previously (132). Ultimately, tolDCs maintain a semimature phenotype and exhibit tolerogenic properties even under strong inflammatory conditions (166). Overall, DCVAC active cellular immunotherapy represents a personalized treatment of prostate, ovarian, and lung cancers and potentially also autoimmune diseases. The aim of the ongoing phase I to phase III clinical trials is to evaluate the efficacy and confirm the safety of this approach in order to offer new treatments for cancer malignancies and autoimmune disorders.
CONCLUSIONS
DC vaccination has proven to be safe and feasible in multiple clinical trials, as shown over the past two decades. Vaccination strategies involving DCs have been designed with regard to the unique capacity of these cells to coordinate innate and adaptive immune responses. The main aim of DC therapy is therefore to induce tumor-specific effector T cells that can reduce the tumor growth and induce immunological memory to control tumor relapse in cancer patients. In contrast, the main aim of DC therapy in autoimmune disorders is to expand and induce T cells, usually Tregs, that suppress immunity. Significant advances have been achieved in the last 20 years, and DC vaccines are continuously being optimized. The contemporary view on the potential role of DCs in cancer and autoimmune therapy has expanded remarkably, moving from ex vivo generated DC-based vaccines to a broad array of therapeutic options. However, we still need to learn more about potential combination therapy which could promote the efficacy of established cancer therapies and the identification of reliable biomarkers that can predict the propensity of cancer patients to benefit from DCbased immunotherapy.
AUTHOR CONTRIBUTIONS
All authors listed have made a substantial, direct and intellectual contribution to the work, and approved it for publication.
FUNDING
The authors were supported by Sotio, Prague, Czech Republic. Some of the studies and clinical trials cited in this review were supported by Sotio, Prague, Czech Republic. The funder was not involved in the study design, collection, analysis, interpretation of data, the writing of this article or the decision to submit it for publication.
ACKNOWLEDGMENTS
We thank all of the patients and volunteers who participated in our studies and clinical trials. We thank former and current members of Sotio, Prague, Czech Republic and the Department of Immunology, 2nd Medical School, Charles University, Prague, Czech Republic for their contributions to the progress of DCVAC clinical development. | 9,634.8 | 2019-10-29T00:00:00.000 | [
"Medicine",
"Biology"
] |
Augmented reality as a novel approach for addiction treatment: development of a smoking cessation app
Abstract Objective Augmented reality (AR) is a rapidly developing technology that has substantial potential as a novel approach for addiction treatment, including tobacco use. AR can facilitate the delivery of cue exposure therapy (CET) such that individuals can experience the treatment in their natural environments as viewed via a smartphone screen, addressing the limited generalizbility of extinction learning. Previously, our team developed a basic AR app for smoking cessation and demonstrated the necessary mechanisms for CET. Specifically, we showed that the AR smoking cues, compared to neutral cues, elicited substantial cue reactivity (i.e. increased urge) and that repeated exposure to the AR smoking cues reduced urge (i.e. extinction) in a laboratory setting. Here we report the next step in the systematic development of the AR app, in which we assessed the usability and acceptability of the app among daily smokers in their natural environments. Method Daily smokers (N = 23, 78.3% female, Mean Age = 43.4, Mean Cigarettes/Day = 14.9), not actively quitting, were instructed to use the AR app in locations and situations where they smoke (e.g. home, bar) at least 5 times per day over one week. The study is registered in clinicaltrials.gov (NCT04101422). Results Results indicated high usability and acceptability. Most of the participants (73.9%) used the AR app on at least 5 days. Participants found the AR cues realistic and well-integrated in their natural environments. The AR app was perceived as easy to use (Mean = 4.1/5) and learn (mean of 2 days to learn). Overall satisfaction with the app was also high. Secondary analyses found that 56.5% reported reduced smoking, with an average 26% reduction in cigarettes per day at follow-up. Conclusions These findings set the stage for a randomized controlled trial testing the AR app as an adjuvant therapy for treating tobacco dependence, with potential applicability to other substances. KEY MESSAGE This study found that the augmented reality (AR) smartphone application that utlized cue exposure treatment for smoking cessation was perceived as easy to use and learn in the natural, day-to-day environment of daily smokers. Findings set the stage for a larger clinical trial testing the AR app as an adjuvant therapy for treating tobacco dependence, with potential applicability to other addictive behaviors.
Introduction
Cigarette smoking remains the leading cause of cancer and preventable cause of disability and mortality in the United States [1]. Despite wide-spread motivation to quit smoking among smokers [2] and efficacious smoking cessation treatments [3], many people who successfully quit smoking relapse [4]. The omnipresense of smokingrelated cues (e.g. ashtrays, other smokers) in their environment increases the urge to smoke, making relapse more likely [5]. As such, there is an ongoing need for the development of novel and effective smoking cessation treatments or treatment adjuvants that reduce cue-provoked urges. Cue exposure therapy (CET) targets cueprovoked urges by repeated exposure to drug-related cues (e.g. cigarettes) without the actual reinforcing effects of drug use (e.g. nicotine). That is, CET extinguishes the cue-provoked urge to use the drug [6][7][8][9]. However, its efficacy is often short-lived due to limited generalization of extinction learning to other contexts [10], such as the many locations and situations where drug use occurs. This is known as the renewal effect [11,12]. Augmented reality (AR) has recently emerged as a rapidly developing technology that has substantial potential [13,14] for treating addiction, including tobacco use [15]. AR involves the real-time insertion of digitallycreated images into a user's natural environment as viewed through a screen or headset. In recent years it has been utilized for entertainment (e.g. Pok emon Go), retail sales (e.g. viewing potential household products as they might appear in one's home), and medical treatment and training [16]. AR contrasts with virtual reality (VR), in which a user is immersed as much as possible into a fully digital environment as experienced via a headset and other sensory apparati. Although VR has been used to treat addiction [17], AR has functionality that addresses limitations of VR (e.g. high cost in development, low realism, not the user's actual environment [15]). The ability of AR to insert digitally created objects into the user's environment in real-time within the user's natural environment directly addresses limitations in the generalizability of extinction learning. The AR object can be either still or moving and placed in virtually an infinite number of environments (e.g. home, bars, social situations), which is likely to facilitate exposure to drug cues in typical drug use settings while also enhancing the immersive experience and ecological validty.
AR also has several unique practical advantages, as compared to VR, including relatively low cost in developing digital objects, high degree of realism given the simplicity of the AR objects, and real-time exposure by inserting AR objects into the user's natural environment [15] (e.g. an ashtray on the user's end table in their living room). AR can be implemented via most personal mobile device, including smartphones without extra equipment such as headsets or helmets as in VR. Thus, individuals using AR for substance use treatment can view the AR objects superimposed on their typical real-world drug using environment. Given the rapid increase in smartphone use, even among low-income populations (e.g. 76% of low-income populations own a smartphone in the U.S. [18]), AR has great clinical and therapeutic potential in accessibility and affordability. AR has been applied to CET for phobias and has demonstrated preliminary [13,14,19] and comparable efficacy in reducing fear and anxiety in small animal phobias when compared to invivo exposure [19]. However, the applicability of AR for the treatment of addiction has not yet been tested.
Our team developed a basic AR smartphone app for the delivery of CET as an adjuvant strategy for smoking cessation treatments, and we evaluated whether it met the two necessary conditions for CET: (1) that the smoking cues elicited cue-reactivity, particularly self-reported urge to smoke and (2) that repeated exposure to the cues produced evidence of extinction. Both of these conditions were met in a controlled laboratory-based setting among daily smokers [20][21][22]. Specifically, we developed 6 smoking paraphernalia AR objects (e.g. cigarette, ashtray) activated via a smartphone AR app. Our earlier studies demonstrated that the AR cues were perceived as realistic and well-integrated into the user's environment [20,21]. Additionally, AR smoking cues demonstrated higher cue reactivity with a large effect size as compared to AR neutral cues, with cue-reactivity magnitude comparable to in vivo smoking cues ( Figure 1(a), [20]). In a subsequent proof of concept study, daily smokers in acute abstinence were randomized to either extinction (viewed AR smoking cues) or control (viewed AR neutral cues) conditions in which they completed a single session of multiple trials of cue exposure using the AR app in a laboratory setting [22]. We found that posttest cue-provoked urge was lower in the extinction condition than in the control condition with a moderate effect size, suggesting that cue reactivity to AR smoking cues can be extinguished (Figure 1(b), [22]). Together, these studies provide foundational evidence for the potential clinical utility of AR cues in CET for addiction with a focus on smoking behavior.
To advance the AR app toward evaluation in a fullscale randomized controlled trial (RCT), the current study assessed its usability and acceptability among daily smokers not currently making a quit attempt in their typical natural environments. Although the study was not powered to test clinical outcomes, we describe proxies of behavior change (e.g. motivation to quit, cessation self-efficacy, urges to smoke) and changes in cigarettes per day.
Participants
Daily smokers were recruited between November 2021 and February 2022 through online advertisements and a participant database at the Tobacco Research and Intervention Program at Moffitt Cancer Center. The final sample size was determined based on the recommended sample size for usability studies being at least N ¼ 20 to capture 98.4% of potential usability problems [23]. Inclusion criteria were: (1) ! 18 years of age, (2) currently smoking ! 3 cigarettes per day (CPD) for the past year, (3) motivated to quit smoking within the next month (via a single question reflecting the 'preparation' stage of the Transtheoretical Model, [24]), (4) having an AR-compatible smartphone that the participant was willing to use during the study, (5) having a valid home address and phone number, and (6) being able to speak, read, and write in English. Exclusion criteria were the regular use of other tobacco products (>30% of the time) and having a household member already enrolled in the study. Participants were aware that they would be asked to test and provide feedback on an app under development, and that they would not be receiving smoking cessation assistance per se.
Procedure
The study was a single-arm pilot trial that tested the usability and acceptability [25,26] of an AR app for smoking cessation for 7 days among daily smokers.
Given this was a usability study rather than a clinical trial, one week seemed sufficient to obtain feedback about the functionality of the AR app. All study procedures were conducted remotely. Participants were phone screened to determine eligibility. Eligible participants provided verbal consent to participate in the study and were aware that their data would be fully anonymized in the publication of the findings. Participants were then provided instructions on the AR app and sent an electronic link to a baseline survey. Upon completion of the baseline survey via REDCap, materials comprising participant procedures, a manual on how to download and use the AR app (iOS or Android version), the National Cancer Institute's Clearing The Air smoking cessation booklet, and the contact information for the Florida state tobacco Quitline were mailed to each participant. Study staff then re-contacted participants to ensure that they downloaded the AR app and to answer any questions.
Participants were instructed to use the AR app at least 5 times per day over the next 7 days in locations and situations where they typically smoked (e.g. home, outdoors) and when they experienced an urge to smoke. We selected 5 daily uses as the target for this study in an initial attempt to balance the need for adequate extinction duration across multiple contexts with the desire for reasonable participant burden for encouraging adherence. At the start of each day, participants were asked to indicate in the app how many cigarettes they smoked the day before. One day after the completion of the 7-day AR app use, a link to complete a follow-up survey via REDCap was sent to participants, after which study staff contacted participants to conduct a semi-structured phone interview to obtain additional feedback on the AR app (not reported in this paper). Participants were compensated up to $80. The project was registered in clinicaltrials. gov on September 24, 2019 (NCT04101422) and addressed the 'Testing AR Application' aim, using an independent sample. All procedures were approved by the Advarra Institutional Review Board (Protocol Number 20007). Data and study materials will be available upon reasonable request.
Augmented reality app and cue exposure session
The AR app used in our previous lab-based studies was developed on the Unity V R platform and ran on Apple 10xr iPhones provided to participants. The app included six smoking and six neutral AR images [20][21][22]. Data collected by the app were transferred to the study desktop computer following each session. The initial app presented each AR cue for 1 minute in a given cue exposure session with a fixed number of AR cues presented. For the present study, the AR app was updated for field testing as follows: (1) one additional AR cue, coffee, resulting in 7 AR cues in total; (2) inclusion of a question on the number of cigarettes smoked the prior day; (3) real-time data transmission from the app to the institutional server; (4) added Android compatibility; and (5) variable presentation length of each AR trial (30 to 60 seconds) and number of AR cues per session (3 to 7 cues). The range of trial duration was selected to provide adequate exposure without threatening engagement and compliance due to participant boredom or frustration. Participants received a daily reminder to complete the daily assessment at 10:00am. Upon completion of the daily assessment, participants were prompted to conduct an AR cue exposure session at that time or select to delay it. If they opted to delay the AR session, the app displayed, 'Okay, no problem. Remember to come back and do the AR session at least 5 times a day.' Furthermore, after completing an AR session, the app displayed, 'Well done! You finished an AR session. Please complete 5 each day!' There was no further reminder notification to complete a session. Whenever a participant opened the AR app, the home screen appeared, and participants could select to complete an AR cue exposure session. Menus for a practice session (using an AR rubber duck cue) and contact information of the study team were also available from the home screen.
In each cue exposure session, participants were instructed to move the phone to aim the camera at a flat surface where they would typically place smokingrelated paraphernalia until a blue placement circle appeared on a flat surface (Figure 2(a)). The circle would appear once the app identified a horizontal plane (flat surface). The participant could then move the circle to any location on that plane. Once the circle was positioned in the desired location, participants tapped the circle to trigger the AR cue to appear at that location. They then pressed the 'Start' button to begin the cue exposure session (Figure 2(b)) and viewed the AR cues by viewing the app on their smartphone. Each session consisted of 3 to 7 trials of cue exposure. The order, presentation length, and number of AR cues were randomized. To enhance the immersive experience with the AR cues, participants were instructed to approach the AR cue from different angles and distances. The AR cues consisted of six smoking-related proximal cues (i.e. cigarette, pack of cigarettes, pack and lighter, pack and ashtray, cigarette and lighter, and a lit cigarette in an ashtray with smoke motion) and one smoking-related distal cue (i.e. a cup of coffee; Figure 2(c)). Each cue was presented for 30 to 60 seconds. At the beginning of each session and following each cue exposure trial, participants reported their current urge level by answering a 1-item question on the app.
Measures
Baseline survey Self-reported demographic information (age, ethnicity, race, biological sex, marital status, education, and income), tobacco use history (average CPD on the days they smoked in the past month), and overall strength of urge to smoke in the past month (three response options: none/slight, moderate/strong, and very/extremely strong) were assessed. Nicotine dependence was assessed by the 6-item Fagerstr€ om Test for Cigarette Dependence (FTCD; [27,28]). The FTCD is a standard self-report measure that assesses nicotine dependence that uses a mix of yes/no items and multiple-choice items. Its total score ranges from 0 to 10 (higher ¼ greater dependence). To describe behavior change in smoking-related variables, three well-established standardized self-report measures that have demonstrated adequate psychometric properties in previous studies were administered: (1) The Contemplation Ladder [29] to assess motivation to change smoking behavior, (2) 10-item Short-form Abstinence-Related Motivational Engagement Scale (ARME; [30]) to assess ongoing engagement in abstinence, and (3) 9-item Self-Efficacy Scale -Smoking [31] to measure perceived confidence for not smoking in situations where they typically smoke (e.g. social situations, when experiencing negative affect). The Contemplation Ladder asks an individual to choose one motivation level out of 10 levels (0 ¼ No thought of quitting smoking cigarettes, 10 ¼ Taking action to quit smoking cigarettes [29]). The ARME uses 7-point Likert scale (1 ¼ Completely Disagree, 7 ¼ Completely Agree) and the total score ranges from 7 to 70 (higher ¼ greater ongoing engagement in abstinence [30]). Lastly, the Self-Efficacy Scale is rated on 5-point Likert scale (1 ¼ Not at all confident, 5 ¼ Extremely confident) and its total score ranges from 9 to 45.
Augmented reality app measures
Within the AR app, participants completed a singleitem assessment of daily cigarette smoking (i.e. 'How many cigarettes did you smoke yesterday?'). Engagement with the app (i.e. number of sessions completed per day, number of discrete days the app was used, number of AR sessions completed on day the app was used, and number of AR cues viewed on days the app was used) was also captured.
Follow-up survey
Usability and acceptability were measured using the following self-report questionnaires at follow-up based on the literature on mHealth usability [25,26]: (1) a 3-item reality/co-existence questionnaire that was developed by the team to assess reality ('How real did the object seem to you?'), environment co-existence ('How well did the object appear to be part of the scene?'), and user co-existence ('How much did you feel the object was right there in front of you?') on a 10-point Likert scale (1 ¼ Not at all, 10 ¼ Very real/very well/very much), (2) the 10-item System Usability Scale (SUS) that has been validated in prior studies [32] to assess learnability and usability of the system (e.g. 'I thought the app was easy to use') on a 5-point Likert scale (1 ¼ Strongly Disagree, 5 ¼ Strongly Agree; higher score ¼ greater usability; [33]), (3) a two-item scale to assess ease of use at the beginning and at the end of the week (How easy/difficult was it to use the app?) on a 5point Likert scale (1 ¼ Very difficult, 5 ¼ Very easy), (4) a single item to assess ease of learning (How many days did it take to get comfortable using the app?; 7 response options: 1 to 7 days), (5) two items to assess satisfaction (Would you recommend this app to a friend or family member to help them quit smoking?; Overall, how satisfied were you with the app?) on a 4-point Likert scale (1 ¼ No/Quite dissatisfied, 4 ¼ Yes/Very satisfied), and (6) a single-item to assess usefulness (Would this app appeal to you if you were currently attempting to quit smoking? yes/ no). Besides the SUS, the rest of the usability and acceptability measures in the follow-up survey reported in the current study were developed by our study team and mimic those used in previous research in this area [20,21].
To describe change in motivation and self-efficacy at posttest, the Contemplation Ladder, ARME, and Self-Efficacy Scale were administered. To describe changes in the strength of urge to smoke, a singleitem measure of smoking urge over the past week was collected (3 response options: non/slight, moderate/strong, and very/extremely strong). Although participants were not offered smoking cessation treatment and not actively attempting to quit or reduce smoking, past week smoking behavior (average CPD on the days they smoked) was assessed. Participants were also asked where (sample options: home, outdoors, work, bars), when (sample options: with food, while doing an activity, first thing in the morning, when feeling a negative emotion), and with whom (sample options: with familiar people, with other smokers, alone) they used the AR app. Additionally, the number of participants who contacted the study team for additional help using the app was tracked.
Data analysis
Descriptive analyses (e.g. means, SDs, proportions) were conducted on the study variables including demographics, self-report measures on usability and acceptability, app use, motivation, self-efficacy, and change in CPD. Although the current sample was not powered for inferential statistics, paired t-tests and McNemar's test were conducted to explore changes in the proxy variables between baseline and follow-up. Figure 3 presents the CONSORT diagram. We enrolled 43 participants who returned the baseline survey. Of these, 26 participants downloaded the app onto their smartphones. Of the 17 who did not download the app, we learned that at least 7 determined that their phones were not AR compatible. Three of the participants who downloaded the app did not complete the AR sessions at least 4 times, leaving 23 participants for analysis. We determined that completing at least 4 AR sessions would provide the minimum AR app experience necessary to provide feedback on the app. Table 1 presents baseline data, including demographics (78.3% female; 13.0% Hispanic/Latinx; 17.4% Black/ African American), smoking characteristics, previous experience with AR apps, and initial interest in using the AR app. Participants were moderately nicotine dependent and reported a strong interest in using the AR app. Approximately half reported previous experience with AR apps. Table 2 presents the descriptive statistics of the usability and acceptability outcomes. Seventeen participants (74%) used the app at least 5 days and 12 (52%) used the app daily for 7 days. The mean number of AR sessions completed on days the app was used was 3.8 (SD ¼ 1.5) and the mean number of AR cues viewed was 22.9 (SD ¼ 8.6) on days the app was used. Participants reported that the top three places where they used the AR app were home (91%), car (48%), and outdoors (44%). The top three occasions when they reported using the app included with coffee (52%), before going to bed (52%), and when bored (52%). Participants also reported using the app while doing an activity (48%) or first thing in morning (48%). Lastly, participants primarily used the app when they were alone (83%) followed by being with familiar people (35%) and/or other smokers (22%).
Usability and acceptability
On the follow-up survey, participants reported an overall positive experience with the AR app. They reported that the AR cues were highly realistic, were well-integrated in the environment, and appeared as if the cues were right in front of them. The AR app was perceived as easy to use and learn as indexed by the System Usability Scale (M ¼ 79.5, SD ¼ 17.1). There was a statistically nonsignificant increase in the perceived ease of using the app over time (t(22)=-1.37, p¼.184) and 87% reported that the app was overall easy to use at the follow-up survey. The average number of days to become comfortable with the app was 2 with the majority reporting only 1 day (n ¼ 15; 65.2%), and only 17% (n ¼ 4) asked for additional help from the study team. The 4 participants who asked for additional help provided the following reasons: trouble downloading because they had the wrong app name; incorrect ID number; trouble placing the indicator; and asking where the practice tutorial was located. High satisfaction was reported such that 65% reported that they would recommend the app to a friend or family member to help them quit smoking and 70% expressed overall satisfaction with the app. Lastly, 48% reported that the AR app would be appealing to them if they were currently attempting to quit smoking. Table 3 presents the descriptive statististics of the smoking variables at baseline and follow-up. No significant changes were found on the Contemplation Ladder, the ARME, or the Self-Efficacy Scale. However, both the proportion of participants who reported strong urge to smoke and CPD significantly declined. Specifically, participants were less likely to experience very/extremely strong urges at follow-up (17%) than at baseline (57%, p<.01). Participants also reported smoking slightly fewer cigarettes at follow-up as compared to baseline (t(22)=-2.90, p<.01). In particular, 13 participants (57%) reported having reduced the number of cigarettes smoked in the past week at followup, with an average reduction of 26% among them (M ¼ 3.7, SD ¼ 1.8).
Discussion
Previous laboratory-based research established that the AR app met the necessary mechanistic criteria underlying CET-cue reactivity and extinction. The current study extended that research by examining the usability and acceptability of the AR app among daily smokers in their natural environments.
Usability was supported by high ratings across several measures. Consistent with our earlier findings in laboratory settings [20,21], AR cues were perceived as highly realistic and well-integrated in the user's natural environment. Perceived reality/co-existence is one crucial component for an immersive AR experience [34], which was met in our study. Although only 52% of participants used the AR app every day for a full week, the majority used it on at least 5 days, supporting the feasibility of the AR app. Regarding ease of use, the average rating on the System Usability Scale was very high, and participants reported increased ease of use over time. Similarly, participants reported high learnability, with the majority reporting feeling comfortable using the app within the first day. Anecdotally, difficulties appeared to be related to particular viewing conditions (e.g. dim light) and idiosyncratic limitations with some Android phones. Moreover, few participants reached out for additional help from the study team.
Acceptability of the app was deemed moderate to high. Overall, participants reported high satisfaction with the app, although only half of participants reported that the app would be appealing if they were currently attempting to quit. However, this is not surprising given that the current study focused on the usability of the extinction trials, so the basic app did not include other user engagement features that will be added later, nor did we provide other standard components of smoking cessation interventions. Whereas app functionality regarding CET was our top priority at this developmental phase, other features such as gamification or progress feedback were not yet implemented, nor was it yet integrated into a smoking cessation program. Thus, it is possible that our acceptability ratings were impacted by the somewhat barebones nature of this initial version of the app.
Exploratory analyses revealed some small but significant changes in smoking proxies and behavior over the week of app use. No changes were found in the cessation motivation or self-efficacy indices over this short time-frame. However, both strong urge to smoke and CPD decreased from baseline to follow-up. These are the clinical indices that theoretically should be most affected by CET, so it is encouraging that we detected a change signal even within this feasibility trial.
It is important to acknowledge the limitations of this study, particularly in comparison to a full-powered clinical trial. The sample size was underpowered for inferential statistics across participants, participants were not enrolling in a smoking cessation program per se, the CET intervention lasted only a week, and this version of the AR app included only AR presentation without features to enhance engagement (e.g. gamification, reminders to complete the AR sessions) or additional smoking cessation assistance. Moreover, this singlearmed trial did not include control or comparison arms, which limits causal conclusions about the exploratory longitudinal findings. Despite these limitations of a usability/acceptability trial, the study yielded encouraging findings with respect to both the usability and acceptability of an AR app that presents smoking-related stimuli in participants' natural settings. One remaining concern is the availability of AR-compatible smartphones, given that a relatively high proportion of potential participants were excluded because their devices were not AR-compatible. It is likely, though, that these individuals were using older devices, since most contemporary smartphones now have this capability. Thus, this concern should attenuate with the diffusion of newer technology.
Given the novelty of clinical AR apps, several research questions remain. First, this app was envisioned to be integrated into a comprehensive smoking cessation intervention. Therefore, research is needed to determine how best to pair the app with an existing treatment. Relatedly, the incremental efficacy of the AR app for smoking cessation should be tested in comparison to standard smoking cessation interventions (e.g. cognitive behavioral therapy, pharmacotherapy). There are also various theoretical, functional and implementation questions that have been elaborated elsewhere [15]. Other sensory modalities such as odor and sound could also be explored to enhance the realism and engagement of the AR stimuli [13]. Collecting daily information on the place, time, and social situation during the extinction trials would contribute to a better understanding of how the app was used.
Finally, the current results, together with our previous laboratory findings, have demonstrated that a smoking-related AR app elicits cue reactivity [20,21], produces extinction [22], and is feasible and acceptable to participants. Thus, not only does this suggest that CET for smoking cessation may be an effective and feasible treatment option, but AR-based CET could also be considered for the treatment of addictive behaviors involving other substances, such as alcohol or illicit drugs. AR-based CET offers the advantage of conducting cue-exposure in users' diverse substance-use environments-which should enhance the generalizability and maintenance of extinction-without the risks associated with the presence of actual substances or their paraphernalia. Thus, AR offers the potential to break through the barrier of the renewal effect, which has limited the utility of CET for smoking and other substance use [12]. Further, aside from extinction-based treatment, AR could also be utilized to train coping responses such as cognitive-behavioral or mindfulness skills that can be used when individuals encounter naturally-occurring high-risk situations [15]. Moreover, once AR headsets or glasses become more available and affordable, users will be able to view AR images directly, rather than via a smartphone screen, which should further improve realism, engagement, and, ultimately, efficacy. Limitations of AR for CET should also be acknowledged. These include reliance on a single sensory modality (vision) to date, in comparison to in vivo stimuli that may include tactile, olfactory, and auditory cues. Additionally, some drugrelated stimuli, such as the presence of particular individuals, are currently too complex to be created as AR images. Finally, the lack of direct therapist involvement during the CET sessions, while offering flexibility, may come at a cost of extinction session control, consistency, engagement, and adherence. Future research should examine the cost-benefit ratio of CET with respect to its advantages and limitations.
The current study was the first, to our knowledge, to test and demonstrate both usability and acceptability of an AR app for smoking cessation in the real world. Together with our previous laboratory studies, this research provides evidence supporting the clinical potential of AR-based CET for smoking cessation and relapse prevention. The current findings set the stage for an RCT testing the AR app as an adjuvant therapy for treating tobacco dependence, with potential applicability to other substances. | 6,965 | 2022-11-08T00:00:00.000 | [
"Medicine",
"Psychology",
"Computer Science"
] |
Investigation on the leading phase of Al 2 O 3 /YAG eutectic crystals prepared by directional solidification
The leading phase of eutectic materials has an effect on the solidification behavior and further influences on their properties. Many studies have been carried out on the leading phase of metal/metal and nonmetal/metal eutectic alloys. Nevertheless, few studies were focused on no-metal/no-metal eutectic materials. In the present work, the leading phase in the Al 2 O 3 /Y 3 Al 5 O 12 eutectic crystal during solidification was investigated by electron back-scattered diffraction and discussed according to the classical solidification theory. It is observed that the Y 3 Al 5 O 12 was the leading phase. The leading phase was determined by the wetting angle and the undercooling. At a given wetting angle, the Y 3 Al 5 O 12 would be the leading phase when the undercooling exceeds the critical value.
Al 2 O 3 /Er 3 Al 5 O 12 , 5,6 and Al 2 O 3 /Y 3 Al 5 O 12 (YAG) 7,8 eutectic crystals, have attracted great interest because of their potential applications at ultra-high temperatures. 9 These materials have superior flexural strength at temperature close to their melting point, [10][11][12][13] good oxidation, and creep resistance. 14,15 Eutectic growth in crystals/alloys involving the nucleation and coordinated growth of two or more phases from one liquid phase has always been the topic of interest. 16,17 Nucleation conditions usually determine the solidification evolution of a liquid melt. Two phases do not nucleate at the same time in eutectic alloys. There must be a leading phase, which nucleates and grows preferentially. 18 However, there are no uniform conclusions as to the cause of the leading phase formation in eutectic alloys. For instance, in the metal/ metal eutectic alloys, the component possessing smaller dynamic undercooling might nucleate first and act as the leading phase. However, for nonmetal/metal eutectic alloys, such as Al/Si eutectics, 19,20 Si is usually considered as the leading phase though it has a higher dynamic undercooling (1-2°C) than that of Al (0.02°C). That is attributed to constitutional supercooling in the alloys and detailed interpretation can be found in Ref.18。For nonmetal/nonmetal eutectics, such as Al 2 O 3 /YAG eutectic crystals, in which both phases have high melting entropies, the leading phase has not yet been studied in detail. Understanding the leading phase and nucleation during solidification is the kernel parameters for the microstructure optimization of the engineering ceramics. It can also ensure their excellent properties for the as-grown crystals, 19 so that it is necessary to study the leading phase in Al 2 O 3 /YAG binary eutectic crystals.
Herein, we investigated the leading phase of Al 2 O 3 /YAG eutectic crystals with two kinds of seed bars used. It is found that the leading phase of the Al 2 O 3 /YAG eutectic crystal is determined by the wetting angle and the undercooling. At a given wetting angle, the Y 3 Al 5 O 12 would be the leading phase when the undercooling exceeds the critical value. This finding is of clear significance for the microstructure optimization and design of the Al 2 O 3 /YAG eutectic crystal.
| EXPERIMENT PROCEDURE
Nano-Al 2 O 3 and Nano-Y 2 O 3 powders purchased from Beijing Yosoc Science& Technology Co. Ltd were mixed at a mole ratio of 79:21 and ball milled for 10 hours. Precursors were prepared with a pressure of 50 MPa for 30 minutes. Next, the precursors were sintered at 1550°C for 2 hours. Directional solidification was carried out with an optical floating zone (OFZ) furnace with 4 × 3 kW xenon lamps in vacuum environment. The withdraw rate was 10 mm/h. Crystals were prepared with up to 10 mm in diameter and ~120 mm in length. In order to start the solidification, a polycrystalline Al 2 O 3 bar and a single crystal bar (c-axis sapphire) were used as the seeds.
The as-prepared samples were sectioned by a diamond saw along the growth direction. The samples were ground with SiC paper untill 2000# and further polished down to 2.5 μm diamond paste.
The microstructure was observed with scanning electron microscopy (SEM, LEO, Supra35). The orientation was successfully determined by electron back-scattered diffraction (EBSD, NordlysNano).
| RESULTS AND DISCUSSION
The microstructure between the seed bar (polycrystal Al 2 O 3 ) and Al 2 O 3 /YAG eutectic crystals is shown in Figure 1. A layer of the YAG with 1-2 μm in thickness was identified on the seed crystal. The preliminary results indicate that the leading phase to nucleate and crystallize is the cubic YAG and then the trigonal alumina. The typical coupled eutectic growth morphology was observed afterward.
Specimens were prepared for EBSD investigation to further study the relationship between the seed bar and Al 2 O 3 / YAG crystals. Firstly, a polycrystalline alumina bar was used as the seed. Figure 2A shows the EBSD band-index micrographs at the interface between the seed bar and Al 2 O 3 /YAG eutectic crystals. The dark area is Al 2 O 3 , and the gray area is the YAG. Figure 2B shows the corresponding EBSD orientation maps of Al 2 O 3 . In order to show the relationships clearly, the EBSD map of the YAG is not shown. It can be observed that the YAG (the white area) is the leading phase to crystallize on the seed. It is worth noting that the orientation of Al 2 O 3 in the as-prepared eutectic crystals is totally different with the orientation of the Al 2 O 3 seed. Namely, the epitaxial growth of Al 2 O 3 does not occur, which indicates that Al 2 O 3 is not the leading phase. Otherwise, the Al 2 O 3 in the eutectic crystal would grow epitaxially along the seed and the orientation would be the same as the seed. 21 For comparison, a c-sapphire was used as the seed bar. Figure 3 shows the EBSD map between the c-sapphire and the eutectic crystal. The white area in Figure 3A is the YAG. The morphology of Al 2 O 3 in the eutectic is quite different from that of the c-sapphire. The interface between the c-sapphire and the eutectic is straight. These results further demonstrate that the YAG is the leading phase during solidification. In this part of the work, the liquid ceramic zone was soaked for 1 minute to ensure that the c-sapphire was melted. Therefore, the orientation relationship of Al 2 O 3 in the eutectic is the same as that of the c-sapphire. The goal is to verify the effect of interfacial energy on microstructure, and results can be found in Ref. 7 The heterogeneous nucleation rate, I, is described by terms involving the rate of atom attachment to the growing nucleus, the concentration of clusters, and the activation barrier for nucleation. 22 As reported by Yan et al, 23 the phase with larger I is more likely to be the leading phase. The value of the nucleation rate of binary Al 2 O 3 /YAG eutectic can be calculated based on the classical nucleation theory 20 : where ΔG 0 n is activation energy for the nucleation of the critical cluster radius, ΔG d is activation free energy for diffusion across S/L interface, and k B is Boltzmann constant, 1.38 × 10 −23 J/K. ΔG 0 n can be written as 20 follows: where σ is S/L interfacial energy, Δs f is entropy of fusion, θ is wetting angle, and f (θ) = (2 + cosθ) (1 − cosθ) 2 /4 is wetting angle factor.
(1) The nucleation rates of Al 2 O 3 and the YAG are calculated using the parameters listed in Table 1. 13,24,25 For heterogeneous nucleation, the wetting angle factor shows striking effects on the nucleation rate. Since wetting condition is good between nucleus (either Al 2 O 3 or the YAG) and the substrate (Al 2 O 3 ), that is, small θ, two cases of f (θ) = 0.01 where θ is 30° and f (θ) = 0.001 where θ is 15° are considered. The results are presented in Figure 4. The nucleation rate of the YAG becomes higher than that of Al 2 O 3 when the undercooling is more than 97°C for f (θ) = 0.001 or 317°C for f (θ) = 0.01.
According to Figure 4, the competitive nucleation behavior between Al 2 O 3 and the YAG changes when the undercooling ΔT exceeds a critical value. In the OFZ technique, the temperature of the liquid eutectic is higher than 1820°C (melting point). The temperature of the alumina seed (substrate) is about 1000°C. High undercooling is expected in the process. Therefore, the nucleation rate of the YAG is larger than that of Al 2 O 3 , and the cubic YAG is expected to nucleate from the melt first and forms as a leading phase during directional solidification.
| CONCLUSIONS
In summary, the leading phase in Al 2 O 3 /YAG eutectic crystals during solidification was investigated with a polycrystalline Al 2 O 3 bar and c-sapphire as the seed, respectively. The YAG was the leading phase. The heterogeneous nucleation rate was calculated based on the classical nucleation theory. The leading phase was determined by the wetting angle and the undercooling. At a given wetting angle, the YAG might be the leading phase when the undercooling exceeds the critical value. 1∕3 , where k is a shape geometry factor, and for nonmetals, k = 0.32, ΔH m is melting entropy, N 0 is Avogadino's constant, and V s is solid molar volume.
F I G U R E 4
Calculated nucleation rates of Al 2 O 3 and the YAG for heterogeneous nucleation | 2,282 | 2020-06-04T00:00:00.000 | [
"Materials Science"
] |
A novel method to remove impulse noise from atomic force microscopy images based on Bayesian compressed sensing
A novel method based on Bayesian compressed sensing is proposed to remove impulse noise from atomic force microscopy (AFM) images. The image denoising problem is transformed into a compressed sensing imaging problem of the AFM. First, two different ways, including interval approach and self-comparison approach, are applied to identify the noisy pixels. An undersampled AFM image is generated by removing the noisy pixels from the image. Second, a series of measurement matrices, all of which are identity matrices with some rows removed, are constructed by recording the position of the noise-free pixels. Third, the Bayesian compressed sensing reconstruction algorithm is applied to recover the image. Different from traditional compressed sensing reconstruction methods in AFM, each row of the AFM image is reconstructed separately in the proposed method, which will not reduce the quality of the reconstructed image. The denoising experiments are conducted to demonstrate that the proposed method can remove the impulse noise from AFM images while preserving the details of the image. Compared with other methods, the proposed method is robust and its performance is not influenced by the noise density in a certain range.
Introduction
Atomic force microscopy (AFM) is a powerful tool in the fields of nanoscience and nanotechnology, because it can be used to acquire high-resolution images of all kinds of samples in various environments. Nowadays, AFM is also used to acquire information about the physical properties of the samples [1]. The quality of the images is of fundamental importance for acquiring information about structure and properties of the sample surface at the nanoscale. The more details are shown by the image, the more information about the sample will be obtained. However, noise that comes from the environment and the instrument itself will reduce the quality of the AFM images, and details of the image may be hidden in the noise [2].
There is no method that can filter out all types of noise at the same time. Therefore, different filtering methods, according to the type of the noise, need to be applied to acquire high-quality images with removed noise [3,4]. Chen [5] has proposed "unsupervised destripe" to remove the non-uniform stripe noises from AFM images. Orthogonal wavelets are applied to filter the Gaussian noise from AFM images [6]. For the impulse noise in AFM images, the median filter is generally applied [7,8], where every pixel is replaced by the median value of pixels of the neighborhood. In order to reduce the image blurring and the loss of details, the adaptive median filter [9] is proposed where only the noisy pixels are replaced. The removal of impulse noise is decomposed into two steps: the identification of noisy pixels and the recovery of the true image. To further improve the denoising performance, machine learning [10] and neural networks [11,12] are introduced to help remove the impulse noise. First, machine learning or neural networks are used to improve the accuracy of the recognition of noisy pixels. Then, the noise pixels are replaced by the median value. The key to improve the filtering effect of these methods is to more accurately identify the noisy pixels. The performance of the filtering will decrease as the impulse noise density increases [13]. It is noteworthy that an image with the noisy pixels removed can be seen as an undersampled image that is generated through compressed sensing (CS) sampling. Therefore, compressed sensing can be introduced for the removal of impulse noise. If the sampling rate is higher than the sparsity of the original signal, the original signal can be recovered through compressed sensing [14]. This means that the noise removal does not degrade the image quality.
CS is an established way to use few compressed data to represent high-dimensional data. It has been used to recover the signal from data sampled far below the classic Nyquist sampling rate under certain conditions [15,16]. Generally, the purpose of introducing CS into AFM is to increase the imaging rate [17][18][19]. The essentials for applying CS in AFM are the sparse representation of the image, the generation of a measurement matrix and the design of a reconstruction algorithm. For the AFM image, it will show sparsity in an orthogonal transform domain, although it is not sparse in the spatial domain. The identification process of noisy pixels can be seen as compressed sampling. However, it is impossible to ensure that the process meets the demand of theoretical reconstruction guarantees [20,21]. Bayesian compressed sensing (BCS) [22] provides a better reconstruction performance than other methods when the theoretical reconstruction guarantees are partially destroyed [23]. Different from other reconstruction methods, BCS does not need the acquisition of the sparsity of the original signal and can achieve a good reconstruction even when the original signal is not sparse [14]. In addition, AFM tracks the sample line-by-line to obtain the image, which means that the acquisition process of two adjacent lines of the image can be regarded as irrelevant. Therefore, a novel method to remove impulse noise for AFM image using BCS is obtained according to the AFM imaging characteristics, shown as Figure 1. Here, a novel method based on BCS is proposed to remove the impulse noise from AFM images. First, the proposed method transforms the removal of impulse noise from AFM images into a problem of CS imaging in AFM, and the details of the method are presented. Then, denoising experiments are shown to demonstrate that the proposed method can remove the impulse noise while retaining the details of the image. In addition, the proposed method is robust and its performance will not be affected by the impulse noise density within a certain range. Finally, detailed discussions about the proposed method are given.
Details of Denoising through Bayesian Compressed Sensing
Introduction of compressed sensing in AFM Usually, the AFM tracks a sample following a raster scan mode and every pixel of the image is sampled independently. Therefore, a n × n AFM image can be seen as a 2D matrix and all rows of the matrix are stacked together to generate a long vector x T ∈ R N×1 (N = n × n), where N is the total number of pixels of the AFM image. If the AFM conducts M samplings, the process of the AFM imaging can be expressed as (1) where y∈ R M×1 is the measurement result, Φ ∈ R M×N is the measurement matrix, x T ∈ R N×1 represents the true sample to-pography, and e is the measurement noise. If M is smaller than N, then the AFM imaging is CS imaging. Because the AFM only samples one pixel of an image at a time and moves around to obtain an image, the compressed sampling of the AFM can be seen as randomly collecting partial elements of x. The undersampling process can be modeled by an identity matrix with some rows removed, shown in Figure 2. In order to recover the true AFM image from the undersampled data, the sample topography x must be sparse, i.e. some elements x must be zero, and the measurement matrix Φ must meet certain constraint conditions [24]. However, an AFM image cannot be sparse in real space. A sparse dictionary Ψ ∈ R N×N is introduced to improve its sparsity [25]. Thus, x T can be expressed as x T = Ψα (α ∈ R N×1 ) and α is sparse, i.e., some elements of α are zero. Equation 1 can be rewritten as (2) where Γ = Φ × Ψ ∈ R N×N . Since the number of measurements M is smaller than the number of unknowns N, Equation 2 cannot be solved directly. Because there are some zero elements in α, the solution of Equation 2 can be expressed as an L 0 -norm optimization problem (3) where ε is the bound of the noise. The solution of Equation 3 is a non-deterministic polynomial problem (NP problem) [26]. When both the number of total elements of α and its non-zero elements are large, a solution of Equation 3 cannot be achieved in practice. Generally, the non-convex L 0 -norm can be approximated by a convex L 1 -norm [27], (4) Now, solving Equation 2 is transformed to an L 1 -norm minimization.
Identifying and deleting noisy pixels
The problem of impulse noise removal has been transformed into a problem of compressed sensing. Therefore, the first thing is to obtain the measurement matrix and the undersampled image, which can be achieved by identifying and removing the noisy pixels. There are two ways, including interval approach and self-comparison approach, to identify noisy pixels. For the interval approach, noisy pixels can be identified according to (5) where a is the lower limit of the pixel value and b is the upper limit. A pixel x i out of the range (a,b) will be regarded as noisy pixel and will be defined as 0. After all elements have been checked according to Equation 5, the elements defined as 0 will be removed directly to generate an undersampled image. When the distribution of noise intensity is relatively narrow, the noisy pixels can be identified easily by setting a lower limit a and an upper limit b. Although some regular pixels may be mistaken for noisy pixels, this will not reduce the quality of the denoised image [14]. When the distribution of the noise intensity is dispersive, appropriately increasing the lower limit while decreasing the upper limit allows for more noisy pixels to be identified. However, this may also cause more regular pixels to be removed by mistake, and even a large number of pixels in a certain area might be deleted. If there are too many regular pixels removed by mistake, the quality of the denoised image will drop because of the loss of image information. Therefore, the range of the interval should be set carefully.
The self-comparison approach is another way to identify noisy pixels. Noisy pixels are identified by comparing the target pixel with its neighborhood. At first, an r × r (r < n) area of the image with the target pixel in the center is chosen, where x max and the x min are the maximum pixel value and minimum pixel value in the area, respectively. Then, the target pixel value x i (i = 1,…, N) will be compared with x max and x min according to (6) If the pixel x i is exactly the maximum value or the minimum value of the area, it will be regarded as a noisy pixel and defined as 0. All pixels of the image will be checked according to Equation 6. After all the pixels were checked, the elements defined as 0 will be removed directly. The self-comparison approach avoids that a large number of pixels in the same area is removed by mistake. Although some regular pixels might be deleted by mistake especially for pixels with large or small values, the quality of the denoised image will not be reduced because of the robustness of the proposed method. However, if there are a lot of flat areas in the image, the approach may delete regular pixels incorrectly, and its performance will be worse than that of the interval approach.
When CS is applied to AFM imaging, every row of the AFM image is stacked together to build a 1D vector x T . Although a 2D denoised image is obtained, the reconstructed result directly obtained from AFM CS imaging is a 1D vector of dimension N. Therefore, the neighborhood of the target pixel has been changed to a 1D interval with a size of r. However, converting an image to a 1D vector will lose 2D structural information, which means it may not be the best choice to treat the image as a vector. Because the AFM tracks the sample line-by-line, which means the acquisition process for each row is independent, treating the image as a vector will not lose image information. In addition, the end of one row is independent of the beginning of its adjacent row. Therefore, the self-comparison can be conducted line-by-line independently. Thus, an undersampled image is obtained by removing the noisy pixels.
The removal of noisy pixels can be regarded as compressed sampling. There are only two possibilities for every pixel. It is either a noisy pixel or a noise-free pixel. The noisy pixels are removed and the noise-free pixels are preserved. Therefore, the elements of the measurement matrix Φ only consist of 1 and 0, which is the same as the AFM CS imaging. If there is no impulse noise in the image, the measurement matrix Φ will be an identity matrix. Otherwise, the corresponding rows of the identity matrix need to be removed, as shown in Figure 2. Because the identification of the noisy pixels is performed lineby-line, the measurement matrix Φ should be constructed according to the corresponding row. Thus, the process of impulse noise removal can be described by Equation 2. n measurement matrices and measurement results will be generated for an AFM image with a size of n × n.
Fast reconstruction of the image
Usually, there will be only one measurement matrix Φ and one measurement result y obtained for the AFM CS imaging because all the rows of the image are stacked together to generate a vector [19,28]. In CS AFM imaging, recovering the true sample topography from the undersampled information has high computational cost. For a n × n AFM image, the compressed sampling matrix is M × N (n ≪ M < N), which requires a lot of memory space and computing resources [29]. For a typical AFM image with a resolution of 256 × 256 pixels, only the matrix will occupy 32GB RAM and the reconstruction time exceeds 1 h on a server. If the resolution of the AFM image continues to increase, the true sample topography can not be recovered easily. Therefore, the traditional reconstruction methods applied in AFM imaging are not suitable to recover the AFM image obtained from removing the noisy pixels. In order to remove the impulse noise in the AFM image quickly, a faster reconstruction of the image needs to be achieved.
If each row of the AFM image is regarded as a sub-vector, these sub-vectors are independent of each other, because they are simply stacked together. In addition, the AFM tracks the sample surface line-by-line to measure the topography, which means that the acquisition process of two adjacent lines of the image is independent. Because the tracking of one line is hardly affected by the adjacent tracking line, every tracking line of the AFM image can be compressed sampled and recovered independently. Therefore, the reconstruction of the AFM image can be decomposed into the reconstruction of a series of vectors. Recovering the image from undersampled AFM data line-byline will not reduce the quality of the image compared with the traditional method. After the removal of noisy pixels, n measurement matrices and results will be obtained, which means each row will be processed independently. In order to obtain a denoised image, the reconstruction will be performed n times.
CS provides a lot of methods to recover the original signal, such as basis pursuit [30], iterative hard thresholding [31], orthogonal matching pursuit [32] and Bayesian compressed sensing [33]. The measurement matrix generated by recording the position of the noise-free pixels may not fully meet theoretical reconstruction guarantees. In our previous work [34], which aims to develop a fast AFM image reconstruction from undersampled AFM data, BCS has been proven to be a better method to reconstruct AFM images from undersampled AFM data than other methods. There is also no guarantee that the measurement matrix obtained from the removal of noisy pixels will fully satisfy theoretical reconstruction guarantees. The developed reconstruction algorithm can be used to recover the image. The details of the BCS reconstruction algorithm in AFM are given in our previous work [34].
Results and Discussion
Denoising experiments are conducted to evaluate the performance of the proposed method. Impulse noise is added to AFM images to generate noisy images. The AFM images are con- verted to 8-bit grayscale images, and the resolution of the AFM images used in the paper is 256 × 256 pixels. The proposed method, median filter (window size: 7 × 7 pixels) and adaptive median filter (maximum window size: 7 × 7 pixels, minimum window: 3 × 3 pixels) are used separately to remove the impulse noise. The peak signal-to-noise ratio (PSNR) [35] and the structural similarity (SSIM) index [36] are used to evaluate quantitatively the performance of the filtering. The details of PSNR and SSIM can be found in [37]. The AFM image before adding impulse noise is considered as the standard image, and the values of PSNR and SSIM are obtained through comparing the denoised image with the standard image. If the value of PSNR is greater than 35 dB, the quality of the denoised image is very good and the distortion is negligible. When the PSNR is less than 30 dB, the performance of the denoising is poor and the distortion of the image is not negligible. When the SSIM is greater than 0.9, the denoising is acceptable regarding the visual quality.
The denoising performance of the proposed method using two different noise identifying approaches are tested separately. First, BCS denoising using the interval approach (interval-BCS denoising) is used to remove the impulse noise, and the denoised images are compared with those obtained by applying the median filter and the adaptive median filter. The experiments are conducted in Matlab on a personal computer (Intel Core i5, 8 GB RAM, Windows 10 x64). Impulse noise with a density of 0.4 is added to the images. The results are shown in (Figure 3a,f). In addition, the values of PSNR of the denoised image obtained by interval-BCS denoising are greater than 35 dB, which means the distortion caused by denoising is negligible. The PSNR value obtained by the median filter is less than 30 dB, which means that non-negligible distortion has occurred. The PSNR value obtained by the adaptive median filter is larger than 30 dB but less than 35 dB. The SSIM values obtained by the interval-BCS denoising, the median filter and the adaptive median filter are more than 0.9, which means all methods remove the impulse noise successfully from the perspective of visual quality. The interval-BCS denoising removes the impulse noise from the AFM image successfully while preserving the details, and the image distortion caused by the proposed method is negligible.
For the interval-BCS denoising, the two parameters upper limit and lower limit must be set. Different upper limits and lower limits are used to remove the impulse noise in the Figure 3b and Figure 3g. Figure 3e was obtained with an upper limit of 254 and a lower limit of 5, and Figure 3j was obtained with an upper limit of 250 and a lower limit of 5. Inappropriate upper and lower limits will cause regular pixels in a certain area to be removed, which may cause a drastic drop in the image quality. In Figure 4b was removed successfully with different upper and lower limits, the loss of image details can be seen clearly in Figure 4d-f. If the noisy pixels are close to the values of noise-free pixels, it is difficult to chose suitable upper and lower limits to distinguish noise-free pixels and noisy pixels.
The images obtained by BCS denoising using the self-comparison approach (self-comparison-BCS denoising) are shown in Figure 5. According to Equation 6, the comparison neighborhood is set as [−4,4], where the position of the target pixel is the origin. It can be seen that the impulse noise in Figure 5a2-d2 has been removed while details of the images are presented clearly. The images (Figure 5a5-d5) obtained by the self-comparison-BCS denoising are almost the same as the original images (Figure 5a1-d1). The PSNR values of the denoised images obtained by the self-comparison-BCS denoising are also greater than 35 dB (except in Figure 5b5), which means the image distortion caused by self-comparison-BCS denoising is negligible. However, there are non-negligible distortions occurring in the images processed by the median filter with PSNR values below 30 dB. And the PSNR values of the images acquired by the adaptive median are below 35 dB (some are less than 30 dB), which means its performance is also worse than that of the self-comparison-BCS denoising. In addition, the SSIM values obtained by the interval-BCS denoising, the median filter and the adaptive median filter are similar. Therefore, the self-comparison-BCS denoising removes successfully the impulse noise from AFM images while preserving details.
The comparison of self-comparison-BCS denoising and the interval-BCS denoising is shown in Table 1. It can be seen that the interval-BCS denoising and the self-comparison-BCS denoising yiled almost the same results regarding the removal of impulse noise from AFM images. When the image contains periodic structures and flat areas, the performance of the interval-BCS denoising is better than that of the self-comparison-BCS denoising, as shown in Table 1. For the noisy image in Figure 5c2, the performance of the interval-BCS denoising is better than that of the self-comparison-BCS denoising. The selfcomparison-BCS denoising may remove noise-free pixels incorrectly in the flat area, because pixels in the flat area have nearly the same values.
AFM images with added noise of different densities are processed by the self-comparison-BCS denoising to evaluate its robustness ( Figure 6). When the noise density is low, both the proposed method and the adaptive method exhibit excellent denoising performance. Although the denoising performance of the adaptive median filter is better than that of the proposed method when the noise density is low, its performance drops sharply with increasing noise density. In addition, impulse noise Figure 5. "Self" refers to the self-comparison-BCS denoising, "interval" refers to the interval-BCS denoising, "median" refers to the median filter, and "adaptive" refers to the adaptive median filter. filtering methods using machine learning [10], support vector machines [38], or neural networks [12] encounter the same problem as the adaptive median filter. When the noise density is lower than 0.5, the values of PSNR and SSIM acquired by the proposed method remain almost constant, with PSNR values of more than 40 dB and SSIM values of more than 0.9. That is to say, a high-quality denoised image with negligible distortion and high visual quality can always be obtained, regardless of the noise density. The proposed method shows a more stable performance than the adaptive median filter, and its denoising performance is better than that of the adaptive median filter when the noise density increases.
As the noise density continues to increase, the denoising performance of the proposed method will become worse. This is consistent with the theory that CS requires the sampling rate to be not less than the lower limit to guarantee reconstruction images without distortion [14], which can be described by (7) where M is the number of noise-free pixels, K is the sparsity of the signal, , and N is the total number of pixels. When the noise density is low, only a small number of pixels will be removed, which guarantees M ≥ CK ln(N/K). As the number of noisy pixels continues to increase, more and more pixels are removed from the image. When the number of remaining pixels M is less than CK ln(N/K), a decline of the quality of the image is inevitable. The robustness of the proposed method allows it to be used to remove impulse noise with different densities from AFM images. In addition, the erroneous removal of noise-free pixels by the proposed method can be regarded as reducing the sampling rate. A high-quality image can be obtained as long as the number of pixels is greater than CK ln(N/K). Therefore, erroneous removal of partial noise-free pixels will hardly affect the denoising performance. The proposed method will have a more stable performance in the removal of impulse noise than replacing the noisy pixel with the median value.
To further evaluate the performance of the proposed method, an AFM image with a high density of added impulse noise ( Figure 7) and an image of a polymer-blend sample (stiff polystyrene (PS) and soft polybutadiene (PB) polymer, Nanosurf) with noise acquired with a Park Systems XE-100 AFM ( Figure 8) are processed by the proposed method. It can be seen from Figure 7 that the performance of the proposed method is better than that of the median filter and the adaptive median filter. In addition, the noise can be seen clearly in the areas (A) and (B) of Figure 8a. The interval-BCS denoising (Figure 8b), the self-comparison-BCS denoising (Figure 8c), the median filter (Figure 8d,e) and the adaptive median filter (Figure 8f) are used to remove the noise. Comparing Figure 8a with Figure 8d, the noise in the area (A) is not removed completely. Although the noise in the areas (A) and (B) is removed completely after median filtering, the small PB islands in area (C) disappears along with the noise, which means a loss of details. The images obtained through median filter look smoother, but all details of the images are sacrified. There are more details reserved in the Figure 8f compared with the images obtained by the median filter. However, the proposed method reserves more details (small dots in area (C)) of the image.
Conclusion
A novel method to remove impulse noise from AFM images based on the Bayesian compressed sensing is proposed, which transforms the image denoising problem to a compressed sensing imaging problem of the AFM. First, two different ways, interval approach and self-comparison approach, are proposed to identify the noisy pixels. An undersampled AFM image can be obtained by removing the identified noisy pixels. Second, a series of measurement matrices is constructed by recording the position of the remaining pixels. Third, the BCS reconstruction algorithm is used to recover the image, and each row of the AFM image is reconstructed separately. The denoising experiments demonstrate that the proposed method can remove impulse noise from AFM images while preserving the details of the images. Upper and lower limits are key parameters for the interval-BCS denoising. The self-comparison-BCS denoising identifies noisy pixels by comparing a target pixel with its neighborhood. However, the performance of the self-comparison-BCS denoising is worse than that of the interval-BCS denoising when the image contains a lot of flat areas and periodic structures. The proposed method is robust because its performance remains stable in a certain noise density range, and the erroneous removal of few noise-free pixels hardly affects its performance. Therefore, the proposed method can be used to remove the impulse noise from AFM images with different noise densities without worrying about the degradation of the final image quality. The proposed method is an effective, competitive and robust method to remove impulse noise from AFM images. | 6,272 | 2019-11-28T00:00:00.000 | [
"Engineering",
"Materials Science",
"Physics"
] |
Analytical algorithm of weighted 3D datum transformation using the constraint of orthonormal matrix
Based on the Lagrangian extremum law with the constraint that rotation matrix is an orthonormal matrix, the paper presents a new analytical algorithm of weighted 3D datum transformation. It is a stepwise algorithm. Firstly, the rotation matrix is computed using eigenvalue-eigenvector decomposition. Then, the scale parameter is computed with computed rotation matrix. Lastly, the translation parameters are computed with computed rotation matrix and scale parameter. The paper investigates the stability of the presented algorithm in the cases that the common points are distributed in 3D, 2D, and 1D spaces including the approximate 2D and 1D spaces, and gives the corresponding modified formula of rotation matrix. The comparison of the presented algorithm and classic Procrustes algorithm is investigated, and an improved Procrustes algorithm is presented since that the classic Procrustes algorithm may yield a reflection rather than a rotation in the cases that the common points are distributed in 2D space. A simulative numerical case and a practical case are illustrated.
Background
Three-dimensional datum transformation is a frequently used work in geodesy, engineering surveying, photogrammetry, mapping, geographical information science (GIS), machine vision, etc., e.g., Aktuğ (2009), Akyilmaz (2007), El-Mowafy et al (2009), Ge et al (2013, Han and Van Gelder (2006), Horn (1986), Kashani (2006), Neitzel (2010), Paláncz et al (2013), Soler (1998), Soler and Snay (2004), Soycan and Soycan (2008), Zeng (2014). Usually, in order to compute the transformed coordinate, the transformation parameters in the transformation model (e.g., seven-parameter similarity transformation, see Aktuğ 2012, Leick 2004, Leick and van Gelder 1975 need to be solved with several control points in advance. So far, a large number of algorithms for recovering the parameters have been presented, which can be divided into two classes. One is the numerical iterative algorithm, and the other one is analytical algorithm. The former needs the initial parameter values, linearization, and iterative computation, e.g., Zeng and Tao (2003), Chen et al. (2004), Zeng and Huang (2008), El-Habiby et al. (2009), Zeng and Yi (2011), etc. In the case that the rotation angle is large, the initial values are difficult even never to be obtained in advance, and consequently, it leads to the failure of solution (see Zeng and Yi 2011). We should note that if global optimization algorithms are used, then no initial values are required (see, e.g., Xu 2003a, Xu 2003b, Xu 2002. In contrast, the latter does not involve the initial parameter values, linearization, and iterative computation, and can give the exact solution quickly. However, because of the complexity of mathematical derivation, only several analytical algorithms have been put forward. Grafarend and Awange (2003) presented the Procrustes algorithm which utilized the singular value decomposition technique. Shen et al. (2006) presented a quaternion-based algorithm which utilized the quaternion property and eigenvalue-eigenvector decomposition. Han (2010) presented a stepwise approach to individually calculate the transformation parameters by the physical interpretation of similarity transformation. Zeng and Yi (2010) presented a new analytical algorithm based on the good properties of Rodrigues matrix and Gibbs vector.
The present study is organized as follows. In the Methods section, a new analytical algorithm to weighted 3D datum transformation is derived in detail, based on the Lagrangian extremum law with the constraint that the rotation matrix is an orthonormal matrix. In the meanwhile, its stability is discussed when the distribution of 3D control points degenerates into 2D (planar) or even 1D (collinear). The presented algorithm and classic Procrustes algorithm are compared, and an improved Procrustes algorithm is presented since that the classic Procrustes algorithm may yield a reflection rather than a rotation in the cases that the common points are distributed in 2D space. In the Results and discussion section, a simulative numerical case and a practical case are given to demonstrate the presented algorithm, classic Procrustes algorithm, and improved Procrustes algorithm. Lastly, conclusions are made in the Conclusions section.
Presentation of the basic algorithm
The seven-parameter similarity transformation model can be expressed as subject to where a i ¼ X i Y i Z i ½ T and b i ¼ x i y i z i ½ T i = 1, 2, ⋯, n are the 3D coordinates of a common point in target and source coordinate systems of transformation, tagged as system A and system B, respectively. Superscript T stands for transpose, I 3 denotes a 3 × 3 identity matrix, and det means computing the determinant of matrix. λ denotes the scale parameter, t ¼ ΔX ΔY ΔZ ½ T denotes the three translation parameters, and R denotes the 3 × 3 rotation matrix, which contains the three rotation angles. Supposing R is formed by rotating angles α, β, and γ counterclockwise around the Cartesian X, Y, and Z axes, respectively, then R can be expressed by rotation angles as R ¼ cos γ cos β sin γ cos α þ cos γ sin β sinα sin γ sin α− cos γ sin β cos α − sin γ cos β cos γ cos α− sin γ sin β sin α cos γ sin α þ sin γ sin β cos α sin β − cos β sin α cos β cos α Using (3), the rotation angles α, β, and γ can be computed if R is recovered as where R ij is the element of R in the ith row and jth column. Introducing the following matrix form of the coordinates as then Eq.
(1) is rewritten as where 1 n ¼ 1 ⋯ 1 ½ |fflfflfflfflfflfflfflffl{zfflfflfflfflfflfflfflffl} n , i.e., a row vector with n elements and all elements are 1. It is obvious that in order to determine the seven parameters, the number n of common points must be greater than or equal to 3. Considering the coordinates include errors, Eq. (7) is transformed as where E is the transformation error matrix. The criterion of the least squares can be constructed by the Lagrangian extremum law with the constraint of Eq.
(2), i.e., orthonormal matrix as follows. It is worthy of note that the constraint det(R) = + 1 is not imposed, since it can be separately treated at some extra computation as in the Stability of the basic algorithm and its modification section.
where tr denotes trace operation of matrix, Λ is a symmetric Lagrangian multiplier matrix, and P represents the weight matrix that every point has an isotropic weight and is independent of each other. Substituting the expression of E easily obtained from Eq. (7) into Eq.
(8), one can obtain If and only if the following conditions are satisfied, the Lagrangian extremum exists.
By Eqs. (9) and (10), one gets and thus, Obviously, t is the function form of λ and R. Substituting Eq. (16) into Eq. (7), one gets where I n − (1 n P1 n T ) − 1 P1 n T 1 n is the centering matrix.
, and thus, they are the centralized coordinate matrix, and then, Eq. (17) is written as Derivation of Eq. (19) makes use of the properties of trace operation, i.e., Substituting Eq. (19) into Eq. (11), one gets Obviously, λ is the function form of R. Substituting Eq. (19) into Eq. (12), one gets and thus, Substituting Eq. (19) into Eq. (13), one gets the following equation and the derivation process is given in the Appendix.
further substituting Eq. (24) into Eq. (25), one gets and substituting Eq. (27) into Eq. (24), one gets Let then Eq. (28) can be rewritten as Note that D T D is symmetric, non-negative definitive and so has non-negative real eigenvalues. The inverse of the square root of D T D can thus be computed using eigenvalue-eigenvector decomposition. where d i and v i for i = 1, 2, 3 are the eigenvalues and corresponding eigenvectors of the matrix D T D. So, Eq. (30) can be written as Stability of the basic algorithm and its modification Obviously, the construction of the inverse of the square root of D T D , i.e., Eq. (31), fails if one or two of d i for i = 1, 2, 3 equals to 0. Assume that the eigenvalues of the matrix D T D satisfies the following condition.
When the common points are distributed in a plane, i.e., 2D space, the matrix D T D is singular and of rank 2, and thus, One can compute the rotation matrix as follows.
where the sign ± is chosen so that det(R) = + 1 is satisfied.
The above case is the ideal one, but in the case that the common points are distributed in an approximate plane, although When the common points are distributed in a line, i.e., 1D space, the matrix D T D is singular and of rank 1, and thus, The rotation matrix is impossible to recover in a whole; however, one can recover at most two rotation angles by the following formula.
and the utilization of Eq. (40) make it feasible to compute the translation parameter and scale parameter. The above case is the ideal one, but in the case that the common points are distributed in an approximate line, although
Comparison to classic Procrustes algorithm and improvement of the classic Procrustes algorithm
The classic Procrustes algorithm presented by Grafarend and Awange (2003) is a well-known analytical algorithm of 3D datum transformation. It is also based on the Lagrangian extremum law, similarly to the presented algorithm in this paper. But differently, it does not constrain the orthonormal matrix condition. For the Procrustes algorithm, due to utilization of the singular value decomposition technique, the computed rotation matrix always satisfies the constraint condition R T R = I 3 ; however, in the cases that the common points are Translation Angle Scale Total Translation Angle Scale Total Translation Angle Scale Total 3 D 3 3 1 7 3 3 1 7 3 3 1 7 2D 3 3 1 7 0-3 0 -3 1 1 -7 3 3 1 7 1D 3 0-2 1 4 -6 3 0 -2 1 4 -6 3 0 -2 1 4 -6 distributed in a rigid or approximate plane, this situation that det(R) = − 1 rather than det(R) = + 1 usually happens. This means the computed R is a reflection instead of a rotation. For the presented algorithm in this paper, the constraint condition R T R = I 3 is imposed in the computation, and thus, it is always satisfied. And in the cases that the common points are distributed in a rigid or approximate plane, the sign ± in Eq. (36) is properly chosen so that det(R) = + 1 is satisfied.
To recover the exact rotation matrix by Procrustes algorithm when det(R) = − 1, the computation formula of R, i.e., Eq. (22) in Grafarend and Awange (2003) should be improved as whereṼ ¼ V 1 V 2 −V 3 ½ , V 1 , V 2 , and V 3 are the column matrix of V, and V 3 is the column matrix that corresponds to the singular value that is 0. Therefore, the general improved computation formula of R, i.e., Eq. (22) in Grafarend and Awange (2003) is which is stable for the cases that the common points are distributed in 3D and 2D spaces including approximate 2D space. In the case that the common points are distributed in 1D space, Eq. (43) can recover at most two rotation angles and be used to compute the translation parameter and scale parameter.
Simulative case
The case data is simulated as follows. In order to investigate the stability performance of the presented algorithm (PA) in this paper, the classic Procrustes algorithm (CPA) and improved Procrustes algorithm (IPA) in the cases that the control points are distributed in 3D, 2D, and 1D spaces, six sets of control point in system B is first given in Table 1, of which set 1 is distributed in 3D space, sets 2, 3, and 4 are distributed in 2D space, and sets 5 and 6 are distributed in 1D space. The distribution of six sets of control point is depicted intuitively in Fig. 1. Set 2 has only three control points, and it is necessary for the least point number to solve the seven parameters. Secondly, the theoretical seven parameters are given in Table 2. For the sake of an efficient test of the algorithms, the rotation angles are designed to be big angles. Thirdly, the coordinates of control points in system A are computed by Eq. (1), and the result is listed in Table 3. In this case, the stability of the three algorithms is focused, so the weight matrix is designated to identity matrix for easy demonstration. Next, the transformation seven parameters are recovered by the three algorithms, and the result is listed in Table 4. ME in Table 4 is the mean error and computed by Fig. 2 The distribution of seven control points in local system It is seen from Table 4 that the results of seven parameters are identical and accurate for the three algorithms in the case of set 1, i.e., the case that the control points are distributed in 3D space. For the cases that the control points are distributed in 2D space, PA and IPA have the identical and accurate results of seven parameters, but CPA has one correct result of seven parameters for set 4 and two wrong results of rotation angles for sets 2 and 3, because reflections rather than rotations are found, and all translation and scale parameters are correct. For the cases that the control points are distributed in 1D space, i.e., sets 5 and 6, the three algorithms all recover the correct translation and scale parameters and at most recover two rotation angles. The number of solvable transformation parameters for the three algorithms is counted and listed in Table 5.
Actual case
The case data is from Grafarend and Awange (2003). The coordinates of control points in system B (local system) and A (WGS-84 system) is listed in Table 6. The distribution of seven control points in local system is depicted in Fig. 2. From this figure, it is seen that the distribution of control points are in an approximate plane. In the process of PA computation, condition number of matrix D T D is 2.5 × 10 11 , so Eq. (32) is seriously ill-conditioned and yields a biased solution if not processed. When the weight matrix is an identity matrix, the computed results of seven parameters with PA and CPA are listed in Table 7. For the situation that the weight matrix is a point-wise matrix, i.e., every point has isotropic weight and is independent of each other, the point-wise matrix is generated by the way introduced in Grafarend and Awange (2003) and is listed in Table 8. The computed results of seven parameters with PA and CPA are listed in Table 9.
It is seen from Tables 7 and 9 that the results of PA and CPA are identical if the bias caused by decimal rounding is ignored. Hence, the PA is comparable with CPA.
Conclusions
The numerical case study shows that the presented new algorithm and improved Procrustes algorithm are both stable and reliable for the cases that the control points are distributed in 3D, and 2D including approximate 2D space, and can recover at most two angles as well as all translation and scale parameters for the cases that the control points are distributed in 1D space. The classic Procrustes algorithm also can compute all translation and scale parameters for all the cases that the control points are distributed in 3D, 2D, and 1D spaces; however, it may yield a reflection rather than a rotation in the cases that the common points are distributed in 2D space. The numerical case study also shows the presented algorithm in this paper, and improved Procrustes algorithm is both stable and reliable when the rotation angles are big. And the presented algorithm in this paper is comparable with the classic Procrustes algorithm when the point-wise weight matrix is involved. | 3,756.2 | 2015-06-30T00:00:00.000 | [
"Computer Science"
] |
Control of the Nanopore Architecture of Anodic Alumina via Stepwise Anodization with Voltage Modulation and Pore Widening
Control of the morphology and hierarchy of the nanopore structures of anodic alumina is investigated by employing stepwise anodizing processes, alternating the two different anodizing modes, including mild anodization (MA) and hard anodization (HA), which are further mediated by a pore-widening (PW) step in between. For the experiment, the MA and HA are applied at the anodizing voltages of 40 and 100 V, respectively, in 0.3 M oxalic acid, at 1 °C, for fixed durations (30 min for MA and 0.5 min for HA), while the intermediate PW is applied in 0.1 M phosphoric acid at 30 °C for different durations. In particular, to examine the effects of the anodizing sequence and the PW time on the morphology and hierarchy of the nanopore structures formed, the stepwise anodization is conducted in two different ways: one with no PW step, such as MA→HA and HA→MA, and the other with the timed PW in between, such as MA→PW→MA, MA→PW→HA, HA→PW→HA, and HA→PW→MA. The results show that both the sequence of the voltage-modulated anodizing modes and the application of the intermediate PW step led to unique three-dimensional morphology and hierarchy of the nanopore structures of the anodic alumina beyond the conventional two-dimensional cylindrical pore geometry. It suggests that the stepwise anodizing process regulated by the sequence of the anodizing modes and the intermediate PW step can allow the design and fabrication of various types of nanopore structures, which can broaden the applications of the nanoporous anodic alumina with greater efficacy and versatility.
Introduction
Electrochemical anodization processes have been used for the surface treatment of metallic materials for over 70 years [1][2][3]. In particular, of late, anodic aluminum oxide (AAO) films consisting of hexagonally packed nanoscale pore arrays have attracted considerable interest in the field of nanotechnology [4][5][6]. The highly ordered AAO films are typically produced by a two-step anodization process [4], where the initial AAO film formed by the first anodizing step is removed as a sacrificial layer to form well-defined hexagonal pre-patterns on the aluminum substrate, which lead to the formation of uniform cylindrical nanopore structures of alumina in a well-ordered hexagonal array in the second anodizing step. Further, AAO films with the desired pore diameter (D p ), interpore distance (D int ), and oxide layer thickness ( Figure 1) can be obtained through the regulation of anodizing conditions, such as an electrolyte type and its acidity, anodizing voltage, temperature, and time. For example, the conventional mild anodization (MA) process performed in sulfuric acid at 25 V results in alumina nanopore structures with D p = 20 nm and D int = 60 nm [2,5], while the MA process in oxalic acid at 40 V produces alumina nanopore structures with D p = 40 nm and D int = 100 nm [7]. Further, the MA process in performed in sulfuric acid at 25 V results in alumina nanopore structures with Dp = 20 nm and Dint = 60 nm [2,5], while the MA process in oxalic acid at 40 V produces alumina na nopore structures with Dp = 40 nm and Dint = 100 nm [7]. Further, the MA process in phos phoric acid at 195 V forms the alumina nanopore structures with Dp = 400 nm and Dint 500 nm [2,4,5,[7][8][9][10]. In a given electrolyte, it is also known that the Dp and Dint of the alu mina nanopore structures generally increase linearly with an increase in the anodizatio voltage [11,12]. Such processability of the anodization techniques facilitates the control o the resultant nanostructures for various applications [1,5,12]. In particular, the pore diam eter (Dp) and the interpore distance determine the porosity of the nanoporous layer an directly influence their performance in applications such as solar cells [13], chemical sen sors [14], photonic nanodevices [15,16], metallic nanowires [17,18], and anticorrosion coat ings [19,20]. The self-ordered hexagonal nanopore array of the anodic alumina with a un form pore arrangement also serves as an effective template for nanofabrication, havin advantages to cover a large surface area in a parallel way, compared to the other seria nanolithography techniques such as nanoindentation [21] and electron beam lithograph [22]. However, there are several limitations associated with the conventional MA process such as slow oxidation rate (e.g., 2-6 μm/h), narrow processing conditions, and the limite range of Dp and Dint attainable with a given electrolyte. Thus, the fast fabrication of or dered nanoporous alumina was introduced, referred to as hard anodization (HA). Usin a higher current density, the HA offers substantial advantages over the conventional MA processes, allowing faster oxide growth (more than ten times) with improved ordering o nanopores [7]. Using a higher voltage than the conventional MA process with a give electrolyte, the HA also allows one to obtain greater Dp and Dint than those attainable i the MA process [7,11]. For example, HA performed in oxalic acid at 120-150 V results i the alumina nanopore structures with Dp = 49-59 nm and Dint = 220-300 nm [7]. Howeve the HA process also has drawbacks, which include the risk of burning [23], electrica breakdown because of the high voltage involved [7,24], and the production of AAO film with poor mechanical robustness [6,24]. Thus, the anodizing voltage must be precisel controlled to avoid a sudden rise in current density in the HA process [24].
Regardless of such differences in the two distinct anodizing modes, both MA and HA processes produce cylindrical nanopore structures with straight walls, which limits th applicability of the nanostructures. In several applications, it is desired to have or use na nopore structures with varying pore diameter or interpore distance along the vertical d rection of the anodic oxide layer. Several methods have been explored for the design an synthesis of hierarchically ordered or vertically modulated nanopore structures. For ex ample, vertically modulated nanopore structures were realized by controlling anodiza tion voltages and steps for the use of templates for the development of advance However, there are several limitations associated with the conventional MA process, such as slow oxidation rate (e.g., 2-6 µm/h), narrow processing conditions, and the limited range of D p and D int attainable with a given electrolyte. Thus, the fast fabrication of ordered nanoporous alumina was introduced, referred to as hard anodization (HA). Using a higher current density, the HA offers substantial advantages over the conventional MA processes, allowing faster oxide growth (more than ten times) with improved ordering of nanopores [7]. Using a higher voltage than the conventional MA process with a given electrolyte, the HA also allows one to obtain greater D p and D int than those attainable in the MA process [7,11]. For example, HA performed in oxalic acid at 120-150 V results in the alumina nanopore structures with D p = 49-59 nm and D int = 220-300 nm [7]. However, the HA process also has drawbacks, which include the risk of burning [23], electrical breakdown because of the high voltage involved [7,24], and the production of AAO films with poor mechanical robustness [6,24]. Thus, the anodizing voltage must be precisely controlled to avoid a sudden rise in current density in the HA process [24].
Regardless of such differences in the two distinct anodizing modes, both MA and HA processes produce cylindrical nanopore structures with straight walls, which limits the applicability of the nanostructures. In several applications, it is desired to have or use nanopore structures with varying pore diameter or interpore distance along the vertical direction of the anodic oxide layer. Several methods have been explored for the design and synthesis of hierarchically ordered or vertically modulated nanopore structures. For example, vertically modulated nanopore structures were realized by controlling anodization voltages and steps for the use of templates for the development of advanced molecular separation devices [2,25], as well as for the fabrication of nanotubes [26] or nanowires [27,28] with variable sizes and geometries along the longitudinal direction. Multi-connected pores were also realized by reducing the anodizing voltage in situ during the anodization process, where the number of branches formed would depend on the changes in the anodization Nanomaterials 2023, 13, 342 3 of 14 voltage [29,30]. Alternating the MA and HA processes using different electrolytes, the vertical modulation of the pore diameter with the fixed interpore distance was also demonstrated [31]. Using pulsed anodization, the MA and HA processes were altered in the same electrolyte to fabricate hierarchically ordered alumina nanopore structures [31,32]. Hierarchically ordered nanopore structures were also synthesized by combining anodization and chemical etching [33]. Alumina nanopore structures with varying diameters were also realized by two-step anodization, where an intermediate pore-widening (PW) step was executed between the two consecutive anodization processes [34]. Highly ordered conical nanopores were also created by repeatedly applying anodization and PW steps [35]. Highly ordered nanopore structures of the inverted cone shapes were produced by multistep anodization with the intermediate PW process [31,36,37].
Although the variation of the geometry and dimension of the nanopore structures in the vertical direction was demonstrated by such modulation of the anodization conditions and modes as well as the combination with the PW step, the effects of the sequence between MA and HA processes and the intermediate PW step in association with the sequence on the variation have not yet been systematically studied and understood. In this study, we systematically examine the effects of the sequence between the two different anodization modes (i.e., MA and HA) in oxalic acid with the modulation of the anodizing voltages (40 and 100 V, respectively) on the variation of the alumina nanopore structures, by alternating the sequences such as MA→HA and HA→MA. We also examine the effect of pre-patterns of the aluminum substrate on the results. Moreover, we systematically examine the effect of the intermediate PW step in association with the sequence, by comparing the four combinations including MA→PW→MA, MA→PW→HA, HA→PW→HA, and HA→PW→MA, where the PW time is further varied at the given sequence. We analyze the variation of the pore dimensions such as D p and D int along the stepwise anodizing processes, as well as the pore morphology such as branching and pillaring. On the basis of the results, we propose the types of the nanopore morphology and hierarchy attainable by the modulation of the sequence of the two different anodizing modes along with the intermediate PW step.
Electropolishing
High-purity (99.9995%) aluminum foil (Goodfellow, 1 cm × 3 cm × 0.05 cm) was used as a substrate for the fabrication of nanoporous anodic alumina layers. The foil was degreased in acetone and ethanol with ultrasonication for 10 min and then rinsed in deionized water. It was then electropolished in a mixture of perchloric acid and ethanol (HClO 4 /C 2 H 5 OH = 1:4, v/v) under an applied potential of 20 V for 3 min, at 15 • C, to reduce surface irregularities. A platinum electrode was used as a counter electrode at the distance of 5 cm from the aluminum foil for the electropolishing and the following anodizing processes.
Pre-Patterning
For the formation of the pre-patterns on the aluminum foil, the MA was applied at 40 V in 0.3 M oxalic acid, at 1 • C, for 10 h. The aluminum foil sample was subsequently submerged in an aqueous solution containing 1.8 wt% chromic acid and 6 wt% phosphoric acid at 65 • C for 10 h to remove the sacrificial AAO layer from the aluminum surface.
Stepwise Anodizing
For the following stepwise anodizing processes, the MA and HA processes were also applied in 0.3 M oxalic acid, at 1 • C. For the MA process, 40 V was applied for 30 min. For the HA process, 100 V was applied for 0.5 min. The HA process was applied for the relatively short interval since the growth rate of the oxide film is much faster than that in the MA process. In addition to the pre-patterned aluminum foils, electropolished aluminum foils with no pre-patterning were also anodized in the same way for comparison. See Table 1 for the cases studied. The intermediate PW process (Figure 2) was conducted by immersing the firstanodized aluminum foil in 0.1 M phosphoric acid, at 30 • C. To examine the effect of the PW time on the stepwise anodizing processes, three different PW durations were tested in this study, including 0 (i.e., no pore-widening), 10, and 30 min. See Table 2 for the cases studied.
MA process. In addition to the pre-patterned aluminum foils, electropolished aluminum foils with no pre-patterning were also anodized in the same way for comparison. See Table 1 for the cases studied. The intermediate PW process (Figure 2) was conducted by immersing the first-anodized aluminum foil in 0.1 M phosphoric acid, at 30 °C. To examine the effect of the PW time on the stepwise anodizing processes, three different PW durations were tested in this study, including 0 (i.e., no pore-widening), 10, and 30 min. See Table 2 for the cases studied.
Characterization of Nanopore Morphology
The morphologies of the nanopore structures of the anodized alumina films were analyzed using a field-emission scanning electron microscopy (FE-SEM) system (AURIGA ® small dual-bean FIB-SEM, Zeiss, Jena, Germany). The specimens were cut into small pieces, mounted on a stage with carbon tape before the SEM imaging. To examine the vertical morphology of the nanopore structures, the specimens were bent, at 90 • C, to produce parallel cracks and to take the cross-sectional views of the AAO films. The structural dimensions of the nanopore structures were estimated by using ImageJ from the images taken in the SEM. Figure 3 shows the top surfaces and cross-sectional morphologies of the nanoporous AAO layer created by the modulation of anodizing voltage applied on a smooth (only electropolishing and no pre-pattering) aluminum substrate. See also Table 1 for the summary of the structural dimensions and morphologies. The voltage applied during anodization is one of the primary factors determining the values of D p and D int . Their dependence on the anodizing voltage (U) is linear and can be expressed as follows [38,39]:
Effect of Voltage Modulation on Electropolished Aluminum Substrate
where λ P and λ int are proportionality constants, which depend on anodizing parameters such as the type of electrolyte, the acidity of electrolyte, and anodizing temperature. The anodizing at a higher voltage (e.g., HA) typically results in greater values of D p and D int than that at a lower voltage (e.g., MA) [39]. The greater value of D int indicates the smaller number of pores at a given surface area (i.e., pore density). The porosity (φ) for the hexagonally ordered array of pores can be defined using the values of D p and D int , as follows [39]: It indicates that the porosity should not depend on the anodizing voltage but rather be constant. However, the result ( Figure 3) shows that the stepwise modulation in the anodizing voltage and their sequence made a difference in the characteristics of the pores, including D p , D int (i.e., pore density), and φ along the thickness direction of the AAO layer.
In detail, Figure 3a,b shows the results after the sequence of MA→HA. The upper layer has D p = 24 ± 4.5 nm and D int = 81 ± 10.3 nm, resulting φ = 0.087 ± 0.032. The upper layer of the AAO film grew first during the first MA step so that the dimensions and porosity mainly followed the characteristics of MA. However, Figure 3b shows that the nanopores in the lower layer, which was grown in the following HA step, have greater dimensions of D p = 40 ± 6.9 nm, D int = 139 ± 13.8 nm, corresponding to the characteristics of HA. Meanwhile, the porosity does not significantly change in the following HA step, having φ = 0.077 ± 0.025. Due to the characteristics of the larger dimensions (especially D int ) in HA than MA, around half of the nanopores formed in the upper layer during the first MA step do not grow any longer when the MA step switches to the HA step. The ceasing of the growth of nanopores from the upper layer results in the "Y"-shaped merging of the pore walls in the lower layer. It indicates that in the sequence of MA→HA, the upper and lower layers retain the main characteristics of each anodizing mode with the "Y"-shaped merging of the pore walls in between. In this case, the number of pore (i.e., pore density) decreases in the lower layer compared to that of the upper layer due to the increase in D int in the HA step from that in following the MA. than that at a lower voltage (e.g., MA) [39]. The greater value of Dint indicates the smaller number of pores at a given surface area (i.e., pore density). The porosity (ϕ) for the hexagonally ordered array of pores can be defined using the values of Dp and Dint, as follows [39]: = ( √3 p 2 )/(6 int 2 ) = ( √3 p 2 )/(6 int 2 ) It indicates that the porosity should not depend on the anodizing voltage but rather be constant. However, the result ( Figure 3) shows that the stepwise modulation in the anodizing voltage and their sequence made a difference in the characteristics of the pores, including Dp, Dint (i.e., pore density), and ϕ along the thickness direction of the AAO layer.
In detail, Figure 3a,b shows the results after the sequence of MA→HA. The upper layer has Dp = 24 ± 4.5 nm and Dint = 81 ± 10.3 nm, resulting ϕ = 0.087 ± 0.032. The upper layer of the AAO film grew first during the first MA step so that the dimensions and porosity mainly followed the characteristics of MA. However, Figure 3b shows that the nanopores in the lower layer, which was grown in the following HA step, have greater dimensions of Dp = 40 ± 6.9 nm, Dint = 139 ± 13.8 nm, corresponding to the characteristics of HA. Meanwhile, the porosity does not significantly change in the following HA step, having ϕ = 0.077 ± 0.025. Due to the characteristics of the larger dimensions (especially Dint) in HA than MA, around half of the nanopores formed in the upper layer during the first MA step do not grow any longer when the MA step switches to the HA step. The ceasing of In contrast, Figure 3c,d shows the results after the sequence of HA→MA. The upper layer has D p = 50 ± 14.7 nm, D int = 152 ± 10.1 nm, and φ = 0.105 ± 0.049. In this case, the upper layer of the AAO film grew first during the first HA step so that the dimensions and porosity shown on the upper layer mainly followed the characteristics of HA, being not much different from the characteristics shown in the lower layer in Figure 3b. Meanwhile, Figure 3d shows that the nanopores in the lower layer, which was grown in the following MA step, do not follow the characteristics of MA shown in the upper layer in Figure 3b but have the greater dimensions, such as D p = 38 ± 6.2 nm and D int = 154 ± 9.3 nm and the lower porosity such as φ = 0.058 ± 0.016. While the difference in pore diameter (D p ) resulting from the MA step in the two opposite sequences is significant (i.e., 38 ± 6.2 nm in HA→MA vs. 24 ± 4.5 nm in MA→HA), the difference in interpore distance (D int ) is more dramatic (i.e., 154 ± 9.3 nm in HA→MA vs. 81 ± 10.3 nm in MA→HA). Although MA was applied in the second step, the interpore distance (D int ) did not follow the characteristic of MA but followed that of HA, which was the first step. No new pores were grown from the bottom, although MA should typically result in the pore array with a smaller interpore distance (D int ) than HA. Instead, pores continued to grow following the pores formed in the previous HA step. It indicates that in the sequence of HA→MA, the first HA step should determine the interpore distance (D int ), and it is not significantly affected by the following MA step, resulting in a consistent interpore distance (D int ) throughout the AAO layer. In this case, only the pore diameter (D p ) and hence porosity (φ) are affected by the subsequent MA step, creating a reduction in the pore diameter and porosity at the transition from HA to MA. The anodization reaction occurs only at the barrier layer at the pore bottoms because the barrier layer is the shortest migration path for ions such as Al 3+ and O 2between aluminum and the electrolyte. In addition, since there is no electric field in the side pore wall, the ions do not move through it. Therefore, the initiation of new smaller pores in the already formed AAO film is not feasible on the barrier layer. Figure 4 shows the top surfaces and cross-sectional morphologies of the nanoporous AAO layer created by the modulation of anodizing voltage applied on a pre-patterned (via MA) aluminum substrate. See also Table 1 for the summary of the structural dimensions. Compared to the nanopore structures formed on a smooth substrate (Figure 3), the nanopore structures formed on the pre-patterned substrate are generally more ordered with more uniform dimensions. In the case of MA→HA (Figure 4a), the upper layer formed via the first MA step has Dp = 29 ± 1.7 nm and Dint = 83 ± 7.6 nm, resulting ϕ = 0.115 ± 0.020. The nanopores formed in the lower layer (Figure 4b) via the following HA step show a significant increase in both Dp and Dint, having Dp = 41 ± 7.8 nm and Dint = 150 ± 11.5 nm. Compared to those formed in the case of a smooth substrate (Figure 3a), the values of Dp and Dint for both upper and lower layers are not significantly different, resulting in the similar "Y"-shaped merging of the pore walls in the lower layer. However, their standard deviations decrease, which suggests more uniform dimensions. It is attributed to the pre-patterns created on the initial surface via MA, which should help to maintain the initially ordered arrangement of the pre-patterns during the following MA step, which further helps to keep the uniform pore arrangement in the following HA step. During the initial stage of anodizing, the pore growth commences when a potential is locally concentrated on the aluminum surface, at which point a hemispherical barrier layer is formed as illustrated in Figure 1. The initial morphology of the aluminum substrate affects the local concentration of the potential significantly. In particular, the applied potential is readily concentrated on the indented layers on the pre-patterned substrate. This is not the case for the flat smooth surface, where the distribution of the pores is relatively random and irregular, although their average dimensions are determined by the applied voltage. By contrast, the pre-patterns result in an ordered nanopore pattern during the initial anodizing stage, helping the potential become locally concentrated over the patterns. The self-ordering of the nanopores in anodizing can also be explained in terms of the mechanical stress during the expansion of aluminum in the oxidation, which results in repulsive forces between neighboring pores In the case of MA→HA (Figure 4a), the upper layer formed via the first MA step has D p = 29 ± 1.7 nm and D int = 83 ± 7.6 nm, resulting φ = 0.115 ± 0.020. The nanopores formed in the lower layer (Figure 4b) via the following HA step show a significant increase in both D p and D int , having D p = 41 ± 7.8 nm and D int = 150 ± 11.5 nm. Compared to those formed in the case of a smooth substrate (Figure 3a), the values of D p and D int for both upper and lower layers are not significantly different, resulting in the similar "Y"-shaped merging of the pore walls in the lower layer. However, their standard deviations decrease, which suggests more uniform dimensions. It is attributed to the pre-patterns created on the initial surface via MA, which should help to maintain the initially ordered arrangement of the pre-patterns during the following MA step, which further helps to keep the uniform pore arrangement in the following HA step. During the initial stage of anodizing, the pore growth commences when a potential is locally concentrated on the aluminum surface, at which point a hemispherical barrier layer is formed as illustrated in Figure 1. The initial morphology of the aluminum substrate affects the local concentration of the potential significantly. In particular, the applied potential is readily concentrated on the indented layers on the pre-patterned substrate. This is not the case for the flat smooth surface, where the distribution of the pores is relatively random and irregular, although their average dimensions are determined by the applied voltage. By contrast, the pre-patterns result in an ordered nanopore pattern during the initial anodizing stage, helping the potential become locally concentrated over the patterns. The self-ordering of the nanopores in anodizing can also be explained in terms of the mechanical stress during the expansion of aluminum in the oxidation, which results in repulsive forces between neighboring pores [40,41]. Therefore, the formation of the ordered nanopore pattern can be attributed to a balance between the repulsive forces between neighboring pores and the inhibition of irregular cell growth by the neighboring cells.
Effect of Voltage Modulation on Pre-Patterned Aluminum Substrate
In the case of HA→MA (Figure 4c), the upper layer formed via the first HA step has D p = 50 ± 3.3 nm and D int = 150 ± 4.2 nm, resulting φ = 0.103 ± 0.012. Whereas their average values are not much different from those obtained in the case of HA→MA applied on a smooth substrate (Figure 3c), their standard deviations are significantly lower. It suggests that the pore dimensions and morphology resulting from the aggressive HA process (i.e., using much greater anodizing voltage than the MA process) are not much affected by the pre-pattern prepared by the MA process. However, the significant decrease in the standard deviations compared to the case on a smooth substrate suggests that the pre-patterns should still make the pore patterns more uniform and ordered, even when the aggressive HA is applied on the pre-patterns formed by the MA process. Meanwhile, the nanopores grown in the lower layer in the subsequent MA step (Figure 4d) have similar values to those formed on the smooth surface (Figure 3d), showing D p = 41 ± 3.3 nm and D int = 150 ± 8.1 nm, resulting φ = 0.069 ± 0.011. Table 2. The PW is typically applied to increase a pore size, dissolving the oxide. Since the PW step was applied only after the first anodizing step (i.e., before the second anodizing was applied), it increased the pore diameter (D p ) of the nanopore structures formed in the first anodizing step, regardless of the anodizing modes and their sequences. Meanwhile, compared to the stepwise anodizing processes with no PW step in between (Figure 4), the intermediate PW step did not make any significant change in the interpore distance (D int ) of the nanopore structures subsequently formed in the second anodizing step. In other words, similar to the stepwise anodizing processes with no PW step in between, D int increases in MA→PW→HA but remains unchanged in HA→PW→MA. However, it should be noted that D p of the nanopore structures formed in the second anodizing step in the sequence of HA→PW→MA is significantly affected by the previous PW step, being closer to the D p of the nanopore structures defined by the PW step right after the first HA step with the increase in the PW time.
Effects of PW Time on the Stepwise Anodizing Process
Specifically, in the sequence of MA→PW→MA, the pore diameter D p of the upper layer is 42 ± 3.8 nm, while that of the lower layer is 20 ± 2.3 nm when the PW time was 10 min (Figure 5a). It allows the formation of funnel-shaped nanopores (Figure 7a). PW for 30 min thinned down the oxide pore walls more. In the following MA for 30 min, while the new nanoporous layer grew from the substrate bottom, the initial pore structures in the upper layer were transformed into pillar structures and the pillar structures were eventually aggregated at the end (Figure 6a), allowing the formation of the pillar-on-pore hybrid nanostructures [2] (Figure 7b). min (Figure 5a). It allows the formation of funnel-shaped nanopores (Figure 7a). PW for 30 min thinned down the oxide pore walls more. In the following MA for 30 min, while the new nanoporous layer grew from the substrate bottom, the initial pore structures in the upper layer were transformed into pillar structures and the pillar structures were eventually aggregated at the end (Figure 6a), allowing the formation of the pillar-on-pore hybrid nanostructures [2] (Figure 7b). In the sequence of MA→PW→HA, the PW increased the Dp in the upper layer from 29 ± 1.7 to 49 ± 2.5 and 60 ± 5.2 nm when the PW time was 10 and 30 min, respectively (Figures 5b and 6b). Compared to the sequence of MA→PW→MA, where the pores are eventually transformed into pillars with the PW for 30 min, the increase in Dp was less in this sequence without showing the transformation of the pores into pillars even with the PW for 30 min. During anodization, the oxide layer already formed undergoes chemical dissolution in the electrolyte, although the degree of dissolution is not extensive at low temperatures. It indicates that the oxide pore walls are more significantly dissolved during MA (40 V for 30 min) than HA (100 V for 0.5 min). Since the anodizing voltage is mainly responsible for the formation of nanopores, the more excessive dissolution of the oxide wall during MA than HA is attributed to the longer anodizing time (i.e., 30 vs. 0.5 min). Meanwhile, Dp and Dint of the nanopore structures formed in the lower layer are In the sequence of MA→PW→HA, the PW increased the D p in the upper layer from 29 ± 1.7 to 49 ± 2.5 and 60 ± 5.2 nm when the PW time was 10 and 30 min, respectively (Figures 5b and 6b). Compared to the sequence of MA→PW→MA, where the pores are eventually transformed into pillars with the PW for 30 min, the increase in D p was less in this sequence without showing the transformation of the pores into pillars even with the PW for 30 min. During anodization, the oxide layer already formed undergoes chemical dissolution in the electrolyte, although the degree of dissolution is not extensive at low temperatures. It indicates that the oxide pore walls are more significantly dissolved during MA (40 V for 30 min) than HA (100 V for 0.5 min). Since the anodizing voltage is mainly responsible for the formation of nanopores, the more excessive dissolution of the oxide wall during MA than HA is attributed to the longer anodizing time (i.e., 30 vs. 0.5 min). Meanwhile, D p and D int of the nanopore structures formed in the lower layer are similar to those formed without the PW step (i.e., in the sequence of MA→HA with PW = 0), regardless of the PW time, allowing the transformation of the initial bottle-shaped nanopores with Y-shaped merging with no intermediate PW step (Figure 7c) into the funnelshaped nanopores with M-shaped merging with the regulated PW time (Figure 7d). It indicates that the PW step with short time does not affect the pore dimensions of the lower layer formed in the following anodizing step if HA follows MA with the intermediate PW step. Thus, regardless of the intermediate PW step, the characteristics of HA are retained in the second anodizing step in stepwise anodizing.
In the sequence of HA→PW→HA, D p in the upper layer increased from 61 ± 7.3 to 65 ± 5.9 nm with the PW from 10 to 30 min (Figures 5c and 6c), while the characteristics of HA in the lower layer is retained (D p = 42 ± 7.4 nm and 42 ± 5.4 nm, respectively). Meanwhile, D int values in both layers are similar and unaffected by the PW step. It allows the formation of the funnel-shaped nanopores (Figure 7a) with the greater D int than the sequence of MA→PW→MA allows.
In the sequence of HA→PW→MA, the PW step shows significant effects on not only D p in the upper layer but also D p in the lower layer. The D p values in the upper layers have 61 ± 3.8 and 66 ± 9.8 nm for 10 and 30 min in PW time, respectively (Figures 5d and 6d), which are around the same as those shown in the case of HA→PW→HA (Figures 5c and 6c). However, the lower layer formed in MA following the PW step does show the characteristics of neither MA nor HA. In the sequence of HA→MA with no intermediate PW step (Figure 4d), D p in the lower layer still shows the characteristics of MA while D int follows that of HA. However, in the sequence of HA→PW→MA, D p in the lower layer show the increase with the increase in the PW time, which is more pronounced than that in the upper layer. It suggests that D p formed in the sequential MA step should be significantly affected by the D p pre-determined by the PW step. A similar trend is also shown in MA→PW→MA. It suggests that the PW process significantly affects the value of D p in the following MA step, regardless of the anodizing mode priorly applied to the PW step. Meanwhile, such effects are not shown in the cases of MA→PW→HA and HA→PW→HA. It suggests that the PW process does not significantly affect the value of D p in the next sequence if a HA follows the PW step. Due to the more significant increase in D p in the lower layer than that in the upper layer with the increase in the PW time in the sequence of HA→PW→MA, the morphology of the funnel-shaped pore walls (Figure 7a) becomes less evident with the increase in the PW time. It suggests that the extent of the funnel shape can be tuned by the modulation of the PW time if a MA step follows the PW step.
As Table 3 summarizes, the results show that stepwise anodizing with the regulation of the anodizing voltage and the intermediate pore widening step can provide the design of various types of nanopore structures for aluminum substrates. For example, the funnel-shaped nanopore structures consisting of larger openings in the upper layer than the lower layer (Figure 7a) can be achieved by the various ways, i.e., by reducing the anodizing voltage in the stepwise anodizing with no intermediate pore widening step (e.g., HA→MA), including the intermediate pore widening step with no change in the anodizing voltage (e.g., MA→PW→MA or HA→PW→HA), or decreasing the anodizing voltage with an intermediate pore widening step (e.g., HA→PW→MA). A funnel-shaped nanopore structures with the M-shaped merging of pore walls (Figure 7d) is also attainable by increasing the anodizing voltage with an intermediate pore widening step (e.g., MA→PW→HA). The scale of the pore arrangement (i.e., D int ) can be regulated by the alternation of the anodizing mode. The extent of the funnel shape can also be tuned by the regulation of the PW time. On the other hand, the opposite shape, i.e., the bottle-shaped nanopore structures (Figure 7c), is also possible by increasing the anodizing voltage in the stepwise anodizing with no aid of the intermediate pore widening step (e.g., MA→HA). Pillar-on-pore hybrid type of nanopore structures (Figure 7b) are also attainable by regulating the pore widening time and the subsequent etching of the oxide in the following anodizing step (e.g., MA→PW→MA with elongated PW time).
Conclusions
Our study demonstrates that various types of 3D hierarchically structured porous AAO films can be designed and fabricated through the combination of stepwise anodization involving voltage modulation and PW processes. Through the combination of MA, PW, and HA sequence, the pore shape could be controlled to be pillared, bottled, and funneled, in addition to the conventional cylindrical shape. It was found that through voltage change, the size of the pores and the distance between the pores could be adjusted. The variation of the pore shape can further be regulated by the PW process, while the uniformity of the pores is ensured. The proposed anodization schemes should lead to the fabrication of various multi-level and 3D hierarchically arranged pore structures. One of the primary advantages of hierarchical AAO films is that they are highly adaptable, since their geometric characteristics (i.e., D p , D int , porosity, and pore shape) can be readily tailored to meet the requirements (e.g., relationship between D p , D int , porosity, pore shape, and film thickness, among other factors) for various devices and applications, e.g., filters, sensors, and solar cells. Further, thus-fabricated AAO structures with distinct nanopores can be used as general templates for a variety of materials and also in applications such as superhydrophobic and liquid-infused surfaces with anticorrosive and antibiofouling property [20,[42][43][44][45]. The structure of the thus-produced nanoporous AAO can be tailored to obtain complex nanostructures, which may be accorded different functionalities to produce optical biosensors with high selectivity for targeted analysis [46]. This proposed stepwise anodization process, which combines the MA and HA as well the PW step, should find wide applicability in various industrial processes, since it is simple, efficient, and can be applied for the design of diverse AAO structures on metallic substrates. Its low cost and expandability make the anodization technique transferrable to the manufacturing and material-processing industries. | 8,859.2 | 2023-01-01T00:00:00.000 | [
"Materials Science"
] |
Thermodynamically Extended Symplectic Numerical Scheme with Half Space and Time Shift Applied for Rheological Waves in Solids
On the example of the Poynting–Thomson–Zener rheological model for solids, which exhibits both dissipation and wave propagation – with nonlinear dispersion relation –, we introduce and investigate a finite difference numerical scheme for continuum thermodynamical problems. The key element is the positioning of the discretized quantities with shifts by half space and time steps with respect to each other. The arrangement is chosen according to the spacetime properties of the quantities and of the equations governing them. Numerical stability, dissipative error and dispersive error are analysed in detail. With the best settings found, the scheme is capable of making precise and fast predictions.
Introduction
Numerical solution methods for dissipative problems are important and are a nontrivial topic. Already for reversible systems, the difference between a symplectic and nonsymplectic finite difference method is striking: the former can offer reliable prediction that stays near the exact solution even at extremely large time scales while the latter may provide a solution that drifts away from the exact solution steadily. For dissipative systems, the situation is harder. Methods that were born with reversibility in mind may apparently fail for a nonreversible problem. For example, a finite element software is able to provide, at the expense of large run-time, quantitatively and even qualitatively wrong outcome while a simple finite difference scheme solves the same problem fast and precisely [1].
Thermodynamics also modifies the way of thinking concerning numerical modelling. Even if quantities known from mechanics form a closed system of equations to solve numerically, monitoring temperature (or other thermodynamical quantities) for a nonreversible system can give insight on the processes and phenomena, for example, pointing out the presence of viscoelasticity/rheology, and displaying when plastic changes start [10]. In addition, temperature can also react, in the form of thermal expansion and heat conduction, even in situations where one is not prepared for this 'surprise' [11].
Furthermore, in a sense, thermodynamics is a stability theory. Therefore, how thermodynamics ensures asymptotic stability for systems may give new ideas on how stability and suppression of errors arXiv:1908.07975v1 [physics.class-ph] 21 Aug 2019 can be achieved for numerical methods. A conceptually closer relationship is desirable between these two areas.
Along these lines, here, we present a study where a new numerical scheme is suggested and applied for a continuum thermodynamical model. The scheme proves to be an extension of a symplectic method. In parallel, our finite difference scheme introduces a shifted arrangement of quantities by half space and time steps with respect to each other, according to the spacetime nature of the involved quantities and the nature of equations governing them, since balances, kinematic equations, and Onsagerian equations all have their own distinguished discretized realization. This also makes the scheme one order higher precise as the original symplectic scheme.
The continuum system we take as the subject of our investigation is important on its own -it is the Poynting-Thomson-Zener rheological model for solids. This model exhibits dissipation and wave propagation (actually: dispersive wave propagation) both, and is thus ideal for testing various aspects and difficulties. Meanwhile, its predictions are relevant for many solids, typically ones with complicated microor mesoscopic structure like rocks [12][13][14], plastics [10], asphalt etc. This non-Newtonian rheological model can explain why slow and fast measurements and processes give different results.
Solutions in the force equilibrial and space independent limit have proved successful in explaining experimental results [10]. Space dependent -but still force equilibrial -analytical solutions can model opening of a tunnel, gradual loading of thick-walled tubes and spherical tanks, and other problems [15]. The next level is to leave the force equilibrial approximation, partly in order to cover and extend the force equilibrial results but also to be utilized for evaluating measurements that include wave propagation as well. The present work is, in this sense, the next step in this direction.
Properties of the continuum model
The system we consider is a homogeneous solid with Poynting-Thomson-Zener rheology, in small-strain approximation 1 , in one space dimension (1D). Notably, the numerical scheme we introduce in the following section can be generalized to 2D and 3D with no difficulty 2 -the present 1D treatment is to keep the technical details at a minimum so we can focus on the key ideas.
The set of equations we discuss is, accordingly, ∂v ∂t where is mass density (constant in the small-strain approximation), (1) tells how the spatial derivative of stress σ determines the time derivative of the velocity field v (volumetric force density being omitted for simplicity), (2) is the kinematic relationship between the strain field ε 3 and v, and the rheological relationship (3) contains, in addition to Young's modulus E, two positive coefficientsÊ, τ.
1 Hence, there is no need to distinguish between Lagrangian and Eulerian variables, and between material manifold vectors/covectors/tensors/. . . and spatial spacetime ones. 2 The results of our ongoing research on 2D and 3D are to be communicated later. 3 In the present context, ε can be used as the thermodynamical state variable for elasticity, but not in general, see [17,18].
The Poynting-Thomson-Zener model is a subfamily within the Kluitenberg-Verhás model family, which family can be obtained via a nonequilibrium thermodynamical internal variable approach [16]. The Poynting-Thomson-Zener model looks particularly simple, after eliminating the internal variable, both in specific energy e total and in specific entropy s: On the other side, for processes much faster then the two time scales, keeping the highest time derivatives leads to that is, for stress and strain changes (e.g., for deviations from initial values), the system effectively behaves like a Hooke one, with 'dynamic' Young's modulus The corresponding effective wave equation possesses the wave speed For a more rigorous and closer investigation of these aspects, the dispersion relation can be derived. Namely, on the line −∞ < x < ∞, any (not too pathological) field can be given as a continuous linear combination of e ikx space dependences, where the 'wave number' k is any real parameter. If such a (Fourier) decomposition is done at, say, t = 0, then the subsequent time dependence of one such mode may be particularly simple: (15) with some appropriate ω -complex, in general -; the factor i in the first component is introduced in order to be in tune with later convenience. A space and time dependence e −iωt e ikx = e Im ωt e −i Re ωt e ikx = e −(− Im ω)t e ik(x− Re ω k t) expresses travelling with constant velocity Re ω k and exponential decrease (for dissipative systems like ours, Im ω < 0). In general, it depends on the number of fields and on the order of time derivatives how many ω's are possible. In our case, the relationship between compatible ω and k -the dispersion relationis straightforward to derive: In the limit |ω| → 0 (limit of slow processes), we find while in the opposite limit |ω| → ∞ (limit of fast processes), the result is Both results confirm the findings above [(11) and (14), respectively]. This is a point where we can see the importance of the Poynting-Thomson-Zener model. Namely, when measuring Young's modulus (or, in 3D, the two elasticity coefficients) of a solid, the speed of uniaxial loading, or the frequency of sound in an acoustic measurement, may influence the outcome and an adequate/sufficient interpretation may come in terms of a Poynting-Thomson-Zener model. Indeed, in rock mechanics, dynamic elastic moduli are long known to be larger than their static counterparts (a new and comprehensive study on this, [19], is in preparation), in accord with the thermodynamics-originated inequality in (14) (or its 3D version).
The numerical scheme
The classic attitude to finite difference schemes is that all quantities are registered at the same discrete positions and at the same discrete instants. An argument against this practice is that, when dividing a sample into finite pieces, some physical quantities have a meaning related to the bulk, the centre of a piece, while others have a physical role related to the boundaries of a unit. For example, (specific) extensive and density quantities would naturally live at a centre, while currents/fluxes are boundary related by their physical nature/role. When one has a full -at the general level, 4D -spacetime perspective 6 then it turns out that quantities may "wish" to be shifted with respect to each other by a half in time as well. This latter aspect is less straightforward to visualize but the structure of the equations -for example, the structure of balanceshelps us to reveal what intends to be shifted with respect to what. This is what we realize for the present system. Discrete space and time values are chosen as and discrete values of stress are prescribed to these spatial and temporal coordinates: Then, investigating (1), we decide to put velocity values half-shifted with respect to stress values both in space and time: 7 v j n at t j − ∆t 2 , x n + ∆x 2 , and discretize (1) as Next, studying (2) suggests analogously to have strain values half-shifted with respect to velocity values both in time and space. Therefore, strain is to reside at the same spacetime location as stress: 6 Traditional physical quantities are usually time-and spacelike components of four-vectors, four-covectors, four-cotensors etc. and (2) is discretized as Finally, for the Hooke model, (10) is discretized plainly as we can recognize the steps of the symplectic Euler method [20] (with the Hamiltonian corresponding to e kinetic + e elastic ). Now, a symplectic method is highly favourable because of its extremely good large-time behaviour, including preservation of energy/etc. conservation. While (27) coincides with the symplectic Euler method computationally, the present interpretation of the quantities is different, because of the space and time shifts. One advantageous consequence is that, due to the reflection symmetries (see Figure 1), our scheme makes second order precise predictions (understood in powers of ∆t and ∆x), while the symplectic Euler method makes only first order precise ones [20]. Indeed, setting and assuming that precisely, the error of the prediction for v after Taylor series expansion, cancellations, and the use of (1).
Analogously, with second order preciseness of prediction ε j+1 n can be proved. In case of the Poynting-Thomson-Zener model, we need to discretize (3). Here, both σ and its derivative, and both ε and its derivative, appear. Hence, shifting does not directly help us. This is what one can expect for dissipative, irreversible, relaxation type, equations in general. However, an interpolation-like solution is possible: where α = 1/2 is expected to provide second order precise prediction, and other seminal values are α = 1 (the explicite case, which is expected to be stiff) and α = 0 (the fully implicite case). For generic α, (32) looks implicit. However, actually, thermodynamics has brought in an ordinary differential equation type extension to the Hooke continuum, not a partial differential equation type one, and a linear one, in fact. Thus (32) can be rewritten in explicit form, assuming Second order preciseness of (33) for α = 1/2 is then straightforward to verify, in complete analogy to the two previous proofs.
Stability
One may specify a space step ∆x according to the given need, adjusted to the desirable spatial resolution. In parallel, the time step ∆t is reasonably chosen as considerably smaller than the involved time scales (e.g., τ andτ of our example system). Now, a finite difference scheme may prove to be unstable for the taken ∆x and ∆t, making numerical errors (which are generated unavoidably because of floating-point round-off) increase essentially exponentially and ruining usefulness of what we have done. Therefore, first, a stability analysis is recommended, to explore the region of good pairs of ∆x, ∆t for the given scheme and system.
We continue with this step for our scheme and system, performing a von Neumann investigation [21], where the idea is similar to the derivation of the dispersion relation. There, we study time evolution of continuum Fourier modes [see (15)], while here examine whether errors, expanded in modes with e ikx n space dependence, increase or not, during an iteration by one time step. For such linear situations as ours -when the iteration step means a multiplication by a matrix -, such a mode may simply get a growth factor ξ (that is k dependent but space independent), in other words, the iteration matrix (frequently called 'transfer matrix') has these modes as eigenvectors with the corresponding eigenvalues ξ. Then, |ξ| < 1 (for all k) ensures stability. Furthermore, |ξ| = 1 means stability if the algebraic multiplicity of ξ -its multiplicity as a root of the characteristic polynomial of the transfer matrix -equals its geometric multiplicity -the number of linearly independent eigenvectors (eigenmodes), i.e., the dimension of the eigensubspace ( [22], page 186, Theorem 4.13; [23], page 381, Proposition 2).
With boundary conditions specified, one can say more. 8 Boundary conditions may allow only certain combinations of e ikx n as eigenmodes of the transfer matrix. Consequently, this type of analysis is more involved and is, therefore, usually omitted. As a general rule-of-thumb, one can expect that |ξ| > 1 for some e ikx n indicates instability also for modes obeying the boundary conditions, while |ξ| ≤ 1 for all e ikx n suggests stability for all modes allowed by the boundary conditions 9 .
Hooke case
In the Hooke case, the 'plane wave modes' for the two bookkept quantities v, ε can, for later convenience, be written as the condition on k related to that k outside such a 'Brillouin zone' makes the description redundant.
Realizing the iteration steps (27) as matrix products leads, for the amplitudes introduced in (35), to For space dependences (35), 8 All systems require boundary or asymptotic conditions. We also specify some in the forthcoming section on applications. 9 Namely, the problem of differing multiplicities for |ξ| = 1 can be wiped out by the boundary conditions. in other words, to the eigenvalue problem Let us introduce the notation for the Courant number of our scheme for the Hooke system. Comparing the characteristic polynomial of T, with its form written via its roots, reveals, on one side, that, in order to have both |ξ + | ≤ 1 and |ξ + | ≤ 1, both magnitudes have to be 1 (since their product is 1), which, on the other side, also implies as both C and S are non-negative. If CS < 1 then the two roots, are complex, with unit modulus, and are the complex conjugate of one another. Especially simple -and principally distinguished, as we see in the next sections -is the case C = 1: then with the remarkable property that arg ξ ± are linearly depending on k -so to say, both branches of the discrete dispersion relation are linear.
In parallel, if CS = 1 then the two roots coincide, ξ ± = −1. The algebraic multiplicity 2 is accompanied with geometric multiplicity 1: only the multiples of are eigenvectors. If C = 1 then this affects only one mode, S = 1, k = π k , and if that mode is prohibited by the boundary conditions then the choice C = 1 ensures a stable scheme.
With C > 1, CS ≤ 1 would be violated by a whole interval for k [recall (37)], which may not be cured by boundary conditions so the best candidate (largest ∆t for a fixed ∆x, or the smallest possible ∆x for fixed ∆t) to have stability is C = 1.
Poynting-Thomson-Zener case
For the Poynting-Thomson-Zener system, the von Neumann stability analysis of our discretization studies the modes gives The characteristic polynomial is noŵ Three roots are considerably more difficult to directly analyse. One alternative is to use Jury's criteria [24] for whether the roots are within the unit circle of the complex plane, and another possibility is to apply the Möbius transformation ξ = η+1 η−1 on (52) and utilize the Routh-Hurwitz criteria whether the mapped roots are within the left half plane. The two approaches provide the same result. Nevertheless, one criterion provided by one of these two methods may not directly be one criterion of the other method. It is only the combined result (the intersection of the conditions) that agrees. Accordingly, it can be beneficial to perform both investigations because a simple condition provided by one of the routes may be labouring to recognize as consequence of the conditions directly offered by the other route.
Jury's criteria, for our case, are as follows. First,P(1) > 0 gives Second, (−1) 3P (−1) > 0 yields which, in light of (54), reduces to Third, the matrices a 3 a 2 0 a 3 ± 0 a 0 a 0 a 1 have to be positive innerwise 10 . The '+' branch leads to which is weaker than (56), because there the rhs is larger by (1 − α) + τ ∆t C 2 S 2 [and cf. (54)]. Meanwhile, the '−' branch induces conditionτ which we have already met in (9), as the thermodynamical requirement (6) at the continuum level, and which also induces, via (57), which is stronger than (54). This also allows to rearrange (56) and exploit it as Conditions (58)-(60) summarize the obtained stability requirements; the first referring to the constants of the continuum model only, the second relating α and ∆t of the scheme, and the third limiting ∆x (through C) in light of α and ∆t.
If, instead of Jury's criteria, one follows the Routh-Hurwitz path, on the Möbius transformed polynomialQ then, having b 3 > 0, roots lie in the left half plane if all corner subdeterminants of As expected, these conditions prove to be equivalent to the ones obtained via Jury's criteria -we omit the details to avoid redundant repetition.
Kelvin-Voigt model
Although the focus of the present paper is on the hyperbolic-like case corresponding to τ > 0, the above calculations are valid for τ = 0, the Kelvin-Voigt subfamily as well. As a brief analysis of this case, (58) is trivially satisfied withτ > 0. (59) gives the nontrivial condition α < 1/2. 11 Finally, (60) gives which looks like some mixture of a stability condition for a scheme for a parabolic problem like Fourier heat conduction and of a condition for a simple reversible wave propagation.
Beyond Kelvin-Voigt
When τ > 0 thenĈ [recall (14)] becomes important. The most interesting case is α = 1/2, where the scheme gives second order precise predictions: (59) holds trivially, and (60) can be rewritten asĈ With boundary conditions also present, we may extend this condition tô Considering the two other potentially interesting cases as well: If α = 1 then (59) induces ∆t < 2τ, which is not a harsh requirement since the time step must usually be much smaller then the time scales of the system in order to obtain a physically acceptable numerical solution. In parallel,Ĉ is limited from above by a number smaller than 1. On the other side, when α = 0 then (59) is automatically true again, and nowĈ is limited from above by a number larger than 1. Since we may need ∆t τ for a satisfactory solution, this O ∆t τ gain over 1 is not considerable.
Hooke case
It is worth looking back to the Hooke limit of (60): τ =τ = 0 (with whatever α) tells C < 1. One can see that the |ξ| < 1 stability requirement gives conservative results and does not tell us how far the obtained inequalities are from equalities.
Numerical results
The calculations communicated here are carried out with zero v, ε, σ as initial conditions, and with stress boundary conditions: on one end of the sample, a cosine shaped pulse is applied, while the other end is free (stress is zero). With τ b denoting the temporal width of the pulse, the excitation is, accordingly, For making our quantities dimensionless -suitable for numerical calculations -the following units are used: the length of the sample, c (so a Hookean wave arrives at the other end during unit time), σ b , and, for temperature, c σ . Henceforth, with respect to this time unit, 0.2 is used for τ b , 1.25 for τ, and 5 forτ, implyingĉ = 2c ≡ 2. Temperature is calculated according to the discretized form of (8), with the natural choice that temperature values reside at the same place as stress and strain but half-shifted in time (T j n at t j − ∆t 2 , x n ). When plotting, say, elastic energy of the whole sample at time t j , a simple E 2 ∑ n ε , are counted with weight 1 2 . Second, kinetic energy and thermal energy, both being based on quantities half-shifted in time, are calculated as a time average, their value at t j taken as the average of their value at t j − ∆t 2 and t j + ∆t 2 .
Hookean wave propagation
For the Hooke system, our scheme is symplectic, with very reliable long-time behaviour. This is well visible in Figure 3: the shape is nicely preserved, no numerical artefacts are visible in the spacetime picture, and the sum of elastic and kinetic energy is conserved.
Poynting-Thomson-Zener wave propagation
For the Poynting-Thomson-Zener system, we find that the principally optimal choice of α = 1/2 does outperform α = 0 (withĈ = 1). Figure 4 shows such a comparison: α = 1/2 produces a reliable signal shape quite independently of space resolution, while α = 0 needs more than N = 1000 space cells to reach the same reliability. Actually, α = 1/2 offers that realibility already at N = 50, and even N = 25 'does a decent job', as depicted in Figure 5.
With α = 1/2, the spacetime picture and total energy conservation are not less satisfactory, as visible in Figure 6. The physical explanation of the signal shape (Figures 4-5) is that the fastest modes propagate with speedĉ (recall Section 2), transporting the front of the signal, while slow modes travel with c <ĉ, gradually falling behind, and forming a little-by-little thickening tail.
In parallel, the spacetime picture shows that this tail effect is less relevant than the overall decrease of the signal, due to dissipation.
Finally, concerning the energy results, the remarkable fact is that all ingredients v, ε, σ, T are calculated via discretized time integration, therefore, total energy conservation is not built-in but is a test of the quality of the whole numerical approach.
Hooke case
The Hooke system might appear as a simple introductory task for numerics. This is actually far from true. Already the Hooke case displays both dissipation error and dispersion error if not treated with appropriate care [25]. While the greatest danger, instability, is about exponential exploding of error, dissipation error is 'the opposite': when the signal decreases in time, losing energy due to numerical artefact only. This type of error is related to |ξ| < 1 modes, which indicates that one should try to stay on the unit circle with ξ. On the other side, in addition to the modulus of ξ, its argument can also cause trouble: if arg ξ is not linear in k then dispersion error is induced, which is observable as unphysical waves generated numerically around signal fronts. These errors are present even in a symplectic scheme like ours, as illustrated in Figure 7. More insight is provided by Figure 8.
Poynting-Thomson-Zener case
In case of a dissipative system like the Poynting-Thomson-Zener one, it is hard to detect the dissipative error, i.e., to distinguish it from the physical dissipation. The dispersion error remains visible, as Figure 9 shows. Usually, one would need to set ∆t to be much smaller than τ,τ (and τ b ) to obtain a physically acceptable approximation. Rewriting the coefficients (53) as in the limit ∆t τ → 0, the characteristic polynomial reduces to with roots satisfying ξ 0 = |ξ + | = |ξ − | = 1, excluding thus dissipation error. Especially simple and distinguished is the caseĈ = 1, when the roots are providing dispersion relations linear in k and, hence, getting rid of dispersion error as well.
With slightly nonzero ∆t τ , these nice properties get detuned but only up to O ∆t τ , as shown in Figures 10-12 (prepared at dimensionless time step value 0.01; the detuning appears weaker for α = 1/2 than for α = 0).
Discussion
Choosing a good finite difference numerical scheme for a continuum thermodynamical problem is not easy. A good starting point can be a symplectic scheme for the reversible part, as done here, too. Another advantage is provided by a shifted arrangement of quantities by half space and time steps, suited to balances, to the kinematic equations, to the Onsagerian equations etc.
Even with all such preparations, instability is a key property to ensure. And when all these are settled, dissipative and dispersive errors can invalidate our calculation, which may not be recognized when the continuum system is dissipative and when it allows wave propagation as well.
In the future, the study provided here can be supplemented by comparison with analytical and finite element solutions.
Another logical continuation of this line of research is extension of the scheme to 2D and 3D spacethis is actually work in progress [25]. Figures 10-11, the roots are not exactly on the unit circle -here, ∆t dependence of |ξ 0 | and |ξ ± | is displayed, at a neutral value k∆x = π/4, forĈ = 1 and α = 1/2. Concerning the thermodynamical system to be investigated, the whole Kluitenberg-Verhás family -which the present Poynting-Thomson-Zener model is a subclass of -could be studied. The presence of second derivative of strain, and actually already the Kelvin-Voigt subfamily, brings in the aspect of parabolic characteristics so useful implications may be gained for other thermodynamical areas like non-Fourier heat conduction as well.
In conclusion, reliable numerical methods for thermodynamical systems, which avoid all the various pitfalls, are an important direction for future research. | 6,105.4 | 2019-08-21T00:00:00.000 | [
"Physics"
] |
Genes Influenced by the Non-Muscle Isoform of Myosin Light Chain Kinase Impact Human Cancer Prognosis
The multifunctional non-muscle isoform of myosin light chain kinase (nmMLCK) is critical to the rapid dynamic coordination of the cytoskeleton involved in cancer cell proliferation and migration. We identified 45 nmMLCK-influenced genes by bioinformatic filtering of genome–wide expression in wild type and nmMLCK knockout (KO) mice exposed to preclinical models of murine acute inflammatory lung injury, pathologies that are well established to include nmMLCK as an essential participant. To determine whether these nmMLCK-influenced genes were relevant to human cancers, the 45 mouse genes were matched to 38 distinct human orthologs (M38 signature) (GeneCards definition) and underwent Kaplan-Meier survival analysis in training and validation cohorts. These studies revealed that in training cohorts, the M38 signature successfully identified cancer patients with poor overall survival in breast cancer (P<0.001), colon cancer (P<0.001), glioma (P<0.001), and lung cancer (P<0.001). In validation cohorts, the M38 signature demonstrated significantly reduced overall survival for high-score patients of breast cancer (P = 0.002), colon cancer (P = 0.035), glioma (P = 0.023), and lung cancer (P = 0.023). The association between M38 risk score and overall survival was confirmed by univariate Cox proportional hazard analysis of overall survival in the both training and validation cohorts. This study, providing a novel prognostic cancer gene signature derived from a murine model of nmMLCK-associated lung inflammation, strongly supports nmMLCK-involved pathways in tumor growth and progression in human cancers and nmMLCK as an attractive candidate molecular target in both inflammatory and neoplastic processes.
Introduction
Cancer cell proliferation and migration require rapid dynamic regulation of the cytoskeleton, which is controlled by series of cytoskeleton regulatory proteins, in which myosin light chain kinase (MLCK) is a critical participant [1,2].In addition, endothelial cell paracellular extravasation and diapedesis by tumor cells is an essential step for malignant tumor metastasis and significantly influenced the activity of MLCK [3,4].Although still underestimated, MLCK started to be considered as a novel functional protein in cancer pathogenesis (initiation, proliferation, migration, and metastasis) [5,6,7].This is especially true with the more widely expressed non-muscle isoform (nmMLCK).Nonmuscle myosin light chain kinase or nmMLCK is centrally involved in driving rearrangement of the cytoskeleton, which regulates vascular endothelial barrier function, angiogenesis, endothelial cell apoptosis, and leukocytic diapedesis [8].In vivo studies implicated nmMLCK as an attractive target for ameliorating the adverse effects of dysregulated lung inflammation, including extravasation of inflammatory leukocytes [9,10], similar with the procedure of cancer cell metastasis to lung tissues [11].Deletion or silencing of nmMLCK produced greater protection against acute lung injury (ALI) and ventilator-induced lung injury (VILI) and significantly decreased alveolar and vascular permeability and lung inflammation [9].
Recently, we reported that endothelial inflammation is a key mediator of tumor growth and progression [12], supported by the fact that a molecular signature reflective of the endothelial inflammatory gene expression is predictive of clinical outcome in multiple types of human cancer [12].As nmMLCK plays a central role in regulation of endothelial cytoskeleton and endothelial inflammation, we would hypothesize that nmMLCK-related cellular signaling actively participate in the tumor progression and metastasis, although little is known regarding the effect of nmMLCK on the pathogenesis of tumor and its influence on the prognosis of human cancers.
In this present study, we would like to use nmMLCK-associated gene network (nmMLCK-deregulated gene sets) to establish a novel methodology for human cancer prognoses, by using a computational biology approach.
The purpose of this study is two-fold.The first was to identify the genes potentially regulated by nmMLCK.The second was to develop a prognostic cancer gene signature derived from the nmMLCK-associated genes.Using an experimental murine model of lung injury induced by mechanical ventilation with increased tidal volumes (the VILI model), we characterized the top differentially expressed genes between VILI-challenged wild-type (WT) mice and nmMLCK knockout (KO) mice.The mouse genes mediated by nmMLCK expression were identified.We matched the nmMLCK-mediated mouse genes to their human orthologs, which formed the basis of a multivariate molecular predictor of overall survival in several human cancers, including lung cancer, breast cancer, colon cancer, and glioma.This molecular signature predicted outcome independently of, but cooperatively with, standard clinical and pathological prognostic factors including patient age, lymph node involvement, tumor size, tumor grade, and so on.
Gene expression data
Microarray data of lung RNA from WT control, VILI-exposed WT, and VILI-exposed nmMLCK KO mice were obtained from NCBI GEO database (GSE14525) [9].We used this dataset to filter out the nmMLCK-mediated mouse genes.
The gene expression datasets representing human cancers were downloaded from publicly available repositories.These datasets were chosen based on the availability of clinical survival data and the large size of samples.For each tumor type, training and validation cohorts were constructed.The dataset for breast cancer (n = 295) was available from http://bioinformatics.nki.nl/data.php (Netherlands Cancer Institute, Computational Cancer Biology Data Repository) [13].The breast cancer patients were randomly separated into two parts (1/2 for training and 1/2 for validation).For colon cancer, we downloaded two datasets from a single study [14].One dataset was used as training cohort (n = 177; GSE17536) and the other one was used for validation (n = 55; GSE17537).For glioma, distinct datasets from two different studies were obtained for training (n = 77; GSE4271) [15] and validation (n = 50; http://www.broadinstitute.org/cgi-bin/ca?ncer/datasets.cgi)[16].Lastly, we obtained three datasets (n = 359) for lung cancer which were available from a single study [17].Two datasets were combined as training cohort (n = 161) and the other one was used as validation cohort (n = 178).The CEL files for the study are available at https://caarraydb.nci.nih.gov/caarray/publicExperimentDetailAction.do?expId = 101594523614 1280.
Statistical analysis
SAM (Significance Analysis of Microarrays) [18], implemented in the samr library of the R Statistical Package [19], was used to compare log 2 -transformed gene expression levels between WT control, VILI-exposed WT, and VILI-exposed nmMLCK KO mice.False discovery rate (FDR) was controlled using the q-value method [20].Transcripts with a fold-change greater than 2 and FDR less than 10% were deemed differentially expressed.We searched for any enriched Kyoto Encyclopedia of Genes and Genomes (KEGG) [21] physiological pathways among the differential genes relative to the final analysis set using the NIH/ DAVID [22,23].Hierarchical clustering via complete linkage rule with Euclidean distance metric was applied to visualize gene expression differences, using gplots library of R Statistical Package [19].
For each cancer training/validation dataset, we normalized the gene expression level into the scale of [21,1] by POE (probability of expression) algorithm [24,25] implemented in the metaArray library of the R Statistical Package [19].Based on the gene expression and clinical outcome data from the training dataset, we can assign a Wald statistic generated by univariate Cox proportional-hazard regression to each gene as a weight.A risk score was calculated for each patient using a linear combination of weighted gene expression as below: Here, s is the risk score of the patient; n is the number of differentially expressed genes; w i denotes the weight of gene i; e i denotes the expression level of gene i; and m i and t i are the mean and standard deviation of the gene expression values for gene i across all samples, respectively.Patients were then divided into high-score and low-score groups with the median of the risk score as the threshold value.A high score indicated a poor outcome.The weight of each gene was fixed, based on the training groups, and then tested in the validation groups [12].Overall survival was analyzed by the Kaplan-Meier method.Differences in survival were tested for statistical significance by the log-rank test.P-values of less than 0.05 were considered to indicate statistical significance.The survival library of the R Statistical Package [19] was used to conduct survival analysis on the risk score.
nmMLCK-mediated mouse genes
At the specified significance level (fold-change .2 and FDR,10%), 365 genes were found be differentially expressed between VILI-exposed WT and nmMLCK KO mice, among which 117 genes were up-regulated while 248 genes were downregulated in nmMLCK KO mice (Table S1).Several pathways were significantly enriched among these differentially expressed genes (P,0.05), such as vascular smooth muscle contraction, chemokine signaling pathway, calcium signaling pathway, ErbB signaling pathway, focal adhesion and so on (Figure 1A).
To filter out the top genes potentially associated with nmMLCK, we also compared the gene expression between WT control and VILI-exposed WT mice.981 genes were found to be differentially expressed (fold-change .2 and FDR,10%) between these two groups (Table S2).Among these genes, we retained the genes with opposite direction of differential expression comparing Table S1 and S2.In other words, only the genes with attenuated VILI-mediated gene expression in nmMLCK KO mice were considered here.This step yielded 53 mouse genes.Lastly, we excluded the genes differentially expressed between WT control and VILI-exposed nmMLCK KO mice.In total, we retained 45 mouse genes for further study.Pathway analysis demonstrated a significant linkage of this gene set to ErbB signaling pathway, Glioma, Circadian rhythm (Figure 1B), which suggests that nmMLCK signaling contributes to the development and malignancy of tumors.Expression heatmap indicates that the expression profile of the 45 mouse genes were recovered at approximately normal levels of expression in nmMLCK KO mice exposed to VILI (Figure 1C).We deemed these 45 mouse genes as nmMLCK-mediated gene set (Table 1).
Prognostic molecular signature
nmMLCK is potentially involved in the pathogenesis of cancers [3,4,26].To determine whether nmMLCK-mediated genes derived from nmMLCK KO mouse model were relevant to human cancers, we matched the 45 nmMLCK-mediated mouse genes to 38 distinct human orthologs (M38 signature) according to the definition of GeneCards [27,28] (Table 2).We hypothesized that the M38 signature would be predictive of tumor outcome in cancer patients.
We constructed a risk scoring system that combined gene expression of M38 with risk for death in the training dataset.High-score patients were defined as those having a risk score greater than or equal to the group median score.In independent validation cohorts, we tested the ability of the risk score to stratify patients into prognostic groups.We performed Kaplan-Meier survival analysis comparing the high-score and low-score groups and determined statistical significance by log-rank tests.As expected, the M38 signature was able to identify patients with poor overall survival in breast cancer (P,0.001),colon cancer (P,0.001),glioma (P,0.001), and lung cancer (P,0.001) in the training cohorts (Figure S1).In the validation cohorts, Kaplan-Meier survival analysis comparing patient groups demonstrated a significantly reduced overall survival for high-score patients of breast cancer (P = 0.002), colon cancer (P = 0.035), glioma (P = 0.023), and lung cancer (P = 0.023) (Figure 2).The association between M38 risk score and overall survival was also confirmed by univariate Cox proportional hazard analysis of overall survival in both training and validation cohorts (Table 3).In the validation cohorts, high-score patients had an increased risk for death of 3.10-fold in breast cancer, 2.96-fold in colon cancer, 2.23-fold in glioma, and 1.60-fold in lung cancer.
Independence of M38 from other clinicopathologic factors
We investigated the performance of the M38 signature in comparison with clinicopathologic variables associated with prognosis in human cancers.A multivariate Cox analysis of overall survival indicated that M38 status remained a significant covariate in relation to the standard clinicopathologic factors in the four types of human cancers, such as patient age, lymph node status, tumor size, tumor grade, and so on (Table 4).We next stratified the patients according to the factors significant on multivariate analysis.
For breast cancer, patients were stratified by age, tumor grade, and estrogen receptor (ER) status, respectively.For patients with age ,40 and $40, the high-score ones had a significantly increased risk for death of 6.36-fold (P,0.001) and 2.80-fold (P = 0.001), respectively.For patients with tumor grade ,2 and $2, the high-score patients had an increased risk for death of 2.63fold (P = 0.410) and 2.65-fold (P,0.001),respectively.For patients with negative and positive ER status, the high-score patients had a significantly increased risk for death of 2.25-fold (P = 0.025) and 4.00-fold (P,0.001),respectively.
For colon cancer, patients were grouped by age and clinical stage, respectively.For patients with age ,60 and $60, the high- score ones had a significantly increased risk for death of 2.29-fold (P = 0.025) and 2.88-fold (P,0.001),respectively.For patients with stage ,3and $3, the high-score ones had a significantly increased risk for death of 3.50-fold (P = 0.015) and 1.71-fold (P = 0.024), respectively.
Patients with glioma were grouped by age.For patients with age ,45 and $45, the high-score ones had a significantly increased risk for death of 3.46-fold (P = 0.004) and 2.00-fold (P = 0.045), respectively.
Lung cancer patients were stratified by age, lymph node status, and tumor size, respectively.For patients with age ,65 and $65, the high-score ones had a significantly increased risk for death of 2.35-fold (P,0.001) and 1.97-fold (P,0.001),respectively.For patients with and without lymph node involvement, the high-score patients had a significantly increased risk for death of 1.62-fold (P = 0.012) and 1.73-fold (P = 0.014), respectively.For patients with tumor size ,T3 and $T3, the high-score patients had an increased risk for death of 2.20-fold (P,0.001) and 1.63-fold (P = 0.180), respectively.
Kaplan-Meier survival analysis also demonstrated a significantly reduced overall survival for high-score patients in each subset grouped by each clinicopathologic factor (Figure S2-S5).Taken together, these results suggest that the expression of M38 signature is associated with clinical outcomes and is an independent prognostic factor.
Discussion
This current study confirms an internal link between nmMLCK-mediate signaling and clinical cancer mortality with novel evidence: first, we defined a group of nmMLCK-driven genes with a murine model of lung inflammatory injury under which the effects of nmMLCK are amplified.Second.This nmMLCK-centralized molecular signature reflective of lung inflammatory gene expression is highly predictive of poor clinical outcome in four types of human cancer.
MLCK (gene code: MYLK) is a Ca 2+ /calmodulin-dependent kinase that phosphorylates myosin light chains (MLCs) to promote myosin interaction with cytoskeletal actin filaments [29].It plays a key role in cytoskeleton rearrangement and contractile activities of both non-muscle tissue [30] and smooth muscle tissues [31].The non-muscle isoform, nmMLCK, has been demonstrated to be a key participant in the inflammatory response based its ability to regulate vascular endothelium integrity and leukocyte influx from circulation into the lung broncoalveolar space [9].Similar to pathogenesis in endothelial cells in ALI, cancer cell proliferation and migration require rapid dynamic regulation of the cytoskeleton, which is controlled by a group of cytoskeleton regulatory proteins, in which nmMLCK serves as a critical and central participant [1,2].In addition, trans-cellular extravasation, the essential step for malignant tumor metastasis, is well controlled by the activity of MLCK [3,4].Although still underestimated, MLCK started to be considered as a novel functional protein in cancer pathogenesis (initiation, proliferation, migration, and metastasis) [5,6,7].This is especially true with the more widely expressed nonmuscle isoform (nmMLCK).
Although little is known regarding the mechanisms of nmMLCK in the pathogenesis of tumor and its influence on the prognosis of human cancers, inflammatory response that regulated by nmMLCK in lungs is playing an active role in tumorogenesis and many successful therapies targeting chronic inflammation directly alter endothelial gene expression [32].Murine VILI model amplifies the nmMLCK-mediated gene expression and serves as a satisfactory platform to dissect nmMLCK molecular signature in lung inflammatory injury.
Compared to a previous study [12], we used a non-conventional inflammation marker nmMLCK (compared to TNFa), which is more related to endothelial inflammation, as nmMLCK is selective expressed in non-muscle tissues such as endothelium [29].Combined together, these two studies further verify the key role of ''endothelial-specific'' inflammation in cancer progression.Since nmMLCK is also expressed in other tissue types including epithelium and inflammatory leukocytes (same as TNFa), amplified molecular signature of nmMLCK by lung inflammation (M38 signature) might also involve other type of tissues in lungs, i.e., epithelium and infiltrated neutrophils.The potential contribution of M38 signature in pathogenesis in these tissues to cancer prognosis might also be important.
Our next study will focus on validation of these candidate genes filtered out in both nmMLCK and TNFa studies and generate a more accurate cancer prognosis platform with a refined gene set, which will lead to the development of cancer risk prediction/ prognosis gene array in clinical trials.MYLK is not in the M38 gene list, although the 38 genes were based on nmMLCK knockout mice.The possible complex reason might be that nmMLCK (210 Kd) is an isotype of MYLK gene product, while MYLK also produces smMLCK (108 Kd), which comprises of .80% of the MYLK gene products in lung.nmMLCK knockout does not interfere with smMLCK expression, but the microarray platform does not differentiate nmMLCK from smMLCK.This fact drives successful filtration of the 38 nmMLCK-mediated genes, but MYLK was not able to survive in the M38 gene list.To address the effect of MYLK in cancer survival prediction, we re-analyzed our datasets with the 39 genes (M38 genes plus MYLK), but no obvious improvement was found (Table S3).Nevertheless, several recent studies indicate that nmMLCK expression is indeed changed in human cancers, such as colorectal cancer [33] and prostate cancer [34].
We used a scoring system to assign a M38-based risk score to each patients.This scoring system can also be directly applied to other published cancer gene signatures.The comparison between cancer gene signatures can be simply conducted by comparing the prognostic power of the risk scores of individual gene signatures.In this study, we used the median of M38 score to divide the patents into two parts (high-score and low-score patients) to do categorized analyses (such as Kaplan-Meier analysis and log-rank test).Clinically, we can use zero as an absolute cutoff to stratify the patients into high-risk and low-risk groups, because the median of M38 score is approximately equal to zero in each dataset.
This study provides the first prognostic cancer gene signature derived from a murine model of nmMLCK-associated lung inflammation.Activation of nmMLCK-involved pathways contributes to tumor growth and progression in human cancers.These findings support the notion that nmMLCK is an attractive candidate molecular target in lung diseases.(PDF)
Supporting Information
Table S1 Differentially expressed genes between VILIexposed WT and VILI-exposed nmMLCK KO mice.(PDF) Table S2 Differentially expressed genes between WT control and VILI-exposed WT mice.(PDF) Table S3 Univariate Cox proportional hazards regression of overall survival against M38+MYLK signature status. (PDF)
Figure 2 .
Figure 2. Expression of M38 signature predicts poor clinical outcome in multiple human cancers.Kaplan-Meier survival curves for patient groups identified by M38 risk score.Red curves are for the high-score patients while blue curves are for the low-score patients.High-score patients are defined as those having a M38 risk score greater than or equal to the group median score.P-values indicate significant differences in overall survival as measured by log-rank tests.doi:10.1371/journal.pone.0094325.g002
Figure
Figure S1 Application of the M38 signature to training datasets representing four human cancers.Kaplan-Meier survival curves for patient groups identified by M38 risk score.Red curves are for the high-score patients while blue curves are for the low-score patients.High-score patients are defined as those having a M38 risk score greater than or equal to the group median score.P-values indicate significant differences in overall survival as measured by log-rank tests.(PDF) Figure S2 M38 signature adds prognostic value to clinicopathologic factors associated with survival in human breast cancer.Kaplan-Meier survival curves of patient cohorts grouped by (A) age, (B) tumor grade, or (C) ER status.Red curves are for the high-score patients while blue curves are for the low-score patients.High-score patients are defined as those having a M38 risk score greater than or equal to the group median score.P-values indicate significant differences in overall survival as measured by log-rank tests.(PDF) Figure S3 M38 signature adds prognostic value to clinicopathologic factors associated with survival in human colon cancer.Kaplan-Meier survival curves of patient cohorts grouped by (A) age or (B) clinical stage.Red curves are for the high-score patients while blue curves are for the low-score patients.High-score patients are defined as those having a M38 risk score greater than or equal to the group median score.Pvalues indicate significant differences in overall survival as measured by log-rank tests.(PDF)
a FC: fold change, which is calculated by dividing the expression in VILI-exposed WT mice by the expression in WT control mice.b FC: fold change, which is calculated by dividing the expression in VILI-exposed nmMLCK KO mice by the expression in WT VILI-exposed mice.doi:10.1371/journal.pone.0094325.t001
Table 4 .
Multivariate Cox proportional hazards regression of overall survival.Figure S4 M38 signature adds prognostic value to clinicopathologic factors associated with survival in human glioma.Kaplan-Meier survival curves of patient cohorts grouped by age.Red curves are for the high-score patients while blue curves are for the low-score patients.High-score patients are defined as those having a M38 risk score greater than or equal to the group median score.P-values indicate significant differences in overall survival as measured by log-rank tests.(PDF) Figure S5 M38 signature adds prognostic value to clinicopathologic factors associated with survival in human lung cancer.Kaplan-Meier survival curves of patient cohorts grouped by (A) age, (B) lymph node status, or (C) tumor size.Red curves are for the high-score patients while blue curves are for the low-score patients.High-score patients are defined as those having a M38 risk score greater than or equal to the group median score.P-values indicate significant differences in overall survival as measured by log-rank tests. | 4,853.8 | 2014-04-08T00:00:00.000 | [
"Biology",
"Medicine"
] |
Polycaprolactone Nanoparticles as Promising Candidates for Nanocarriers in Novel Nanomedicines
An investigation of the interactions between bio-polymeric nanoparticles (NPs) and the RAW 264.7 mouse murine macrophage cell line has been presented. The cell viability, immunological response, and endocytosis efficiency of NPs were studied. Biopolymeric NPs were synthesized from a nanoemulsion using the phase inversion composition (PIC) technique. The two types of biopolymeric NPs that were obtained consisted of a biocompatible polymer, polycaprolactone (PCL), either with or without its copolymer with poly(ethylene glycol) (PCL-b-PEG). Both types of synthesized PCL NPs passed the first in vitro quality assessments as potential drug nanocarriers. Non-pegylated PCL NPs were internalized more effectively and the clathrin-mediated pathway was involved in that process. The investigated NPs did not affect the viability of the cells and did not elicit an immune response in the RAW 264.7 cells (neither a significant increase in the expression of genes encoding pro-inflammatory cytokines nor NO (nitric oxide) production were observed). It may be concluded that the synthesized NPs are promising candidates as nanocarriers of therapeutic compounds.
Introduction
Nanoparticles (NPs) are frequently defined as solid colloidal particles in the range of 10-1000 nm. Polymer nanoparticles (PNPs) are nanospheres and nanocapsules made of polymeric materials [1]. Nanospheres are matrix particles, i.e., particles whose entire mass is solid. Molecules may be adsorbed on the sphere's surface or encapsulated within the particle matrix. Polymer nanoparticles have recently been growing in importance and they play crucial roles in a wide range of fields, including in electronics, photonics, conducting materials, sensors, medicine, biotechnology, pollution control, and environmental technology [2,3]. Biodegradable nanoparticles are frequently used in medicine and biotechnology to improve the therapeutic value of various drugs. The nanoencapsulation of drugs in PNPs increases their bioavailability, solubility, retention time, efficacy, specificity, tolerability, and drug therapeutic index values [1,[3][4][5]. PNPs may be functionalized to achieve so-called "intelligent targeting", i.e., targeted delivery to specific cells, tissues, or organs [6][7][8]. Numerous biodegradable polymers such as polycaprolactone (PCL), polylactic acid (PLA), polyglycolic acid (PGA), and polylactide-co-glycolide (PLGA) are being tested for possible use in drug delivery systems [9,10]. Several drugs have been successfully encapsulated in PNPs to their improve bioavailability, bioactivity, and control delivery [4,11]. The main applications focus on diseases such as cancer, AIDS, diabetes, malaria, prion disease, and tuberculosis [12][13][14][15][16][17].
The features of PNPs, such as their toxicity, biocompatibility, biodistribution, and immunogenicity, play crucial roles in designing new drug delivery systems [18,19]. It is well known that the dimensions, particle charges, and surface modifications are the parameters that affect these features [20,21]. It is especially important to investigate the impacts of potential nanocarriers on immune system cells, which form the first line of defense against external risks. In particular, it is crucial to create carriers that will be invisible ("the stealth property") to phagocytic cells, such as the macrophages. Macrophages are cells of the immune system involved in the inflammatory process. Activated macrophages act like scavengers that phagocyte pathogenic molecules. Macrophage activation occurs during their exposure to pathogenic particles, which may also be nanoparticles [22]. This is why the macrophages are the first barrier in the way of drug nanocarriers to their targets, since nanoparticles are seen as alien and are subsequently absorbed and degraded by the phagocytic cells [23]. Therefore, decreasing the uptake of PNPs by the macrophages is one of the main goals in designing new drug delivery systems [24].
One of the most popular and important methods of surface modification that allows the "stealth" properties of PNPs to be increased is the immobilization of a polyethylene glycol (PEG) corona on a particle's surface. The physicochemical properties of uncharged, hydrophilic polymers, such as their water solubility, extensive hydration, good conformational flexibility, and high chain mobility, cause a steric exclusion effect that provide protein resistance of PEG-based coatings comprising NPs [25,26]. Additionally, PEG side chains enable further functionalization of NPs using specific, well-designed bio-ligands. This strategy leads to greater targeting of the action of the encapsulated drug by transporting it to the site of intended release. For example, folate-decorated NPs carrying a chemotherapeutic agent have been shown to be internalized in cancer cells through FRα-mediated endocytosis [27]. In conclusion, modification of the outer surfaces of NPs by PEG coverings is a key point in controlling the biological fate of pegylated NPs. The process influences the biodistribution, immune system recognition, transport through biological matrices (tumor extracellular matrix, mucus, bacteria biofilm), cellular uptake, and recognition of the destination [27,28].
Generally, PNPs made of polycaprolactone (PCL) attract more attention and are widely studied. To obtained the best nanocarrier designed for targeted drug delivery, it is very important to evaluate its action on many levels. This involves detailed descriptions of the interactions of the carrier with immune cells. Therefore, the present study focuses on investigating PCL NPs and their interactions with phagocytic cells. Copolymers of hydrophilic PEG and hydrophobic PCL typically display high biocompatibility and biodegradability [29]. Grossen at al. [29] reviewed the synthesis, production, description, and application of PEG-PCL-based nanomedicines [29]. In this study, we assess the usefulness of PCL NPs as a potential drug delivery system by describing their interactions with cells of the immune system, namely the RAW 264.7 mouse murine macrophage cell line. Two types of PCL nanoparticles (non-pegylated (PCL) and pegylated (PCL-PEG)) are tested. We investigate the toxicity, immunological influence, and efficiency of macrophage endocytosis. Although the surface PEG length and PEG density are difficult to control, they influence the "stealth" properties of pegylated nanoparticles, as well as their biological behaviors [13,[16][17][18][19]. It is well known that NPs pegylated using PEG 5000 demonstrate higher circulation times in the plasma and better distribution in tumor tissue. Moreover, enhanced accumulation in tumor cells has been shown. Qhattal et al. described a nanocarrier that was able to severely inhibit the MDA-MB-231 tumor growth and prolong the survival time of mice harboring B16 tumors [30] In our study, we use unmodified NPs as well as NPs grafted with a PEG layer. The covalent bonding of polyethylene glycol (PEG) is intended to prevent opsonization by antigens and serum proteins on NP surfaces, thereby increasing the half-life of NPs in the bloodstream by reducing the immune response [6,11]. PEG chains are also more hydrophilic and form a hydration layer around pegylated NPs, making them less efficiently phagocytized by the macrophages. Reducing the NP uptake by the macrophages is one of the main goals in designing new drug delivery systems [8].
PCL and PCL-PEG Nanoparticle Preparation
Polycaprolactone (PCL) and pegylated-polycaprolactone (PCL-PEG) nanoparticles were prepared by the nanoemulsion templating method [31]. The nanoemulsion was made using the phase inversion composition (PIC) technique. The phase inversion was achieved by stepwise addition of water to toluene with dissolved polymers (PCL with or without the addition of PCL-b-PEG and a mixture of surfactants (TWEEN ® 20/Span ® 20) at constant room temperature). Following the preparation of the stable nanoemulsion, toluene (a toxic organic solvent) was evaporated and the surfactants were removed by dialysis. To prepare fluorescently labeled nanoparticles, a fluorescent dye (cumarine-6) was encapsulated in the formed nanoparticles. Coumarin-6 was dissolved in toluene (0.5 mg/mL) before the emulsification process. Drug-loaded nanoparticles can also be synthesized, as described previously [31].
PCL and PCL-PEG Nanoparticle Description
The sizes (the hydrodynamic diameters) of the formed nanoemulsion and synthesized nanoparticles were determined using the dynamic light scattering (DLS) technique using a Zetasizer Nano ZS from Malvern Panalytical Instruments (Warsaw, Poland). The values were estimated as averages of at least three subsequent measurements with 20 runs. Additionally, the sizes and concentrations of the synthesized nanoparticles were measured via nanoparticle tracking analysis (NTA) using an NS500 NanoSight instrument from Malvern Panalytical Instruments (Warsaw, Poland). UV-Vis absorption spectra of the loaded nanoparticles, as well as empty ones, were acquired to confirm coumarin-6 encapsulation using the UV-1800 spectrophotometer (Shimadzu) Thermo Fisher Scientific, Warsaw, Poland. To evaluate the stability of the obtained nanoparticles, their sizes were detected over time during storage. The experiments were conducted at 25 • C in the preparation buffer (water), as well as in cell culture media (DMEM containing 10% FBS).
Cell Culture
The mouse murine macrophage cells (RAW 264.7) were cultured in a DMEM medium supplemented with 1% L-glutamine, high glucose, and 10% FBS at 37 • C in a humidified incubator (BINDER (VWR, Radnor, PA, USA) with 5% CO 2 atmosphere. Two days be-fore the MTT, LDH, and NO experiments, cells at a density of 3 × 10 4 cells/well were seeded in appropriate 96-well plates. Similarly, two days before flow cytometry and qPCR experiments, the cells were seeded at a density of 3 × 10 5 cells /well in 6-well plates.
2.6. Cell Viability and Cytotoxicity Assays 2.6.1. MTT Reduction Test To evaluate the RAW 264.7 cell viability, an MTT reduction assay was performed as described previously [32]. For the assay, different types of PCL NPs (pegylated and non-pegylated; doses of ca. (circa) 16,500, 7000, and 3500 NPs per cell) were used. NPs resuspended in a complete fresh medium were added to the appropriate wells and incubated with the RAW 264.7 cells for 24 h. Then, after medium aspiration the cells were treated under standard culture conditions with 50 µL of 0.5 mg/mL MTT (3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide) resuspended in serum-free medium. After 4 h the MTT reagent was eliminated and the cells were incubated and shaken for 10 min with 100 µL of DMSO (dimethyl sulfoxide). The metabolically active cells converted the yellow tetrazolium salt into purple formazan. Data were obtained from absorbance measurements at the wavelength of 570 nm (TECAN Infinitive M200 Pro, TECAN, Männedorf, Switzerland). The RAW 264.7 cells in the control were incubated only with fresh medium containing 0.015 M NaCl. Six replicates were performed for each experiment condition. The final results represent the average cell viability from five independent experiments.
LDH Cytotoxicity Detection Kit
The cytotoxicity detection kit (LDH) was adopted to measure the toxicity of the used nanomaterials as described previously [32]. LDH was detected in the culture medium when the cell membrane was destabilized or destroyed. The assay was run according to the manufacturer's instructions. Briefly, the RAW 264.7 cells were treated for 4 h with different types of PCL NPs (doses of ca. (circa) 16,500, 7000, and 3500 NPs per cell). Then, after centrifugation (250× g, 7 min), 50 µL of the supernatant was incubated for 30 min in the dark at room temperature with 50 µL of the reaction mixture. Finally, the absorbance was recorded at 490 nm (with the reference wavelength at 610 nm) (TECAN Infinitive200, TECAN Männedorf, Switzerland). Spontaneous LDH release was detected from untreated cells (negative control). The maximum LDH level, which corresponds to cell death, was determined after Triton X-100 cell treatment. Six replicates were performed for each experimental condition. The final data exhibiting the cytotoxic potential of the synthesized nanomaterials come from five independent experiments.
Flow Cytometry
The flow cytometry technique was adopted to determine the quantitative cellular uptake as described previously [32]. PCL NPs fluorescently labeled with coumarin-6 were resuspended in fresh full culture medium and added to the cells (doses of ca. (circa) 100, 500, 1000, 2000, 3500, and 5000 NPs per cell). After a 2 h incubation period (37 • C in 5% CO 2 atmosphere), the medium with PCL NPs was removed and the cells were washed four times with cold phosphate-buffered saline (PBS), pH 7.4). Finally, the cells were suspended in 300 µL of cold PBS and kept on ice until the measurements. The following inhibitors were used to determine the NP endocytosis pathway: chlorpromazine (CPZ 8 µg/mL), filipin III (1 µg/mL), and amiloride (50 µM). Before the experiments, the cells were pre-incubated for 1 h under standard culture conditions with the above-mentioned inhibitors. The PCL NP uptake was determined using a BD FACSCalibur (Becton, Dickinson and Company, San Jose, CA, USA) flow cytometer and CellQuestPro software Becton, Dickinson and Company, San Jose, CA, USA). In total, 10,000 events per sample were acquired. The background fluorescence corresponding to cell autofluorescence was evaluated for the RAW 264.7 cells treated with 0.015 M NaCl or non-fluorescent PCL NPs. Two replicates for each type and dose of the obtained PCL NPs dose were performed. The final data reflect the average of four independent experiments.
NO Determination Test
The Griess Reagent Kit for Nitrite Determination was used to measure the NO release as a result of a 4 h RAW 264.7 treatment with PCL NPs (doses of ca. (circa) 16,500, 7000, and 3500 NPs per cell). The assay was performed according to the manufacturer's instructions. After 4 h, 75 µL of the medium was transferred to a fresh 96-well plate containing 10 µL of Griess reagent per well. The mixture was incubated for 30 min in the dark at room temperature. The photometric reference sample constituted 10 µL of the Griess reagent and 140 µL of deionized water. The amount of nitrite was evaluated using a spectrophotometric measurement of absorbance at the 548 nm wavelength (TECAN Infinitive200TECAN Männedorf, Switzerland). Untreated cells served as the control. Six replicates for each PCL NP dose were performed. The obtained results represent the average of three independent experiments.
Visualization Studies
Fluorescence microscopy was used for the visualization of the RAW 264.7 cells after 2 h PCL NP (fluorescently labeled with coumarin-6) treatment (ca. (circa) 2500 NPs per cell). Two days before the experiment, the RAW 264.7 cells were seeded on 10 mm plates at a density of 1 × 10 5 cells per well. Images were acquired using an EVOS fluorescence microscope (Life Technologies, Carlsbad, CA, USA) (Thermo Fisher Scientific, Warsaw, Poland) with 480 nm excitation and 520 nm emission.
Quantitative Real-Time PCR Experiments
Two days before the experiment, the RAW 264.7 cells were seeded on 6-well plates at a density of 3 ×
Results and Discussion
PCL nanoparticles (both non-pegylated (PCL) and pegylated (PCL-PEG)) were prepared using the nanoemulsion templating method as described previously [31]. At the beginning, a stable nanoemulsion containing selected polymers and actives was prepared using the PIC technique. The nanoemulsion was formed via drop-by-drop addition of water to the polymer solution, containing PCL (3 mg mL −1 ) or PCL with PCL-b-PEG (2.88 mg mL −1 and 0.13 mg mL −1 , respectively) and a mixture of non-ionic surfactants (TWEEN ® 20/Span ® 20) in toluene. The formed nanoemulsion contained: 20% (v/v) oil phase, 5% (v/v) TWEEN ® 20/Span ® 20 (HLB = 13.5), and 75% (v/v) water. The mean size of the nanoemulsion droplets containing PCL and PCL/PCL-b-PEG (PCL-PEG) measured as by the dynamic light scattering (DLS) technique was~250 nm with a polydispersity index (PdI) value < 0.2, as shown in Figure 1A. In the end toluene, was evaporated with a rotary evaporator, which led to the formation of polymeric nanoparticles. The mixture of surfactants (TWEEN ® 20/Span ® 20) was removed using dialysis. The average size of the prepared PCL and PCL-PEG nanoparticles measured by DLS or NTA was~90 nm, with a Pdi value of below 0.2 ( Figure 1B,C).
The encapsulation of the model active substance, i.e., the fluorescent dye coumarin-6, was confirmed by UV-Vis spectrophotometry analysis. The comparison of the spectra for empty polymeric and coumarin-6-loaded nanoparticles provided evidence of successful encapsulation of the drug. A characteristic peak at 460 nm in the UV-Vis spectra of the nanoparticle suspension containing coumarin-6 was observed (Figure 2). The final concentration of PCL and PCL-PEG nanoparticles as measured using the NTA technique was 2 × 10 11 nanoparticles/mL. A nanosystem's biocompatibility and long-term stability are important parameters for its potential biomedical application. The stability of nanoparticles in DMEM containing 10% fetal bovine serum (FBS) was evaluated, and we found that they retained their size without showing any significant changes for at least 48 h. Synthesized nanoparticles (PCL and PCL-PEG) were formed with bio-acceptable components, except for toluene, which was evaporated after preparation. The determination of the interactions between nanocarriers and certain model cell lines is a crucial step in designing a nanoparticulate system for controlled drug delivery. Estimating the possible toxicity of nanomaterials is very important and must be taken into consideration in the first step of the investigation. Therefore, to evaluate changes in cell viability following incubation with the obtained PCL NPs, various assays were conducted. The results obtained using the MTT and LDH tests were consistent ( Figure 3A,B). They indicated a safer action profile of pegylated PCL NPs. In both cases (PCL with or without the PEG layer), we observed no changes related to cell membrane disruption, which probably resulted from the negative surface charge of the used PCL NPs. A previous study indicated the contribution of positively charged molecules to the toxicity of nanomaterials due to membrane disruption [33]. As indicated by the previous studies, NPs are quickly eliminated from the bloodstream after injection [23,34]; therefore, we investigated the interactions of different types of synthesized PCL nanoparticles with phagocytic cells (the RAW 264.7 cell line). It is well known that cells of the mononuclear phagocytic system (MPS) such as macrophages have the potential to recognize and remove NPs before they reach their destination. Plasma protein adsorption on the surfaces of NPs plays a key role in this process [19].
This phenomenon leads to a reduction of the circulating half-life, and hence affects the capacity of nanomaterials to be efficient nanovehicles in a controlled drug transportation system. Moreover, it has been shown that the macrophage response depends on the particle size and surface charge [19,35]. It appears that the NP size affects the cellular uptake rate and the internalization mechanism [36]. Modification of nanocarrier surface properties by "stealth" polymers, e.g., by PEG, leads to a deceleration of the opsonization process, which consequently increases the half-life of the NPs in the bloodstream. Moreover, "stealth" polymers also protect NPs by suppressing their uptake by the macrophages [37]. It has been shown that the higher protein adsorbability of hydrophobic compared to hydrophilic surfaces enables the uptake of more hydrophobic particles by the phagocytes in vitro, as well as the quick clearance of hydrophobic particles in vivo [38]. A significant role in prevention of protein adsorption and cell membrane disruption can be achieved through modification of the surfaces of NPs by covering well-hydrated PEG chains. The process masks the original surfaces of NPs and provides steric hindrance. This effect is correlated with the PEG properties. Proper pegylation of the particle surfaces is a crucial step, as the PEG quality, chain size, number of chains, density, and the way they are arranged have huge impacts on the interactions with the target cells and biodistribution of the nanocarrier in the body [39]. Wang et al. studied the effects of surface PEG length on in vivo delivery of PCL NPs, finding that NPs with a PEG surface length of 13.8 nm (MW = 5000 Da) significantly decreased the absorption of serum protein and interactions with macrophages, which finally translated into increased blood circulation time, enhanced tumor accumulation, and improved antitumor efficacy [26].
Visualization using fluorescent microscopy allowed observation of the presence of PCL NPs in the cytoplasm of the RAW 264.7 cells, showing that internalization of PEG-modified NPs differed remarkably in comparison to unmodified NPs (Figure 4). The first set of uptake experiments sing flow cytometry conducted at 4 • C confirmed that the internalization process was active, as significant inhibition of the endocytosis process was recorded. The following experiments show the efficiency of PCL NP internalization by the RAW 264.7 cells. For both types of PCL NPs, a positive correlation between the endocytosis level and the administered PCL NP dose was determined ( Figure 5A). Non-pegylated PCL NPs were internalized more effectively and we observed the saturation of cells from a dose of 1000 NPs per cell. In the case of PCL-PEG NPs, we observed a significant reduction of endocytosis ( Figure 5A). As mentioned above, prevention of opsonization is crucial in order to reduce the efficiency of the recognition and capture of nanocarriers by the cells of the immune system. The obtained results indicate that PCL NP coating with a layer of highly hydrated PEG chains significantly slows down the absorption of PCL NPs by the macrophage cells. This is definitely a desired effect, especially considering the fact that the potential carrier should be less visible to the cells of the immune system. The presented results are consistent with our previous studies carried out using polymeric nanocapsules of another type [32,[39][40][41][42]. Recent studies [43] showed that doxorubicin (DOX)-loaded PCL-PEG NPs modified by collagenase IV (ColIV) and clusterin (CLU) were effectively accumulated in MCF-7 tumor cells and at the same time they overcame the phagocytosis by RAW264.7. Moreover, the interaction of DOX-loaded mPEG-PCL NPs grafted by 2-hydroxyethyl cellulose (HEC) with macrophages cells indicates that such particles are not recognized as foreign bodies [44]. The available literature data point to the engagement of various endocytosis pathways in the NPs internalization process [45]. Therefore, in the present study the internalization mechanism of the synthesized PCL NPs was investigated. Before the experiment, the RAW 264.7 cells were pre-incubated with specific agents that abolish the defined endocytosis pathway. Chlorpromazine (CPZ) causes the formation of clathrin-coated pits [46] and was used to inhibit clathrin dependent endocytosis. Amiloride activity is connected with blocking of the Na + /H + exchanger and the prevention of membrane ruffling, and thus it inhibits micropinocytosis [47,48]. Caveolae-dependent internalization is diminished by filipin III, an inhibitor that binds to cholesterol and distorts the structure and functions of cholesterol-rich membrane domains [47]. Data obtained in the study indicate that the clathrin-mediated endocytosis pathway ( Figure 5B,C) is engaged in the internalization of PCL NPs.
The physicochemical parameters of nanomaterials affect their behavior in organism systems. Therefore, in the present study, we aimed to determine if the presence of both types of PCL NPs induces inflammation in the RAW 264.7 cell line. To exclude the proinflammatory activity of PCL NPs, we performed qPCR experiments, which detect the levels of interleukin 6 (IL-6), tumor necrosis factor (TNF-α), induced nitric oxide synthase (iNOS), and inhibitory protein for the factor NF-kB (IkBα). GAPDH (3-phosphoglycerate dehydrogenase) was the reference gene. The outcomes were analyzed against a reference sample, which constituted non-NP treated cells. As we observed no expression (except TNF-α) of the investigated genes in the reference sample, data estimation based on the Rq method was impossible and the results of the experiments are shown using the Cq value, which reflects the fluorescence signal originating from particular genes exceeding the threshold value. A recap of the experiment is presented in Figure 6A,B. It is evident that for both kinds of PCL NPs, regardless of the used doses, the levels of two genes (TNF-α and IkBα) increased. TNF-α is a key factor that is indispensable in the proper functioning of macrophages and other immune cells. The cytokine plays an important role in proliferation, maintenance of cellular homeostasis, and tumor progression. Its importance in modulating immunological processes is also well known [49]. Therefore, the similar levels of detection for both NP treated and untreated cells were not surprising. The second gene that was expressed in the cells was the gene for IkBα. A major function of IkBα is to bind with the NF-kB, and thus to suppress its pro-inflammatory activity. The result obtained for IL-6 is similarly significant. This well-known major pro-inflammatory cytokine was not detectable in our experiments. To conclude, bearing in mind the influence of PCL NPs on the expression of immune mediators in the RAW 264.7 cells, its use as a nanovehicle for active compounds may be considered. Nitric oxide (NO) is well known for its various functions. Its engagement in the immunological response has been widely described. Therefore, we additionally focused on the determination of the NO level as a result of the interaction of PCL NPs with the RAW 264.7 cell line. We did not observe a significant increase of the NO level in the performed experiments (Figure 7). Similar results were observed by Abamor et al. for J774 macrophages cells [50]. Taken together with the qPCR experiments, the obtained results suggest low ability of the tested PCL NPs to induce an immune response.
Conclusions
PCL NPs have been used in a few studies, however they must be thoroughly characterized before they can be used in vivo. Therefore, any new experimental design that allows better insight into the interactions of NPs with cells could be considered novel. There are many studies on novel nanomedicine applications, especially in the field of nanocarriers, however very often the studies are focused on investigating the activities of drugs released in target destinations. For example, in some tumor tissues there have been very good descriptions of how these drugs act, their interactions with tumor cells, and how efficient the delivery process is, however unfortunately these studies lack insight into how the loaded nanocarriers behave in the whole organism, which in our opinion is very important. One has to consider the following issue-what will we achieve if we destroy cancer cells and cause an excessive immune response that threatens a patient's life?
The determination of the interactions between nanocarriers and living model cell lines is a prerequisite to designing a nanoparticulate system for controlled drug delivery. Our studies showed that both of the synthesized PCL nanoparticles (non-pegylated (PCL) and pegylated (PCL-PEG)) passed the first in vitro quality assessments as potential drug nanocarriers. The tested nanoparticles did not affect the viability of the tested cells and did not elicit a response from the immune system (in experiments with the RAW 264.7 cells). Non-pegylated nanoparticles were preferentially uptaken by the tested cell line rather than the pegylated ones. Based on our results, it may be concluded that the synthesized PCL and PCL-PEG nanoparticles are promising candidates for nanocarriers of therapeutic compounds. Their potential clinical use should be the subject of future studies.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author. The data are not publicly available due to no requirements in this respect on the part of the research financing project. | 6,045.2 | 2021-02-01T00:00:00.000 | [
"Medicine",
"Materials Science"
] |
Identification of new candidate biomarkers to support doxorubicin treatments in canine cancer patients
Background Both human and veterinary cancer chemotherapy are undergoing a paradigm shift from a “one size fits all” approach to more personalized, patient-oriented treatment strategies. Personalized chemotherapy is dependent on the identification and validation of biomarkers that can predict treatment outcome and/or risk of toxicity. Many cytotoxic chemotherapy agents, including doxorubicin, base their mechanism of action by interaction with DNA and disruption of normal cellular processes. We developed a high-resolution/accurate-mass liquid chromatography-mass spectrometry DNA screening approach for monitoring doxorubicin-induced DNA modifications (adducts) in vitro and in vivo. We used, for the first time, a new strategy involving the use of isotope-labeled DNA, which greatly facilitates adduct discovery. The overall goal of this work was to identify doxorubicin-DNA adducts to be used as biomarkers to predict drug efficacy for use in veterinary oncology. Results We used our novel mass spectrometry approach to screen for adducts in purified DNA exposed to doxorubicin. This initial in vitro screening identified nine potential doxorubicin-DNA adduct masses, as well as an intense signal corresponding to DNA-intercalated doxorubicin. Two of the adduct masses, together with doxorubicin and its metabolite doxorubicinol, were subsequently detected in vivo in liver DNA extracted from mice exposed to doxorubicin. Finally, the presence of these adducts and analytes was explored in the DNA isolated from dogs undergoing treatment with doxorubicin. The previously identified nine DOX-DNA adducts were not detected in these preliminary three samples collected seven days post-treatment, however intercalated doxorubicin and doxorubicinol were detected. Conclusions This work sets the stage for future evaluation of doxorubicin-DNA adducts and doxorubicin-related molecules as candidate biomarkers to personalize chemotherapy protocols for canine cancer patients. It demonstrates our ability to combine in one method the analysis of DNA adducts and DNA-intercalated doxorubicin and doxorubicinol. The last two analytes interestingly, were persistent in samples from canine patients undergoing doxorubicin chemotherapy seven days after treatment. The presence of doxorubicin in all samples suggests a role for it as a promising biomarker for use in veterinary chemotherapy. Future studies will involve the analysis of more samples from canine cancer patients to elucidate optimal timepoints for monitoring intercalated doxorubicin and doxorubicin-DNA adducts and the correlation of these markers with therapy outcome. Supplementary Information The online version contains supplementary material available at 10.1186/s12917-021-03062-x.
Background
Traditionally, cancer has been treated as a homogenous disease with chemotherapeutic treatment decisions based on tumor location, histopathologic findings, and expected biologic behavior [1]. However, genetic variations in patients can result in different responses to therapy and varying degrees of toxicity, despite phenotypically similar diseases [2,3]. For these reasons, cancer chemotherapy is currently shifting from the concept of "one size fits all" to more personalized, patient-oriented approaches, with the goal of optimizing individual therapeutic protocols to increase treatment success and/or decrease undesired side effects [1].
Personalized chemotherapy is based on the ability to identify and target a patient subpopulation, predict drug efficacy, patient response, and likelihood of toxicity. The identification and validation of predictive biomarkers, robust chemical or molecular indicators of the outcome selected, is essential for identifying those patients who will most likely benefit from a drug regimen or will need a dose modification from the standard dosage [4,5]. For example, a drug dose or a combination drug protocol may be adapted as a result of biomarker measurement to allow for less unwanted side effects without compromising treatment success.
There are multiple reports of identification and use of predictive biomarkers with traditional chemotherapeutics in a variety of human cancer types including, but not limited to, colorectal, breast, pancreatic and lung cancers [6][7][8][9]. Clinically, however, biomarkers are most commonly used to select patients for treatment with targeted therapies including monoclonal antibodies and small molecule inhibitors, but have not yet been implemented to guide treatment with traditional cytotoxic chemotherapy [10][11][12].
Patient-oriented treatment approaches have recently become of interest for use with veterinary patients, where the treatment goal is to provide a good quality of life while extending patient survival [13]. In veterinary medicine, there is sparse information regarding biomarker development and use, and there are no predictive biomarkers used routinely. Similarly, the use of personalized chemotherapy in veterinary patients is limited, the closest example of which is the use of a receptor tyrosine kinase inhibitor, toceranib (Palladia), for treatment of canine cutaneous mast cell tumors (cMCTs) [14]. Palladia works, in part, by inhibiting the receptor tyrosine kinase KIT resulting in an antiproliferative effect in cancer cells [15]. A large minority of canine cMCTs possess a mutation in the c-kit gene, and in one study, cMCTs with activating mutations in the c-kit gene were approximately twice as likely to respond to treatment with toceranib than those with wildtype c-kit [14].
Biomarker development and application of personalized chemotherapy approaches in veterinary medicine is of particular interest in guiding the practice of dose escalation of routinely used chemotherapeutic drugs [16,17]. By identifying predictive biomarkers of patient response, dose escalation strategies can be modified for each individual to benefit both those who are more likely to respond to the drug used and those who are likely to have a poor response or higher risk of treatment-associated side effects. One example of clinically used dosing strategies to minimize risk of treatment-associated side effects is in treatment of dogs with mutations in the ABCB1 (MDR1) gene. This gene encodes for the drug efflux pump, p-glycoprotein, dysfunction of which can lead to severe adverse drug reactions to many commonly used medications, including multiple chemotherapeutics, due to increased central nervous system exposure to the drug [18]. There is not a dosing strategy proven to be effective in decreasing this risk for dogs with MDR1 gene mutations, and therefore, either a dose reduction of the chemotherapy drug or choosing an alternate chemotherapeutic that is not a substrate for p-glycoprotein is recommended [18]. Research has investigated the pharmacokinetics of chemotherapeutics in relation to the risk of myelotoxicity [19,20], but these strategies have not been clinically adopted for use in personalized veterinary chemotherapy.
Doxorubicin (DOX, Fig. 1), a member of the anthracycline group of compounds, has good anticancer activity against a wide spectrum of tumors including hematopoietic neoplasia, sarcomas, and carcinomas.
It is currently one of the most extensively used chemotherapeutic drugs in canine clinical settings [21,22]. Treatment with DOX is not universally effective and may lead to adverse events, including dose-dependent cardiotoxicity. The intensity of these adverse events varies from patient to patient [21,22]. Given its extensive use in cancer therapy, the development of predictive biomarkers is of particular relevance for management of DOX chemotherapy. The key component of the mechanism of action (MOA) of DOX is the poisoning of topoisomerase II through intercalation into DNA, but other cellular responses have been shown to contribute to its MOA, including the formation of DNA modifications (adducts) [23,24].
DNA adducts from anticancer DNA alkylating drugs have been shown to be good candidate predictive biomarkers of drug efficacy [25]. Monitoring these adducts as predictive biomarkers has the advantage of providing an integrative measure of patient-specific responses, since they account for an individual's absorption, distribution, metabolism, elimination and DNA repair [25]. Furthermore, drug-DNA adducts may be more suitable biomarkers, as compared to non-drug related metabolites because of their specificity [25]. The direct interaction of DOX with DNA creates an excellent opportunity for evaluating DOX-DNA adducts as predictive biomarkers. Previous in vitro studies have characterized a single DOX-DNA adduct generated in the presence of formaldehyde [26,27], but to our knowledge, this or any other DOX-DNA adducts have yet to be detected in vivo.
Detection of DNA adducts in chemotherapy patients can be especially challenging because adducts develop at low levels beyond the typical detection limits achieved by traditional low-resolution spectral detection and high analytical flow rates liquid chromatography-mass spectrometry (LC-MS) methods [28,29]. We previously developed a nanoLC-MS 3 DNA adductomics approach that allows for the screening of potentially every adduct in a hydrolyzed DNA sample. This method is based on high-resolution/accurate-mass (HRAM) data-dependent constant neutral loss monitoring of the 2′-deoxyribose or one of the four DNA bases (guanine (G), adenine (A), thymine (T), and cytosine (C) [30,31]. The accurate mass measurement of an observed DNA adduct can be used for determining its elemental composition, whereas the triggered MS 2 and MS 3 fragmentation spectra provide structural information of the modified base. In addition, the use of nanoflow (300 nL/min) and nanoelectrospray increases sensitivity by providing increased ionization and sampling efficiency [30,31]. The goals of our study were to optimize our adductomics approach to screen for DOX-DNA adducts in vitro and in vivo and to identify candidate predictive biomarkers of DOX efficacy for future investigation in clinical studies.
Screening for DOX-DNA adducts in vitro
Initial screening for DOX-DNA adducts was performed by reacting DOX in the presence of formaldehyde with DNA from calf thymus (CT-DNA) and with DNA extracted from E.coli bacteria. In order to facilitate adduct detection, we implemented a new strategy based on the use of 15 N-isotope-labeled DNA, generated in E.coli bacteria, to be paired with 14 N unlabeled E.coli bacterial DNA. Both DNA species are subjected to the same DOX exposure and sample preparation protocols, and then the two samples ( 14 N-and its counterpart 15 N-bacterial DNA) are combined in a 1:1 ratio prior to LC-MS analysis. In this resulting combined sample, DNA adduct detection is based on the selection of only masses that triggered an MS 3 fragmentation event in the drugexposed DNA samples, and were present as a matching pair of 14 N-DNA and 15 N-DNA adducts, resulting in coeluting peaks when extracted in the full scan chromatogram (this concept is explained in Fig. 2A).
Fig. 1 Doxorubicin and Doxorubicinol
A total of nine DNA adduct masses was detected in CT-DNA and 14 N-and 15 N-bacterial DNA exposed to DOX (Table 1 and Fig. 2B). None of these adducts were detected in the untreated controls.
These masses were detected upon neutral loss of dR, G, A or C. The most frequent neutral loss observed was dR, followed by G, observed five and three times, respectively. Overall, the majority of the adducts were evenly distributed over the 44-minute long chromatographic gradient (Fig. 2B). Two pairs of masses, however, eluted at the same retention time (RT, in Table 1 No. 2 and 3, and No. 7 and 9), suggesting that the pair belongs to the same molecule, and the lower mass is most likely the product of in-source fragmentation of the higher mass in the mass spectrometer.
In the first pair of masses, m/z 531.2062, was detected by neutral loss of dR resulting in a fragment ion of mass m/z 415.1577, which in turn triggered two MS 3 fragmentation events upon neutral loss of guanine and dR, suggesting that this adduct is a crosslink comprising two dR moieties. Indeed, masses m/z 531.2062 and 415.1577 were assigned to a previously detected crosslink formed by a deoxyguanosine, formaldehyde, and deoxyadenosine (dG-CH 2 -dA) [32].
The second pair of masses, m/z 809.2622 and 680.1830 (Fig. 3A) differed by 129.0792 amu, which corresponds to the exact mass of the aminosugar of DOX (Fig. 1).
The data supports that m/z 680.1830 results from in-source fragmentation of the aminosugar from m/z 809.2622. Interpretation of the resulting MS 2 and MS 3 spectra suggests that this is a nucleoside adduct involving guanine, and that the aminosugar moiety of DOX, which is partly lost in the MS source, is not the moiety that reacts with DNA, as reported previously (Fig. 3B) [23]. Literature reports that the reduction of the quinone to a semiquinone results in a radical that adds to either the C4-, C5-, C8-or, to a much lesser extent, to the C2-position of guanine [33]. However, our in vitro system had no metabolic capacity and therefore these masses could originate from decomposition products of the drug. The chemical synthesis and characterization, and the matching of identical fragmentation spectra, is necessary for unequivocal adduct identification and will be the focus of future studies.
Time course of formation and persistence of DOX-DNA adducts in vivo
The presence of the DOX-DNA adducts detected in vitro was then investigated in vivo using a targeted MS/MS approach in DNA extracted from liver samples harvested from mice exposed to two different DOX regimens and followed over time. In the first regimen, mice were acutely exposed to DOX, whereas in the second regimen, mice received a low dose of DOX once a week for 3 weeks. The samples from these studies were used to assess the kinetics of formation of the DOX-DNA adducts and their persistence over time, considering various time points after drug administration.
In addition to the DOX-DNA adducts, our in vitro screening of hydrolyzed DNA samples revealed the presence of a very intense full scan peak with m/z 544.1813. This mass corresponds to the molecular ion of DOX (calculated m/z of 544.1813), suggesting that DOX is still intercalated in the DNA after sample purification using chloroform/isoamyl alcohol. In light of this finding, our targeted MS/MS approach also included the masses of DOX and, to account for metabolism, DOX's major metabolite doxorubicinol (DOXol, m/z 546.1970, Fig. 1) [21]. DOX was detected in hydrolyzed DNA extracted from the liver of mice exposed for 24, 48, and 96 h, whereas DOXol was also detected in all three samples, but at an intensity that was about 150-to-350-fold lower than DOX, assuming similar ionization efficiency and recovery (Fig. 4).
Interestingly, DOX was still present in hydrolyzed DNA isolated from mouse liver one and three weeks post-drug exposure (Fig. 4), and at levels that were more than 2000times above our limit of detection (LOD) of 33.3 fmol oncolumn (measured by triplicate injection of decreasing concentrations of DOX in matrix).
Due to the presence of DNA-intercalated DOX and DOXol in these samples, we considered the possibility that leftover DNA-intercalated drug is reacting with DNA bases during hydrolysis and sample cleanup, resulting in adduct formation in situ. We first attempted to remove the intercalated DOX from the DNA by performing 5 or 10 liquid/liquid extractions with phenol/ chloroform/isoamyl alcohol, but this was shown not to be sufficient at removing the drug from the DNA (data not shown). Subsequently, 15 N-labeled DNA in amounts equal to what was extracted from the mouse livers was added to the samples prior to DNA hydrolysis to check for adduct formation during sample preparation.
Two of the previously detected DOX-DNA adduct masses (m/z 680.1830 and 809.2622 in Table 1) were detected in liver DNA of mice exposed to DOX (Fig. 5A, plot a and c). Interestingly, their 15 N-labeled counterparts were also present, suggesting that these two masses are also formed during sample processing (Fig. 5A, plot b and d). The time course of formation over 96 h exposure of these two masses (and their 15 N-labeled counterparts) showed similar levels and trends for each drug exposure duration, suggesting that this adduct is mostly formed during sample processing rather than in vivo during drug exposure (Fig. 5B). These two DOX-DNA adduct masses were not detected in the liver DNA samples of mice after one or three weeks from treatment with DOX.
Detection of DOX-adducts, DOX and DOXol in DNA isolated from blood of canine cancer patients
Previously detected DOX-DNA adducts (Table 1), DOX and DOXol were targeted (MS/MS) for detection in DNA isolated from three canine patients receiving DOX as part of a multiagent chemotherapy protocol called CHOP (Cyclophosphamide, Hydroxydaunorubicin (DOX), Oncovin (vincristine), and Prednisone). A single blood sample (about 3 mL) was collected from each canine patient one week post-treatment with DOX, when dogs returned to the clinic for a postchemotherapy complete blood count (CBC) per routine protocol at the hospital (Table 2).
Blood samples collected from two dogs who did not receive DOX were used as a negative control. Extracted DNA amounts ranged from 90 to 200 μg. None of the previously observed DOX-DNA adducts were detected in the samples. DOX was detected in the DNA isolated from all three exposed dog blood samples, whereas DOXol was detected in the DNA of two out of three samples. Figure 6 is a typical example of the extracted ion chromatograms for DOX and DOXol in canine patients.
Discussion
In this study, we applied our LC-MS 3 adductomics approach to screen for DNA adducts induced by the anticancer drug doxorubicin (DOX) both in vitro and in vivo. The main findings of this study are 1) a novel LC-MS 3based approach that detects DOX-DNA adducts, DOX, and DOXol; 2) a list of DOX-DNA adduct masses detected in vitro and in vivo; 3) information about the persistence over time of DNA-intercalated DOX and doxorubicinol, in mice receiving DOX and canine cancer patients undergoing DOX treatment; and 4) identification of promising analytes to be developed as predictive biomarkers to support DOX treatment and to be validated for future use in veterinary oncology.
In cancer chemotherapy, precision medicine-based approaches using biomarkers of efficacy are being developed to predict a patient's response to the treatment. Previous studies have shown promise for the use of predictive biomarkers as an alternative to more conventional dose-determining methods. However, in veterinary medicine, there are limited examples that have been demonstrated, but are not commonly used clinically. For example in cats undergoing chemotherapy, a biomarker-based personalized approach for treatment with carboplatin better predicted myelosuppression than dosing based on body surface area [20]. The serum concentration-time curve for DOX has also been found to be Table 1 DOX-DNA adduct masses detected by untargeted screening from reaction of DOX with purified DNA in the presence of formaldehyde. Only the masses that triggered an MS 3 fragmentation event in the DOX-exposed samples, but not in the negative control samples (unreacted DNA and buffer and enzymes used for the DNA hydrolysis) are reported. dR: 2′-deoxyribose, A: adenine, G: guanine, C: cytosine predictive of the reduction of total white blood cell and neutrophil counts in dogs [19].
In the case of DNA adducts as predictive biomarkers, various studies have investigated in humans the relationship between DNA adducts and patient treatment outcome [25]. One study found that by measurement of the interstrand DNA cross-link G-NOR-G, it was determined that Fanconi anemia (FA) patients are hypersensitive to the anticancer drug cyclophosphamide and require a lower dose of the drug compared to non-FA patients prior to hematopoietic cell transplantations [34]. Another study found that out of seven patients being treated for multiple myeloma, the three with the lowest levels of DNA adducts in TP53 and N-ras gene sequences did not respond to treatment with melphalan [35].
With regards to platinum-based chemotherapy, higher levels of platinum-DNA adducts have been observed in isolated leukocyte DNA in patients with good clinical outcome when being treated for ovarian and testicular [36][37][38]. Platinum-DNA adduct formation has also been found to correlate significantly with patient response following treatment for non-small-cell lung cancer with cisplatin [39]. In a study that investigated Oxaliplatin, it was observed that platinum-DNA adduct levels in peripheral blood mononuclear cells correlated significantly to mean tumor volume change [40]. Finally, carboplatin-DNA adduct levels following diagnostic microdoses have been investigated for their potential to predict patient response prior to treatment with the therapeutic dose [41].
Our in vitro screening approach, which resulted in the detection of nine DOX-DNA adduct masses (Table 1), was improved by using a novel strategy involving the pairing of 14 N-and 15 N-labeled DNA ( Fig. 2A). This novel strategy facilitates adduct detection and can be applied Early studies reported the detection, in vitro, of a DNA adduct formed at the N 2 -position of guanine that involved formaldehyde to link the DNA to the aminosugar of DOX [23]. This adduct was not detected in vitro by our approach. We hypothesize that the previously reported poor stability of this adduct in DNA [23] makes its detection, in hydrolyzed samples and after using our approach and current conditions, challenging. In an effort to make this adduct more stable, we performed a reduction using sodium cyanoborohydride [42], however the adduct was not detected in its reduced form (data not shown).
Additionally, an interesting finding from our in vivo adduct detection was the formation of adducts during sample preparation. We hypothesize that the release of DOX in the solution, as a consequence of the DNA being hydrolyzed, results in its reaction with free nucleosides to form DNA adducts. To our knowledge, there is no information currently available about the reactivity of DOX with free nucleosides, suggesting that the nature of this reaction as well as the persistence of DNA-intercalated DOX needs further characterization. If adduct formation is greater when DOX is released from the DNA (such as during enzyme hydrolysis), it is possible that in vivo adduct formation takes place in the course of DNA replication, during which the double helix opens up to allow for the synthesis of a new DNA strand and the intercalated DOX is released. Furthermore, adduct formation during sample processing seems to be solely a characteristic of those drugs that are able to intercalate to DNA, but not of drugs whose structure does not allow for such intercalation. Indeed, a complete removal from the DNA of the anticancer drug cyclophosphamide is possible when using similar sample preparation protocols and in situ formation of adducts during DNA hydrolysis is not observed (data not shown). Understanding if this is a feature of all drugs or molecules that intercalate to DNA will be the focus of future work.
To verify the presence of the DOX-DNA adducts, DOX, and DOXol in a sample type that would be available for biomarker monitoring in the clinic, we analyzed DNA isolated from blood collected from dogs undergoing chemotherapy treatment that included DOX (seven days post-treatment). Because none of the previously observed adducts were detected in these samples, we hypothesize that too much time has passed between sample collection and treatment, and therefore levels of adducts were most likely below the limit of detection of our approach. On the other hand, DOX and DOXol were detected in the DNA extracted from these samples (Fig. 6). The ability of our approach to measure DOX in DNA from patient samples using as little as 3 mL of blood demonstrates the feasibility of using intercalated DOX as a potential predictive biomarker of efficacy. A different study reported an assay for quantification of DOX intercalated with DNA in tumor and tissues using HPLC [43]. In comparison, our LC-MS DNA adductomics approach has the advantage of providing a combined measurement of DOX-DNA adducts, DOX and DOXol, as well as structural information through fragmentation spectra, which can be used to confirm the structure of anticipated molecules, identify the structures of new ones, and facilitate peak assignment in the absence of an isotope-labeled internal standard.
Conclusions
The adoption of personalized approaches in veterinary oncology has the potential not only for increased treatment success, but also to be more cost-effective as cancer chemotherapy for animals can be expensive. Our study provides new insights on promising potential DNA markers to be developed as predictive tools in canine cancer treatment with DOX. To our knowledge, this is the first study that uses a DNA adductomics screening approach for the combined analysis of a clinically used drug and its derived DNA adducts. We demonstrated the ability of our method to monitor DOX in DNA isolated from blood collected from canine cancer patients seven days post-treatment, suggesting that DNA-intercalated DOX may be developed as a predictive biomarker of drug efficacy. Future efforts will focus on measuring intercalated DOX to select veterinary patients that will benefit from chemotherapy and to develop personalized chemotherapy protocols aimed at improving quality of life of canine cancer patients.
Reagents and chemicals
Cell lysis, Proteinase K, and RNase A solutions were purchased from QIAGEN. DNA purified from calf thymus (CT-DNA) was purchased from Worthington Biochemical Corporation, C 3 H 8 O and CH 3 OH were purchased from Honeywell, and CH 2 O (37%), MgSO 4 , and CaCl 2 were purchased from Thermo Fisher Scientific. All other chemicals, materials and enzymes were purchased from Millipore Sigma. All solvents used for chromatography and mass spectrometry analyses were of the purest commercially available grade. 15 N-labeled bacterial DNA was generated by growing E.coli (MG1655 strain) in M9 minimal medium (standard) fortified with 15 NH 4 Cl. 98% DNA labeling was achieved by growing the bacteria for at least three generations. Briefly, 10 μL of bacterial stock culture in 25% glycerol were inoculated in 5 mL M9 minimal media starter culture and incubated overnight in a thermoshaker (37 °C, 200 rpm). Afterwards, 50 μL of cells from the starting culture were added to 1 L M9 minimal medium containing 15 NH 4 Cl and further incubated in the thermoshaker (37 °C, 200 rpm) until an optical density (measured by absorbance at 600 nm) of 1.2 absorbance units was reached. The culture was then split in 50 mL volumes, and the cells were pelleted by centrifugation at 4000 x g for 10 min. Cell pellets were stored at − 80 °C. The same protocol was performed in parallel for generating bacterial DNA that did not contain the 15 N-isotope.
Extraction of bacterial DNA
Cell pellets were vortexed and re-suspended in the remaining liquid. Three 50 mL Eppendorf tubes containing 15 N-DNA were combined into one 50 mL Eppendorf tube and 25 mL of cell lysis solution was added. Next, 150 μL Proteinase K (20 mg/mL) were added followed by overnight incubation in the shaker at room temperature. A total of 7.5 mL of protein precipitation solution was added and vortexed for 20 s followed by incubation on ice for 10 min. The solution was then centrifuged (4000 x g for 10 min) and the remaining supernatant was divided evenly into two parts (~ 16.25 mL) and each were poured into clean Eppendorf tubes containing 17 mL cold isopropanol (IPA) to allow the DNA to precipitate. The precipitated DNA pellet was transferred in a clean silanized glass vial and subsequently washed using 3 mL 70% IPA and 3 mL 100% IPA. Pellets were air-dried and subsequently combined into one 50 mL Eppendorf tube.
The DNA was re-suspended in 10 mL 10 mM PIPES/5 mM MgCl 2 . A total of 150 μL RNAseA solution (4 mg/mL) was added followed by incubation at 37 °C for 2 h. A total of 5 mL protein precipitation solution was added followed by 20 s of vortexing, 5 min incubation on ice, and centrifugation for 10 min at 4000 x g. DNA precipitation was performed by addition of 2 mL cold IPA to each vial. The precipitated DNA was removed from the sample, placed in a clean, silanized glass vial, and washed twice with 1 mL 70% IPA and 1 mL 100% IPA. DNA pellets were air-dried and stored at − 20 °C.
Reaction of calf thymus DNA (CT-DNA) or isotope-labeled bacterial DNA with DOX
DOX (100 μL, 0.6 mg/mL) in Tris-HCl buffer (10 mM, pH 7.4) was added to a reaction mixture containing formaldehyde (500 μL, 300 μM) in water and either CT-DNA (400 μL, 2.5 mg/mL), 14 N-bacterial DNA (500 μL, 1 mg/ mL) or 15 N-bacterial DNA (500 μL, 0.8 mg/mL) in Tris-HCl buffer (10 mM, pH 7.4). The reaction mixtures were incubated at 37 °C for 24 h. The same reaction mixtures without DOX were used as negative controls. Isolation of DNA was performed by IPA precipitation. Briefly, 2 mL cold IPA were added to each vial. The precipitated DNA was removed from the sample, placed in a clean, silanized glass vial, and washed twice with 1 mL 70% IPA and 1 mL 100% IPA. The DNA pellet was dried under a nitrogen stream. All of the steps of this procedure were performed in silanized glass vials.
Animal ethics
All procedures involving live vertebrates, including both mouse and canine patients, were reviewed and approved by the Institutional Animal Care and Use Committee (IACUC) at the University of Minnesota and were carried out in accordance with relevant guidelines and regulations. The IACUC protocols for the rodent study were 1807-36187A and 2006A38206, and the IACUC protocol for the canine patients was 1702-34548A. Additionally, all animal studies, both murine and canine, were performed in compliance with the Animal Research: Reporting of In Vivo Experiments (ARRIVE) guidelines [44].
Mouse treatment Single dose
Adult male C57BL/6 J mice (n = 6) were administered with a 10 mg/kg intraperitoneal injection of DOX or sterile saline vehicle. This dose was selected upon literature evaluation of similar studies involving an acute administration of DOX [45][46][47][48]. Mice were then sacrificed 24, 48, or 96 h following DOX injection (n = 2/time point). Control mice (n = 2) were sacrificed 48 h following vehicle injection. The liver and blood were harvested and stored at − 80 °C.
Weekly dose
Five week old male C57BL/6 N mice (n = 3/group) were administered once a week with DOX 4 mg/kg/week or equivalent volume of sterile saline vehicle by intraperitoneal injection for 3 weeks as we previously reported [49]. Animals were sacrificed at designated time points (1 or 3 weeks) after the last injection. Liver and blood samples were collected and stored at − 80 °C.
Isolation of DNA from liver tissue samples
Genomic DNA from mice exposed to DOX was extracted with the QIAGEN Gentra Puregene Tissue Kit (Qiagen Sciences) following the manufacturer's instructions with minor modifications. In brief, frozen liver tissues (270-390 mg) were minced with a razor blade while on dry ice.
The minced tissues were lysed with 3 mL cell lysis solution and incubated for 5 min on ice to allow for degradation. The tissue was then homogenized using a tissue homogenizer set at low-medium speed for no more than 1 min. Additional 3 mL of cell lysis solution were added and mixed by inverting 25 times. Next, 30 μl of Proteinase K (20 mg/mL) were added and tubes were mixed by inverting 25 times and incubated overnight in a shaker at room temperature. A total of 30 μl RNase A solution (4 mg/mL) was added to each lysate and mixed before incubation for 2 h in a shaker at room temperature. Then, 2 mL of protein precipitation solution were added and tubes were vortexed vigorously for 20 s prior to centrifugation (2500 x g for 15 min). Supernatants were added to cold IPA, and DNA was precipitated and washed as previously described, with the only difference being the DNA pellets were air-dried. The DNA pellets were stored at − 20 °C. The amounts described above were reduced by a factor of 4 when using 50 mg of liver tissue.
Recruitment and sample collection from patients undergoing chemotherapy with doxorubicin
Dogs with spontaneously arising tumors of various histologies undergoing treatment with a DOX-based chemotherapy protocol at the University of Minnesota Veterinary Medical Center were recruited. Dogs eligible for enrollment had a constitutional clinical signs score of 0 or 1 according to the Eastern Cooperative Oncology Group performance scale [50], body weight ≥ 10 kg, and adequate hematologic, renal, and hepatic function. Following written informed consent of each dog owner, blood (6-10 mL, depending on dog's size) was collected via routine venipuncture into a potassium EDTA tube 7 days post-treatment with doxorubicin when dogs returned for their post-chemotherapy CBC per routine protocol at our institution.
Isolation of DNA from blood tissue samples
Genomic DNA was extracted with the QIAGEN Gentra Puregene Blood Kit following the manufacturer's instructions for DNA Purification from Whole Blood with minor modifications. In brief, 3 mL of whole blood were lysed with 9 mL red blood cell (RBC) lysis solution and mixed by inverting 10 times followed by 5 min of incubation at room temperature. Next, the solution was centrifuged for 2 min at 2000 x g to pellet the white blood cells. The supernatant was then discarded leaving approximately 200 μL of residual liquid. The pellet was resuspended in the residual liquid by vortexing vigorously. A total of 3 mL of cell lysis solution was added and tubes were vortexed. 30 μl RNase A solution (4 mg/ mL) was added to each lysate and mixed by inverting 25 times followed by 15 min of incubation at 37 °C, which was followed by 3 min of incubation on ice. Then, 1 mL of protein precipitation solution was added and the tubes were vortexed vigorously for 20 s prior to centrifugation (2000 x g for 5 min). Supernatants were added to cold IPA, and DNA was precipitated and washed as previously described, with the only difference being the DNA pellets were air-dried. The dried pellets were stored at − 20 °C. The amounts described above were reduced by a factor of 6 when using about 0.5 mL of whole blood.
DNA clean-up, hydrolysis and sample enrichment
Prior to hydrolysis and adduct enrichment, purified DNA samples and mouse liver DNA from the acute treatment study were dissolved in 2 mL 10 mM Tris + 1 mM EDTA (pH 7.0). Then, 2 mL of chloroform/isoamyl alcohol (24:1, purified DNA samples) or phenol/chloroform/ isoamyl alcohol (25:24:1, mouse liver DNA samples) was added and the solution was vortexed vigorously for 60 s followed by centrifugation (2000 x g for 10 min), and the upper layer was collected and transferred into a clean 5 mL Eppendorf tube. The extraction was performed twice. After the second extraction, 200 μl 5 M NaCl were added. DNA was precipitated using cold IPA as previously described. The dried pellets were stored at − 20 °C until further use. The extraction was performed in an attempt to remove leftover drug from the samples.
Prior to DNA hydrolysis, DNA was re-dissolved in a 10 mM Tris-HCl/5 mM MgCl 2 buffer (pH 7.4) solution. Initial digestion of DNA was performed overnight at room temperature by addition of 124 U/mg DNA (CT-DNA and bacterial DNA) or 600 U/mg DNA (liver and blood DNA) DNase I (recombinant, from Pichia pastoris). Then, an additional 124 or 600 U/mg DNA, 6.6 mU/ mg DNA (CT-DNA and bacterial DNA) or 20 mU/mg DNA (liver and blood DNA) phosphodiesterase I (type II, from Crotalus adamanteus venom) and 46 U/mg DNA (CT-DNA and bacterial DNA) or 240 U/mg DNA (liver and blood DNA) of alkaline phosphatase (recombinant, from Pichia pastoris) were added and samples were incubated at 37 °C for 70 min. and followed by overnight incubation at room temperature. Enzymes were removed by centrifugation using a Centrifree ultrafiltration device (MW cutoff of 30,000, Millipore Sigma) at 2000 x g for 45 min. A 10-15 μL aliquot was removed from each sample for dGuo quantitation.
Samples were desalted and enriched using a Strata-X solid phase extraction (SPE) cartridge (33 μm, 30 mg/1 ml, Phenomenex). Briefly, the cartridge was pre-conditioned and equilibrated with 3 mL CH 3 OH and 1 mL H 2 O. Samples were loaded, and the cartridge was washed with 3 mL H 2 O and 1 mL 10% CH 3 OH in H 2 O. The two eluting fractions collected were 1 mL 100% CH 3 OH and 1 mL CH 3 OH + 2% formic acid. The fractions were evaporated until dry and stored at − 20 °C. prior to LC-MS analysis, samples were reconstituted in 500 μL (CT-DNA), 250 μL (bacterial DNA) or 10 μL (liver and blood DNA) 5% CH 3 OH in LC-MS grade water. For the DNA samples extracted from mouse liver and dog blood, the two SPE fractions were pooled together prior to LC-MS analysis.
dGuo quantitation by HPLC-UV analysis
Quantitation of dGuo was carried out on an UltiMate 3000 UHPLC System (Thermo Fisher Scientific) with a UV detector set at 254 nm. A 250 × 0.5 mm Luna C18 100A column (Phenomenex, Torrance, CA) at 40 °C was used with a flow rate of 15 μl/min and a gradient from 5 to 25% CH 3 OH in H 2 O over the course of 10 min followed by an increase to 95% CH 3 OH in 3 min and a hold at 95% CH 3 OH for 5 min. The column was re-equilibrated to initial conditions for 8 min.
LC-MS parameters
Samples were injected onto an UltiMate 3000 RSLCnano UPLC (Thermo Fisher Scientific) system equipped with a 5 μL injection loop. Liquid chromatography (LC) separation was performed on a capillary column (75 μm ID, 20 cm length, 10 μm orifice) created by hand packing a commercially available fused-silica emitter (New Objective) with 5 μm Luna C18 bonded separation media (Phenomenex). Gradient conditions were 1000 nL/min for 5.5 min at 5% CH 3 CN in 0.05% formic acid aqueous solution, then decreased to 300 nL/min followed by a linear gradient of 1%/min over 44 min for the untargeted screening and over 30 min for the targeted MS/MS analysis. Column wash was performed with a flow rate of 300 nL/ min at 98% CH 3 CN for 5 min (untargeted screening) or at 95% CH 3 CN for 2 min (targeted MS/MS analysis). Reequilibration was performed with a flow rate of 1000 nL/ min at 5% CH 3 CN for 5 min (untargeted screening) or for 1 min (targeted MS/MS analysis). The injection valve was switched at 5.5 min to remove the sample loop from the flow path during the gradient. All MS data was acquired on an Orbitrap Fusion Tribrid Mass Spectrometer (Thermo Fisher Scientific). Positive mode electrospray ionization and nanospray (300 nL/min) were used on a Thermo Scientific Nanoflex ion source with a source voltage of 2.2 kV, a capillary temperature of 300 °C, a S-Lens RF level set at 60%, and EASY-IC lock mass (m/z 202.0777) enabled.
Constant neutral loss (CNL)-MS n data-dependent acquisition (DDA)
CNL-MS n DDA was performed by repeated full scan detection followed by MS 2 acquisition and constant neutral loss triggering of MS 3 fragmentation. Full scan (range 200-2000 Da) detection was performed by setting the Orbitrap detector at 60,000 resolution with 1 microscan, automatic gain control (AGC) target settings of 2.0E5, and maximum ion injection time set at 50 ms. The most intense full scan ions were fragmented over a 2 s cycle. The MS 2 fragmentation parameters were as follows: quadrupole isolation window of 1.6, HCD collision energy of 20% ± 10%, Orbitrap detection at a resolution of 7500, AGC of 2.0E5, 1 microscan, maximum injection time of 50 ms, and EASY-IC lock mass (m/z 202.0777) enabled. Data-dependent conditions were as follows: triggering intensity threshold of 2.5E4, repeat count of 1, exclusion duration of 30 s, and exclusion mass width of ±5 ppm. The MS 3 fragmentation parameters were as follows: HCD fragmentation, 2 amu isolation window, collision energy of 20% ± 10%, Orbitrap detection at a resolution of 7500 upon the observation of neutral losses | 8,999.2 | 2021-03-01T00:00:00.000 | [
"Biology",
"Medicine"
] |
Targeting of Mutant p53 and the Cellular Redox Balance by APR-246 as a Strategy for Efficient Cancer Therapy
TP53 is the most frequently mutated gene in cancer. The p53 protein activates transcription of genes that promote cell cycle arrest or apoptosis, or regulate cell metabolism, and other processes. Missense mutations in TP53 abolish specific DNA binding of p53 and allow evasion of apoptosis and accelerated tumor progression. Mutant p53 often accumulates at high levels in tumor cells. Pharmacological reactivation of mutant p53 has emerged as a promising strategy for improved cancer therapy. Small molecules that restore wild type activity of mutant p53 have been identified using various approaches. One of these molecules, APR-246, is a prodrug that is converted to the Michael acceptor methylene quinuclidinone (MQ) that binds covalently to cysteines in p53, leading to refolding and restoration of wild type p53 function. MQ also targets the cellular redox balance by inhibiting thioredoxin reductase (TrxR1) and depleting glutathione. This dual mechanism of action may account for the striking synergy between APR-246 and platinum compounds. APR-246 is the only mutant p53-targeting compound in clinical development. A phase I/IIa clinical trial in hematological malignancies and prostate cancer showed good safety profile and clinical effects in some patients. APR-246 is currently tested in a phase Ib/II trial in patients with high-grade serous ovarian cancer.
TP53 is the most frequently mutated gene in cancer. The p53 protein activates transcription of genes that promote cell cycle arrest or apoptosis, or regulate cell metabolism, and other processes. Missense mutations in TP53 abolish specific DNA binding of p53 and allow evasion of apoptosis and accelerated tumor progression. Mutant p53 often accumulates at high levels in tumor cells. Pharmacological reactivation of mutant p53 has emerged as a promising strategy for improved cancer therapy. Small molecules that restore wild type activity of mutant p53 have been identified using various approaches. One of these molecules, APR-246, is a prodrug that is converted to the Michael acceptor methylene quinuclidinone (MQ) that binds covalently to cysteines in p53, leading to refolding and restoration of wild type p53 function. MQ also targets the cellular redox balance by inhibiting thioredoxin reductase (TrxR1) and depleting glutathione. This dual mechanism of action may account for the striking synergy between APR-246 and platinum compounds. APR-246 is the only mutant p53-targeting compound in clinical development. A phase I/IIa clinical trial in hematological malignancies and prostate cancer showed good safety profile and clinical effects in some patients. APR-246 is currently tested in a phase Ib/II trial in patients with high-grade serous ovarian cancer.
Keywords: APR-246, mutant p53, apoptosis, thioredoxin reductase, glutathione, redox balance, clinical trial, cancer therapy iNTRODUCTiON Recent DNA sequencing of 3281 human tumors within The Cancer Genome Atlas (TCGA) has confirmed the high frequency of TP53 mutations in cancer. At least 42% of the cases of 12 common human tumor types carry mutant TP53 (1). In high-grade serous (HGS) ovarian cancer, the fraction of tumors with mutant TP53 is almost 95%. No other gene is mutated at such high frequency in cancer. See also TP53 databases p53.iarc.fr and p53.free.fr. The second and third most frequently mutated genes are PIK3CA that encodes the p110 alpha catalytic subunit of PI3 kinase and PTEN, a lipid phosphatase that regulates Akt kinase activation, which are mutated in 17.8 and 9.7% of the cases of the 12 common tumor types, respectively (1).
Wild type p53 protein induces cell cycle arrest, senescence, and apoptosis in response to cellular stress by upregulating target genes such as p21, Bax, Puma, and Noxa (2). p53 can also regulate cell metabolism and redox status through target genes such as TIGAR and GLS2 (3)(4)(5). It remains unclear exactly how p53 mediates potent tumor suppression. In vivo studies in mice have shown that certain engineered p53 mutants that fail to transactivate pro-arrest and pro-apoptosis target genes can still prevent tumor development (6,7). Similarly, mice lacking the p53 target genes p21 that mediates p53-dependent cell cycle arrest and Puma and Noxa that mediate p53-dependent apoptosis do not show increased tumor incidence (8). These findings argue that other p53 transcriptional targets, for instance those involved in regulation of metabolism, are critical for p53-mediated tumor suppression.
Oncogenic stress as a result of oncogene activation or loss of cell cycle control characterizes early stages of tumor evolution. This leads to aberrant DNA replication, which triggers a DNA damage response (DDR) involving activation of ATM, Chk1 and Chk2 kinases, and p53, and induction of senescence or apoptosis (9). Activation of DDR and p53 upon oncogenic stress serves to eliminate incipient tumor cells and forms a critical barrier against tumor development. DDR inactivation by mutation in ATM or TP53 allows cell survival and tumor progression. Many TP53 mutations are missense mutations resulting in amino acid substitutions in the DNA-binding core domain and disruption of p53-specific DNA binding and transcriptional transactivation (10). Loss of wild type p53 is associated with increased resistance to chemotherapy.
The high frequency of missense TP53 mutations in human tumors and the fact that mutant p53 often accumulates at high levels in tumor cells make mutant p53 a potential target for improved cancer therapy. Pharmacological reactivation of mutant p53 would restore p53-dependent senescence and apoptosis, and presumably also p53-mediated regulation of metabolism and other processes, and thus eliminate tumors in vivo. Indeed, studies in various mouse models have demonstrated that restoration of wild type p53 expression in vivo leads to rapid tumor elimination (11)(12)(13). This suggests that restoration of functional p53 can trigger tumor cell death and lead to tumor clearance even if a tumor carries multiple genetic alterations that drive tumor growth.
PHARMACOLOGiCAL ReACTivATiON OF MUTANT p53
A growing number of small molecules that can reactivate mutant p53 have been identified over the past 15 years, using either chemical library screening or rational drug design. These include CP31398, PRIMA-1Met/APR-246, PK-083, PK-5174, SCH529074, and NSC319726 (ZMC1). We have previously reviewed this field (14). This review is focused on PRIMA-1 (APR-017) and the structural analog PRIMA-1Met, now named APR-246, both of which were identified in our laboratory. As will be discussed below, both compounds are prodrugs that form the active moiety MQ. We will also highlight the clinical development of APR-246.
We identified PRIMA-1 in a screen of a small structurally diversified chemical library from NCI (Diversity set) for compounds that could induce cell cycle arrest or cell death preferentially in cells expressing mutant p53 (15). Cell growth and viability were assessed by the WST1 assay. PRIMA-1 showed the strongest preference for mutant p53-expressing cells and was selected for further studies. Experiments with antibodies specific for correctly folded wild type p53 (PAb1620) or unfolded mutant p53 (PAb240) revealed that PRIMA-1 could induce refolding of mutant p53 and enhance mutant p53 DNA binding in gel shift assays. PRIMA-1 treatment of tumor cells carrying various mutant p53 resulted in upregulation of p53 target genes such as p21, Bax, and Mdm2, and induction of cell death by apoptosis. Systemic administration of PRIMA-1 in mice carrying Saos-2-His273 tumor xenografts demonstrated significant inhibition of xenograft tumor growth in vivo (15). In parallel, our analysis of available data in the NCI database confirmed that PRIMA-1 preferentially targets tumors cells carrying mutant p53 and has an activity profile that is entirely distinct from those of commonly used chemotherapeutic drugs such as cisplatin and 5-FU (16). Subsequently, the structural analog PRIMA-1Met or APR-246 that has superior permeability properties was identified. APR-246 was shown to synergize with chemotherapeutic drugs, e.g., adriamycin and cisplatin (17).
TARGeTiNG MUTANT p53 BY MiCHAeL ADDiTiON
Our data clearly showed that PRIMA-1 and APR-246 were able to reactivate various forms of mutant p53 and trigger tumor cell apoptosis, but their molecular mechanism of action remained obscure. However, we found that both compounds are converted to methylene quinuclidinone, MQ, a Michael acceptor that can react with soft nucleophiles such as thiols in proteins (Figure 1). The p53 core domain has 10 cysteine residues. Mass spectrometry demonstrated that MQ binds covalently to the p53 core domain (18). Several findings support the notion that MQ binding to p53 is critical for the effect of PRIMA-1 and APR-246. N-acetylcysteine (NAC), a thiol group donor, blocks PRIMA-1induced apoptosis and PRIMA-D (APR-320), a structural analog that cannot be converted to MQ, has no effect on tumor cells at concentrations corresponding to those used for PRIMA-1 and APR-246. Moreover, transfer of MQ-modified mutant p53 protein into p53 null tumor cells induces expression of p53 target genes and cell death by apoptosis (18). These results demonstrate that MQ is the active compound and that MQ-modification of mutant p53 per se is sufficient to induce tumor cell death. Thus, PRIMA-1 and APR-246 are prodrugs that form the biologically active compound MQ (Figure 1). This conversion is spontaneous and occurs over a time frame of a few hours at physiological pH (18). Since MQ is reactive, its administration as a prodrug is probably critical in order to avoid adduct formation with various extracellular targets.
It is interesting to note that MIRA-1, another compound identified in our screen of the NCI Diversity set, is a maleimide with known Michael acceptor activity. Moreover, Kaar and Fersht and their colleagues identified a series of Michael acceptors that bind covalently to both wild type and mutant p53 core domains, resulting in increased protein melting temperature. Analysis of the reactivity of the cysteines in p53 by mass spectrometry . Both compounds form the Michael acceptor methylene quinuclidinone (MQ), which is the active moiety. MQ binds covalently to thiols in mutant p53. MQ also targets thioredoxin reductase (TrxR) and glutathione (GSH). MQ binding to TrxR converts the enzyme to an active oxidase, which generates ROS, and MQ binding to glutathione depletes intracellular free glutathione, which also induces ROS.
revealed preferential reaction with C124 and C141, followed by C135, C182, and C277, and then C176 and C275 (19). These results further support the idea that adduct formation at cysteines can stabilize the native conformation of p53.
Among the 10 cysteine residues in p53's core domain, four -Cys182, Cys229, Cys242, and Cys277 -are exposed on the surface of the protein and accessible for modification in correctly folded p53 (20,21). Presumably, additional cysteines are exposed in unfolded mutant (or wild type) p53, allowing more extensive thiol modification (18). Computational analysis of structural p53 models identified a binding pocket between the L1 loop and S3 sheet in the p53 core domain, containing cysteines C124, C135, and C141 (22). Docking analysis indicated that MQ, as well as other thiol-targeting compounds including MIRA-1, can bind to the L1/S3 pocket. These results were validated in living cells by introduction of a C124A substitution in R175H mutant p53. Indeed, C124A substitution abolished the apoptotic effect of PRIMA-1 in Saos-2 osteosarcoma cells expressing R175H mutant p53.
Thus, APR-246/MQ, MIRA-1 and the compounds identified by Kaar et al. (19) share a common chemical property and presumably promote refolding of mutant p53 by a similar mechanism. The ability to modify cysteines in mutant p53 distinguishes APR-246/MQ from compounds like PK-083 and PK-7088 that bind to a crevice in the Y220C mutant p53 protein and raise its melting temperature (23). APR-246 also has a different mechanism of action than the compound NSC319726 (ZMC1), a zinc chelator that refolds His175 mutant p53 as well as several other mutant p53 proteins (24). Clearly, mutant p53 refolding and reactivation can be achieved by various molecular strategies. Some strategies work for specific mutant forms of p53, whereas other strategies are applicable to a range of mutant p53 proteins.
APR-246 ReACTivATeS MUTANT FORMS OF p53 FAMiLY MeMBeRS p63 AND p73
p53 is a member of a protein family with two other members, p63 and p73 (25). In contrast to TP53, neither the TP63 nor TP73 genes are mutated at any significant frequency in human tumors. However, TP63 missense mutations occur in certain developmental syndromes such as the Ectrodactyly-ectodermal dysplasia-cleft (EEC) syndrome (26). All three proteins share a high degree of sequence similarity in the DNA-binding core domain (25). The 10 cysteines in the p53 core domain are all conserved in both p63 and p73. This raises the question as to whether APR-246 can affect mutant p63 and/or p73 folding and activity. We first examined the effect of APR-246 on human tumor cells carrying exogenous temperature-sensitive missense mutant TP63 and TP73. APR-246 induced the expression of p53/p63/p73 target genes, cell cycle arrest, and cell death by apoptosis in these cells (27). To assess the effect of APR-246 on mutant p63 in a more physiological context, we used human keratinocytes derived from EEC patients carrying R204W or R304W mutant TP63. These two TP63 mutants correspond to the tumor-associated hot spot TP53 R175H and R273H mutants. Treatment with APR-246 led to increased expression of p63 target genes and at least a partial rescue of keratinocyte differentiation (28). Similarly, APR-246 rescued corneal differentiation in iPS cells from EEC individuals (29). Thus, the targeting of mutant versions of the two structurally related transcription factors p63 and p73 by APR-246 leads to entirely different biological responses that recapitulate the normal functions of each protein. These results argue convincingly that the biological effects of APR-246 are mediated by direct binding to mutant p53 or p63 and refolding the mutant proteins into an active conformation.
APR-246/MQ TARGeTS COMPONeNTS OF THe CeLLULAR ReDOX SYSTeM
The observation that MQ can bind to thiols suggested that it might also target thiol-containing redox regulators such as glutathione and thioredoxin. Indeed, we found that APR-246 is a potent inhibitor of thioredoxin reductase (TrxR1), a selenocysteinecontaining enzyme that catalyzes the reduction of thioredoxin (30). APR-246 inhibits the activity of TrxR1 both in vitro and in living cells. This effect is presumably mediated through modification of the selenocysteine residue in TrxR1 by MQ. MQ binding converts TrxR1 into an NADPH oxidase that contributes to ROS production and cell death induced by APR-246 (30). Methylene quinuclidinone has also been shown to bind to cysteines in glutathione (GSH), leading to a decrease in free intracellular glutathione concentrations and increased ROS levels (31,32). Since glutathione can mediate resistance to platinum drugs by conjugation and export, this effect of MQ may at least in part account for the strong synergy between APR-246 and platinum drugs (see below). APR-246 did not inhibit GCLM (regulatory subunit of γ-glutamyl cysteine-synthase) or GSS (glutathione synthetase) in the GSH synthesis pathway, indicating that the observed GSH depletion is not caused by decreased synthesis (32).
Thus, accumulating data on the effects of PRIMA-1/APR-246 on the cellular redox balance demonstrate that these compounds have a dual mechanism of action that targets two Achille's heels of tumor cells: mutant p53 and the redox balance (Figure 2). The targeting of these two pathways may allow more efficient elimination of tumor cells and lower the probability of resistance development. This dual mechanism provides an explanation for reported mutant p53-independent effects of APR-246.
eFFeCTS ON MUTANT AND wiLD TYPe p53
Methylene quinuclidinone can bind to both wild type and mutant p53 (18), and it is conceivable that MQ binding can induce refolding of misfolded wild type p53 in tumor cells. However, available data so far indicate that APR-246/MQ has little toxicity in normal cells. Wild type p53 is expressed at low levels in most normal cells and tissues in the absence of stress, whereas many tumor cells express high levels of unfolded mutant p53. Also, normal cells have a higher capacity to cope with oxidative stress as compared to tumor cells (33). While MQ binding to mutant p53 can restore p53-dependent apoptosis (18), MQ binding to other cellular proteins may not necessarily have major effects on cell growth and survival, except for binding to TrxR and GSH and possibly other anti-oxidative proteins, as discussed above. The benign safety profile of APR-246 observed in the first clinical study (34) is consistent with the lack of major toxicity in normal cells.
Interestingly, the response of wild type and mutant TP53carrying tumor cells to MQ is enhanced by hypoxia (35). Hypoxia (≤1% oxygen) increased the sensitivity of SKBR3 cells (R175H mutant TP53) to PRIMA-1 treatment. In MCF-7 cells (wild type TP53), chemical hypoxia induced by CoCl2 led to accumulation of unfolded wild type p53, as assessed with the monoclonal antibody PAb240, and enhanced sensitivity to PRIMA-1. Presumably, this is due to MQ binding and refolding of unfolded "mutantlike" wild type p53 into an active conformation. The finding that hypoxia can potentiate the efficacy of PRIMA-1 has important clinical implications. Due to insufficient blood supply, rapidly growing tumors in vivo are often hypoxic, and it is conceivable that this could enhance the therapeutic efficacy of APR-246, both in wild type and mutant TP53-carrying tumors.
SYNeRGY wiTH CONveNTiONAL CHeMOTHeRAPeUTiC DRUGS AND NOveL eXPeRiMeNTAL DRUGS
A major hurdle for achieving efficient elimination of tumors and long-term cancer cure is the rapid development of therapy resistance. There are numerous mechanisms for such resistance, including enhanced DNA repair and increased efflux of chemotherapeutic drugs from the tumor cell (36). The problem of resistance is relevant not only for conventional chemotherapeutic drugs but also for targeted drugs, as exemplified by resistance development in CML upon treatment with the novel drug imatinib (Gleevec) that inhibits the BCR-ABL kinase (37). Therefore, it is important to explore possible synergies between APR-246 and conventional anticancer agents, novel targeted drugs, and experimental drugs.
The DNA damage caused by chemotherapeutic drugs such as cisplatin and doxorubicin induces tumor cell death to a large extent via wild type p53 activation and p53-induced apoptosis. Accordingly, tumor cells carrying mutant p53 or completely lacking p53 are often more resistant to conventional chemotherapy. This suggests that restoration of wild type p53 function by APR-246 might synergize with, for example, cisplatin. Indeed, we and others have demonstrated strong synergy between APR-246 and chemotherapeutic drugs such as cisplatin, 5-fluorouracil (5-FU), and doxorubicin in mutant p53-carrying lung, ovarian, and esophageal cancer cells (17,31,38,39). Synergistic effects have also been observed in vivo upon systemic administration (17,39).
There are several possible reasons for the observed synergy (Figure 2). First, as alluded to above, restoration of wild type function to mutant p53 by APR-246 might increase sensitivity to chemotherapeutic drugs that depend on wild type p53 for induction of efficient tumor cell apoptosis. Second, treatment with cisplatin, adriamycin, or 5-FU leads to accumulation of mutant p53 (17,39), which is expected to enhance the effect of APR-246. Third, we and others found that APR-246, via MQ, depletes intracellular GSH levels (31,32). Since formation of adducts with GSH and extracellular export is one mechanism of cisplatin resistance, MQ-mediated GSH depletion is likely to sensitize tumor cells to cisplatin. Fourth, MQ-mediated inhibition of TrxR and conversion of the enzyme to an active oxidase (30) should induce ROS levels, which will further enhance DNA damage and p53-dependent cell death. Inhibition of TrxR will also negatively affect the activity of ribonucleotide reductase, needed for providing deoxyribonucleotides for DNA replication and repair (40).
In contrast to the mutant p53-dependent synergy of APR-246 with cisplatin and 5-FU, the synergy between APP-246 and epirubicin was p53-independent in esophageal cancer cells (39). This could be due to the redox effects of APR-246, including inhibition of TrxR and/or GSH depletion. Cisplatin and 5-FU, but not epirubicin, induced the expression of mutant p53 (39). Synergy has also been observed between APR-246 and the experimental compound RITA in AML cells. This synergy could arise from increased levels of mutant p53 upon induction of DNA damage by RITA (41). In addition, APR-246 synergized with daunorubicin in AML cells carrying wild type p53 (41). Synergy in the absence of mutant p53 may be due to APR-246-mediated redox effects. However, as discussed above, MQ can also bind to wild type p53 and restore an active conformation under hypoxic conditions. There is evidence suggesting that wild type p53 may occur in a misfolded conformation in some tumors, e.g., B-CLL (42) and AML (43). This raises the possibility that refolding of wild type p53 by APR-246 may be responsible for synergy with chemotherapeutic agents in AML cells.
PRIMA-1 at 50 μM induced G2/M phase accumulation of parental mouse L1210 leukemia cells carrying mutant p53 but had only a minor effect on the cell cycle distribution of Y8 cells, a subline of L1210 that lacks p53. A striking synergistic induction of necrosis was observed in L1210 cells upon combination treatment with PRIMA-1 and the cyclin-dependent kinase inhibitor flavopiridol. However, in Y8 p53 null cells, combination of PRIMA-1 and flavopiridol caused a synergistic increase in apoptosis (44). Thus, combination treatment with PRIMA-1 (or presumably APR-246) can lead to cell death through alternative routes, depending on the presence or absence of mutant p53.
Mutant p53 reactivation by APR-246 leads to induction of the p53 target and antagonist Mdm2 (15), which promotes p53 degradation by the proteasome. Therefore, it is conceivable that inhibition of Mdm2-p53 binding and Mdm2-mediated p53 degradation might potentiate the effect of APR-246. Indeed, strong synergy was observed between PRIMA-1 and the Mdm2 inhibitor Nutlin-3 in pancreatic cancer cells (45). Moreover, gene therapy with the tumor suppressor gene FHIT (fragile histidine triad), whose gene product has been shown to inactivate Mdm2 (46), resulted in synergistic inhibition of tumor growth in combination with APR-246 (47). Since several compounds that disrupt p53-Mdm2 binding are now being tested in the clinic, these results may have profound implications for the future clinical use of both APR-246 and inhibitors of p53-Mdm2 binding.
CLiNiCAL DeveLOPMeNT
APR-246 has been tested in a first-in-man phase I/IIa clinical trial in patients with hematological malignancies or hormonerefractory prostate cancer (34). The main aim was to determine maximum tolerated dose (MTD) of APR-246 and to assess safety and pharmacokinetic properties. Patients were not preselected based on TP53 mutation status. The treatment regimen was 2-h infusion of APR-246 for 4 days. Overall, the study showed that APR-246 is well tolerated and only relatively minor and transient side effects were observed, including dizziness, fatigue, headache, nausea, and confusion. MTD was defined as 60 mg/ kg. Plasma concentrations of APR-246 reached 250 μM, well above concentrations required for robust induction of tumor cell apoptosis in cell culture experiments. Analysis of isolated patient leukemic cells by FACS revealed induction of p53 targets Bax and Puma upon APR-246 treatment, and microarray analysis showed substantial alterations in gene expression, including genes associated with cell cycle regulation and cell death, consistent with the proposed mechanism of action. Furthermore, one AML patient carrying V173M mutant TP53 showed a significant reduction in bone marrow blasts, and one patient with a TP53 splice site mutation had a minor response according to CT scan. Thus, APR-246 is safe and shows signs of clinical activity. APR-246 is currently being tested in combination with carboplatin and pegylated doxorubicin in a phase Ib/II clinical study in HGS ovarian cancer, a tumor type with a 95% frequency of TP53 mutations (see www. clinicaltrials.gov).
FUTURe PeRSPeCTiveS
The development of efficient mutant p53-reactivating anticancer drugs is expected to have a major impact on public health globally, given the high frequency of TP53 mutations in a wide range of human tumors. In certain tumor types, TP53 is mutated in the great majority of the cases. In general, clinical studies have shown that mutant TP53-carrying tumors respond less well to conventional chemotherapeutic drugs and have worse prognosis than wild type TP53-carrying tumors. The ongoing phase Ib/II clinical study with APR-246 will provide solid data on clinical efficacy in combination with standard chemotherapy. Importantly, the mechanism of action of APR-246 -i.e., dual targeting of both mutant p53 and the cellular redox system -suggests that APR-246 will synergize with many DNA-damaging chemotherapeutic drugs, and such synergy has been confirmed in a number of published studies. An important goal for further studies is to assess clinical efficacy in combination with relevant chemotherapeutic and targeted drugs in various tumor types. Ultimately, APR-246 may allow greatly improved therapy of a wide range of tumors that carry mutant TP53.
AUTHOR CONTRiBUTiONS
KW contributed to writing the manuscript and preparing the figures, communicated with the journal editor, and submitted the manuscript. VB, QZ, MZ, SC, and LA contributed to writing the manuscript and preparing the figures.
FUNDiNG
We thank the Swedish Cancer Fund (Cancerfonden), The Swedish Research Council (Vetenskapsrådet), Radiumhemmets Forskningsfonder and Karolinska Institutet for generous support. KW is a recipient of a Distinguished Professor Award (DPA) from Karolinska Institutet. | 5,640.6 | 2016-02-03T00:00:00.000 | [
"Chemistry",
"Medicine"
] |
ICT Adoption and Stock Market Development: Empirical Evidence Using a Panel of African Countries
: The aim of this study was to examine the impact of adopting information and communication technologies (ICT) on the development of African stock exchanges. The study examined a panel of 11 African stock exchanges for the period 2008–2017 and employed the generalised method of moments (GMM) to estimate the results. The results of the study documented that ICT adoption had a positive impact on stock market development in African countries. Firstly, it was found that the stock market traded volume and mobile–telephone user variables were positively related. Secondly, a positive relationship was also proven between the stock market traded volume and the broadband user variable. Thirdly, a positive relationship was documented between the stock market capitalisation variable and the fixed telephone user variable. Fourthly, the research findings confirmed a positive relationship between the stock market turnover ratio and the fixed telephone user variable. The findings of this study imply that policymakers should be more resolute when formulating ICT policies. ICT adoption can spur stock market development which in turn can propel economic growth, resulting in the economic prosperity of the African countries. Moreover, ICT adoption could enhance the integration of African stock exchanges, further buttressing the drive towards the common market areas in various regions.
Introduction
Developments in ICT have affected livelihoods and various other aspects of human activities and interactions in the last few decades. The world has transformed into what could aptly be described as an information society (IS) in virtually all human activities, with ICT as the main driver of the transformation process (Cortés and Navarro 2011). Further, they contended that the transformation force has permeated all strata of human settings, such as households, firms and governments at the local, regional, national and international levels. This assertion by Cortés and Navarro (2011) clearly indicates that ICT plays a very vital role in every sphere of our daily lives. The role that the Internet and mobile phones currently play in every human interaction can never be overemphasised.
Among the aspects of human activities and interactions that have been tremendously affected by ICT is the development of African stock markets. This was reflected in a study of stock markets development and integration in South African Development Community (SADC) member countries, carried out by Bundoo (2017), that highlights the importance of ICT in stock market development. Furthermore, Bundoo (2017) concluded that the SADC member states' stock exchanges must work towards a greater integration so that they can attract more capital portfolio flows. Moreover, Bundoo (2017) also concluded that greater foreign direct investment (FDI) flows, which are much needed for the financial and economic development of the SADC countries in particular and Africa in general, will be attracted through greater stock market integration. Therefore, by implication, the study emphasised the importance of ICT adoption for the development of African stock markets. Solarin et al. (2019) assert that stock markets are considered as one of the most crucial aspects of a market economy, in the sense that, on the one hand, they make it possible for firms to gain access to capital. On the other hand, stock markets enable investors to have a share of ownership in the listed firm, based on the firm's expected performance in the future. According to Adu et al. (2013), the stock market is considered as one of the most crucial aspects of a financial system. This is in light of the fact that, through the stock market, listed firms can elicit capital by issuing their shares and at the same time bring about an environment through which the same issued shares can be freely traded by market participants. Hence, more recently, a growing strand of literature has focused on stock market development as the main factor for economic growth (see for instance Tsaurai 2018;Bundoo 2017;Okwu 2016). More recently, studies in this realm have identified ICT as one of the determinants of stock market development.
Notwithstanding the gains enjoyed by African stock markets in the last decade, African stock exchanges still face the challenge of integration, especially in the wake of the newly signed African free trade agreement. Moreover, there is need for building the technical requirements and developing institutional capacity to resolve the problem of low liquidity faced by most African stock exchanges (Yartey and Adjasi 2007). Moreover, Schwab (2019) documented the following as some of the major pillars of global competitiveness: ICT adoption, macroeconomic stability and the financial system. Further, Schwab (2019) reported that sub-Saharan Africa recorded an increase of 15.8% in ICT adoption, while Europe and North America had an increase of 3.7% in ICT adoption. However, the corresponding increase in macroeconomic stability was 3.7% for sub-Saharan Africa while Europe and North America had an increase of 0.9%. Arguably, ICT adoption has a bearing on the financial system and by extension on global competitiveness. Investors, policymakers and market participants require adequate studies on the role of ICT adoption in stimulating stock market activity.
African stock exchanges have undergone several reforms in order to attract more portfolio flows over the years. By and large, they all have made ICT adoption one of the major developmental factors of the reforms they experienced over the period of their existence. The adoption of ICT technologies has mainly included the adoption of automated trading systems by the stock exchanges. Some of the stock exchanges have also become integrated, in a sense allowing the dual or multiple listing of shares. On the demand side, there has been an adoption of technologies such as mobile phones, Internet and broadband by consumers. Arguably, this has led to an increase in stock market transactions, as consumers are now able to transact in the comfort of their homes and at the click of a button. In essence, ICT adoption by consumers confers convenience, which make them purchase shares easily. Therefore, the above foregoing demonstrates the importance of ICT to the sustenance of a stock exchange.
Notwithstanding, extant studies have focused on the stock market development and economic growth nexus. There is dearth in literature that has examined the role of ICT in fostering stock market development. The theoretical foundations of the study are anchored on the Schumpeterian growth models (which focuses on the finance-growth linkage) and the ICT adoption theories (the technology acceptance model, diffusion of innovations, the unified theory of acceptance and use of technology, the model of the IT implementation process and the information systems success model). Against this backdrop, the present study sought to examine the impact of ICT adoption on stock market development in Africa. The following hypotheses were empirically tested in this study: H 0 : ICT adoption has no significant impact on stock market development in Africa H A : ICT adoption does not have a significant impact on stock market development in Africa The rest of the paper is organised as follows: Section 2 reviews the related literature of this study. Section 3 presents an overview of the stock markets in Africa. Section 4 describes the research methodology employed in this study. Section 5 presents and discusses the research findings and Section 6 concludes the paper.
ICT Adoption and Financial Markets
Information and communications technology (ICT) is an umbrella term that includes any communication device or application such as radio, television, cellular phones, computer, satellite systems, network hardware and software as well as the various devices and applications associated with them such as videoconferencing and distance learning (Okwu 2016). ICT is centred on computer applications, telecommunications equipment and infrastructure to generate, process, store, retrieve, transmit and manipulate data or information in the context of businesses or other transactions.
There is an abundance of studies that have examined the role and application of ICT in economic activity. Parida et al. (2009) contended that ICT is an effective tool that can be used for improving external communications and delivering quality service to customers. According to Fulantelli and Allegra (2003), ICT offers a wide range of possibilities for improving their competitiveness and provides mechanisms for getting access to new market opportunities and specialised information services in organisations.
With the development and spread of ICT, and its application to various fields and activities, several studies have been carried out in order to test existing theories and ensure better understanding about its diffusion, adoption, acceptance and usage (Mun et al. 2006;Venkatesh et al. 2003;Rogers 2003). These include the technology acceptance model (TAM), diffusion of innovations (DOI), the unified theory of acceptance and use of technology (UTAUT), the model of the IT implementation process and the information systems success model theories. Chinn and Fairlie (2010) used panel data analysis techniques to explore the determinants of cross-country disparities in personal computer and Internet penetration in developed and developing countries. The results showed evidence that income, human capital, youth dependency ratio, telephone density, legal quality and banking sector development are associated with technology penetration rates. They found the main factors responsible for low rates of technology penetration in developing countries to include disparities in income, telephone density, legal quality and human capital.
Owusu-Agyei et al. (2020) employed a panel of 42 sub-Saharan African (SSA) countries for the period 2000-2016 to investigate the relationship between ICT adoption, human capital development, economic freedom and financial development. They found that Internet use had a positive impact on different measures of financial development. Further, their results revealed that subsamples of SSA countries differ on their levels of human capital development and economic freedom. Chien et al. (2020) investigated the effects of information and communication ICT diffusion on financial development for 81 countries over the period 1990-2015. They found that, comparing the different effects of ICT on financial development between the highincome group and the middle-and low-income groups, telephone and Internet positively influenced both groups' financial development, whereas mobile cellular caused a negative effect in high-income countries, but a positive effect in middle-and low-income countries. Secondly, they documented that the growth of the Internet and telephones raises the financial development in all regions, while mobile cellular growth positively affects financial development only in Africa. Cheng et al. (2021) examined the relationship between financial development, ICT diffusion and economic growth by considering the interlinkage of finance and ICT. They employed a panel of 72 countries from 2000 to 2015 and found that financial development was always unfavourable for economic growth. Secondly, the results of their study documented that ICT diffusion can improve economic growth in high-income countries, but for middleand low-income countries, only mobile growth could raise economic growth, whereas increasing Internet could not. Finally, the results of Cheng et al. (2021) documented that the interaction effects between ICT and financial development are positive in both income level countries. Mignamissi (2021) analysed the influence of the digital divide on the new IMF financial development index on a panel of 34 African countries for the period 2005-2017 and found that the ICT divide was a severe handicap for the financial systems development in Africa. Furthermore, we found that the digital divide between countries is also a severe handicap for the financial development of countries lagging behind. In essence, it was found that countries with a technological lead have relatively developed financial systems. Ejemeyovwi et al. (2021) empirically investigated the interaction of ICT adoption and innovation, and the role of this digitalisation interaction on financial development in Africa and across the subregions. The results of their study documented that ICT-innovation interaction shock positively drives financial development. They reasoned that this implies that, for multinational corporations (MNCs) and other economic agents, the ICT-innovation interaction should be strongly applied across all sectors to drive financial development, since all sectors require finances to improve performance.
ICT Adoption and Stock Market Development
Measuring the relevance of ICT adoption to stock market development has become important not only for industrial and marketing purposes but also for policymakers, who should formulate effective measures to overcome the existing, and even growing, digital inequalities. Notwithstanding, extant studies have invariably examined ICT adoption, either in relation to economic growth or some other features of finance and banking development. Therefore, it will be pertinent to note that ICT has been studied relative to various aspects of human activity. An examination of the literature implies that the effects of ICT on stock market development in Africa is not yet well researched and documented.
Although there is a growing strand of literature that has examined the link between ICT in relation to stock market development and performance, research that focuses on emerging countries is very limited. Amongst others, Ngassam and Gani (2003) explored the link between ICT and stock market development in emerging markets and high-income economies. The study found that personal computers and Internet hosts have strong effects on stock market development. Credit to the private sector and market capitalisation were also found to exert significant positive effects on stock market development. Irving (2005) explored a historical and descriptive approach, as well as progress and prospects perspectives, to assess the possibility of remedying the situation via regional cooperation and integration of stock exchanges in eastern and southern African regions. Identifying ICT-induced diversified risks, efficiency and competition, higher returns, liquidity and cross-border capital flow as potential benefits of such networking, the study adduced that the exchanges stand to benefit more from closer cooperation by encouraging more cross-border and information/technology sharing. Lattemann (2005) set out to investigate whether the process of communication and interaction between investors and boards of companies had an impact on stock market return by examining the actual penetration of ICT into the 'external' corporate governance of Germany's publicly listed companies. The findings suggested that stock market returns were negatively related to ICT usage. Yartey (2006) investigated the role of financial development in explaining ICT diffusion during the 1990-2003 period. The study documented that financial development is an important determinant of ICT development. Ezirim et al. (2009) examined the effects of information technology on the growth and development of the capital market in Nigeria for the period 1998-2007. They found that the level of ICT-facilitated interaction between stockbrokers and investors significantly affected the growth of market capitalisation and the volume and value of shares traded. However, it was found that information technology does not significantly affect a number of listings and government bonds. Whereas Farhadi and Rahmah (2011) employed a sample of industrialised countries for the period 1990-2008 to analyse the effects of ICT on economic growth and documented a positive relationship. Bhunia and Ghosal (2011) investigated the impact of ICT on the growth of the Indian Stock Exchange and found that most of the stock market development indicators were significantly affected by ICT adoption, especially the number of stockbrokers. In a rela-tively more broad-based study, Zagorchev et al. (2011) employed a panel cointegration methodology to examine the dynamic relationship among financial development, ICT and GDP per capita in 86 sample countries. They found that personal computers and GDP per capita increase the liquidity, size and activity of financial systems. They also established that Internet and GDP per capita improve the liquidity, size, stock trading and activity of the financial markets. Dolatabadi et al. (2013) employed correlational and regression analysis techniques to study the impact of information technology development on stock market development in the world's leading capital markets. Their analysis showed that market capitalisation, turnover ratio and values of shares traded have a direct relationship with the ICT adoption components in the study, but they found no relationship between the ease of access to local markets and ICT development. Farid (2013) sought to ascertain whether African stock markets can improve their informational efficiency by formally harmonising and integrating their operations on a common platform. The study showed that institutional deficiencies and openness to trade have a negative impact on economic growth. Further, the study documented that those African economies that were more open to international capital flows did not seem to grow faster than the rest.
Okwu (2015) set out to investigate the effect of ICT adoption on financial markets by employing two leading stock exchange markets in Africa, namely the Nigerian and Johannesburg Stock Exchanges, as a unit of analysis. The findings of the study showed that ICT adoption had heterogeneous effects during the study periods. The use of the Internet had negative effects on the market indices, except for capitalisation. Further, Okwu (2016) explored the ICT and stock market nexus in Africa using evidence from Nigeria and South Africa. The study reported mixed findings. Specifically, the effect of a mobile telephone on all market indicators was found to be positive and significant.
In a somewhat broader dimension to the study of ICT in relation to business and economic outcomes, Donwa and Odia (2010) set out to determine the impact of the Nigeria stock market on socio-economic development for the 1981-2008 period. They found that the market indicators significantly enhanced economic growth. Marszk and Lechman (2021) explored the linkages between ICT penetration and the development and expansion of financial innovation on stock exchanges in ten European countries for the period from 2004 to 2019 and found that ICT spreads evenly in all the countries, laying solid foundations for the development of innovative financial products. Further, the results of their study documented that ICT positively influenced the diffusion of ETFs, regardless of the other possible determinants considered. Igwilo and Sibindi (2021) examined the causal relationship between ICT adoption and stock market development by employing a panel of 11 African stock exchanges for the period 2008-2017. They applied the panel ARDL bounds testing procedure to test for cointegration and examine the causal relationship between ICT adoption and stock market development. The results of their study documented that ICT adoption and stock market development were cointegrated in the long term. Further, the results of their study documented a bidirectional causal relationship (complementarity) between ICT adoption and stock market development. Igwilo and Sibindi (2021) also established a causal relationship running from financial freedom to stock market development.
The review of the empirical studies leads us to distil the key findings as documented in Table 1. These are the basis of our variable selection in the research methodology section. (2015) Personal computers and GDP per capita increase the liquidity, size and activity of financial systems.
An Overview of the African Stock Markets
The African stock markets have witnessed sustained growth over the years. This section presents the key metrics of eleven African stock exchanges which form the unit of analysis of this study. The key metrics are presented in Table 2. Suffice to highlight that, on the one hand, JSE is the most developed whilst, on the other hand, the BRVM is the least developed, as evidenced by their market capitalisation.
The JSE was formed in 1887. It is sub-Saharan Africa's oldest stock exchange. Further, the JSE is the most highly developed in sub-Saharan Africa. The JSE was formed during the gold rush of the late 1800s. This is notable, as Johannesburg is also called gold city and the gold capital of South Africa. Furthermore, in the early 1990s, the JSE upgraded its trading platform to an electronic trading system. Then, in 2005, the JSE demutualised and also listed on its own exchange (Johannesburg Stock Exchange 2019).
The notable ICT-driven improvement towards stock market development, regional cooperation and integration among the 14 South African Development Community (SADC) member states initiated by the JSE was aptly documented by Irving (2005). The initiatives include the harmonised stock exchange listing requirements. In 2000, based on the 13 principles of the JSE's listing requirements, the JSE's electronic trading system, known as the Johannesburg Equities Trading (JET) system, was installed. In 2002, the JSE adopted the London Stock Exchange's trading system technology, which is known as the Stock Exchange Electronic Trading System (SETS). In addition, the London Stock Exchange (LSE) provided technical support and trading system upgrades and enhancements that enabled brokers in both South Africa and the United Kingdom to access one another's stock markets. The Namibia Stock Exchange (NSX) was founded in 1904 during the diamond rush at that time. However, within six years the diamond rush ended and the stock exchange was closed. In 1992, NSX was relaunched with funds contributed by 36 leading businesses in Namibia. The companies contributed USD 10,000 each, as start-up capital for the exchange. Moreover, the Namibia Stock Exchange (NSX) in 1998, via a telecommunications link to the JSE, and in 2002 joined the JSE in adopting the LSE's trading system technology, which is known as Stock Exchange Electronic Trading System (Namibia Stock Exchange 2019).
The Nigerian Stock Exchange (NSE) was founded in 1960, and like most African stock exchanges it went through several reforms from inception. The NSE is the largest stock exchange in West Africa and serves the largest African economy (Nigerian Stock Exchange 2019). Among the several ICT-related developments and reforms in the Nigerian Stock Exchange (NSE) are the introduction, in 1997, of the automated clearing, settlement and delivery system-the Central Securities Clearing System (CSCS)-to ease transactions and foster investors' confidence in the stock exchange. Further, performance information on the NSE was linked to the Reuters International System for the timely dissemination of relevant market information to subscriber investors (Obiakor and Okwu 2011). The CSCS enables shares to exist in electronic form in a central depository and, thus, helps eliminate risks of the loss, mutilation and theft of certificates, as well as reduce errors and delivery delays. Other ICT adoptions include the CSCS trade alert, phone-in-service, e-bonus and e-dividend payments (Ezirim et al. 2009).
Measures of ICT Adoption and Stock Market Development
This study focused on the impact of ICT adoption and stock market development in Africa. A panel of eleven stock exchanges for the period from 2008 to 2017 was employed in this study. The ICT and stock market development data were sourced from the International Telecommunications Union and the World Bank Global Financial Development databases, respectively, whilst the data for the control variables of GDP and financial freedom were sourced from the World Bank Global Financial Development and Heritage Foundation databases, respectively. The variables employed in this study are described in Table 3. These are the ICT adoption variables, the stock market development variables as well as the control variables. Panel data techniques were used to analyse the data.
Empirical Model Specification and Estimation Techniques
This study adopted and modified the model of Okwu (2015) on ICT adoption and stock markets. The following models were specified to test the relationships between the stock market and ICT adoption variables: The Equations (1)-(4) specified above pose an issue when estimated using the ordinary least squares (OLS) method. There is the problem of endogeneity. To ensure that the estimated results were robust, the system-GMM and feasible generalised least squares (FGLS) estimators were also applied in estimation.
Generalised Method of Moments
The dynamic model was specified as follows: where: Y = Stock market development proxies, proxied by the number of listed firms (NLC), stock market capitalisation (SMC), the stock market value of shares traded (SMTV), the stock market turnover ratio (SMTR) and the stock market development index (FINDEX); X = A vector of explanatory variables (other than lagged stock market development); µ = An unobserved country-specific effect; ε = The error term, and the subscripts i and t represent the country and the time period, respectively.
Taking the first difference of Equation (5), it can be parameterised as follows: The GMM, Equation (6), is therefore specified as follows: α 0 , θ 0 , γ 0 and β 0 = Each model's intercept, respectively; α i , θ i , γ i and β i , where i = 1, 2, 3 and 4 and represent the coefficient of the model explanatory variables, while the time invariant country specific effects are captured by µ i , whilst ε it the error term and ∆ is the difference operator.
Research Findings and Discussion
This section presents the research findings of this study. It first presents the summary statistics of the variables employed in the study. Secondly, it analyses the correlations amongst the variables employed in the study. It progresses to discuss the diagnostic tests that were undertaken to ensure that the models estimated were well specified. Lastly, it presents the panel regression results and then discusses the inferences thereof. Table 4 presents the summary statistics of the key variables. First, considering the variables of stock market development, the number of listed companies (NLC) in African countries has a mean of 8.28. This means that there are eight listed companies for every ten thousand persons on average among all the countries adopted for this study. Stock market capitalisation to GDP, on the other hand, has a mean of 49.12 for the sample of African countries, which indicates that on average the stock market capitalisation to GDP of the selected countries is 49.12, and when compared to that of USA, which is 148, it becomes clear that there is growth potential in African stock markets. The stock market total value traded to GDP assumed a mean of 9.98, which indicates that on average the total value of shares traded as a percentage of GDP was 9.98%, while the minimum was 0.14% and the maximum was 123.25% for the African stock exchanges selected for the period of the study. The stock market turnover ratio as a percentage assumed a mean of 12.36%, which means that on average African stock exchanges' value of shares traded in relation to the stock market capitalisation was 12.36% at a particular period. Furthermore, the ICT adoption variables has the following: the number of broadband users (NBU) has a mean of 507,576, which indicates that on average the number of broadband users in the countries selected for this study was 507,576. The number of fixed telephone users (NTFU) assumed a mean of 1,448,491, which showed that on average the number of fixed telephone users in the African countries selected for this study was 1,448,491, the minimum number of users among the selected countries was 35,000 and the maximum number of users that any of the selected countries had was 11,900,000 users. Internet users as a percentage of population (UI) had a mean of 27.66%, indicating that on average the African countries selected for the study had an Internet penetration level of 27.66%, while the minimum Internet users were 1.9% and the maximum users in the countries of interest was 61%, while the standard deviation was 16.96%. The control variables had the financial freedom index mean of 0.52, which indicates that the African countries selected for this study had a level of financial freedom of 52%. The minimum or lowest financial freedom index ranking for the selected countries was 30% and the highest ranked country on the financial freedom index had 70%. The GDP assumed a mean of USD 121,000,000,000, which is the average GDP of the selected countries. The lowest GDP among the countries was USD 8,490,000. The highest GDP value was USD 568,000,000,000.
Correlation Analysis
The correlation matrix is presented in Table 5. There are a number of relationships that are noteworthy. By and large, the stock market development measures are positively associated with ICT adoption measures as well as the financial freedom variable. This is in line with a priori expectations. Firstly, the stock market capitalisation variable (SMC) exhibits positive association with all four measures of ICT adoption. This implies that the higher the level of ICT adoption, the higher the stock market capitalisation. The highest degree of association of the stock market capitalisation variable is observed in its relationship with the number of fixed telephone users variable, with the level of association of 37.2% and which is highly significant. The stock market capitalisation variable is also positively associated with the financial freedom measure. This means that the higher the degree of financial freedom, the higher the stock market capitalisation. Secondly, the stock market turnover ratio (SMTR) variable is positively associated with a number of ICT adoption measures, namely broadband, fixed telephone and mobile-telephone.
Its degree of association is highest with the fixed telephone variable, explaining 85.3% in the relationship, whilst the broadband variable has a 53.2% explanatory power. Thirdly, the stock market total value traded variable is positively associated across all four measures of ICT adoption. This implies that the higher the level of ICT adoption, the higher the value of the transactions traded on the stock exchanges. Fourthly, the number of listed companies variable is positively associated to the Internet users variable, and the financial freedom variable. This is in line with expectations. Lastly, all the stock market development variables are positively correlated to the gross domestic product variable.
All the associations are highly statistically significant. This lends credence to the view that stock market development fosters economic growth.
Diagnostic Tests
In examining the impact of ICT adoption on stock market development in Africa, a battery of diagnostic tests were conducted to choose the most fitting estimator to run each model. We took a cue from Magwedere (2019) and Makoni (2016) in the estimation and applied a number of diagnostic tests. These tests encompassed the following: a test for poolability of data by employing the applied Chow (1983) test; the Breusch and Pagan (1980) LM test for random effects; the Hausman (1978) specification test and the modified Wald test for group-wise heteroscedasticity; the Sargan-Hansen test for over-identifying restrictions and the Arellano and Bond (1991) (AR) test for autocorrelation. These tests enabled us to ensure that the estimated model was not mis-specified and that the estimations were consistent.
The pre-estimation tests conducted in estimating the four models affirmed the poolability of the data, the presence of random effects and favoured the use of the fixed effects over the random effects estimator. The tests also confirmed the presence of group-wise heteroscedasticity. As such, the estimation was conducted within the framework of the generalised method of moments, which are efficient in the presence of heteroscedasticity. The Sargan-Hansen and Arellano-Bond tests were relied on to ensure that the estimated models were stable. The diagnostics tests for estimating the four models are appended as Appendix A in Tables A1-A4. As such, three estimators were used to test the relationship. The fixed effects model was the base estimator. For inference, the system-GMM and FGLS estimation results are used, as these yield consistent standard errors in the presence of heteroscedasticity.
The results imply that the higher the number of broadband and mobile users, the higher the demand for shares, which results in increased trading volumes. The findings are consistent with that of Ashraf and Joarder (2009) and Mwalya (2010), who documented a positive relationship between ICT and the stock market traded volumes. In a study on the influence of ICT on the returns on stock and volume of trade on the Nairobi stock exchange, Mwalya (2010) documented that the adoption of information and communication technology increased the mean of daily trade volume and return. The estimation results reveal a negative relationship between the stock market volumes traded variable and the economic growth variable. This is contrary to the presumed relationship.
The second model (Model 2) that was estimated was on the relationship between stock market capitalisation and ICT adoption and control variables. Similarly, the stock market capitalisation variable is persistent over time as it is highly positively related to its lagged value. The results of the estimation system-GMM and FGLS estimators establish that stock market capitalisation is positively related to the fixed telephone user variable. This lends credence to the notion that ICT adoption has a significant and positive effect on stock market development. This finding also resonates with the findings of Leff (1984) as well as that of Aker and Mbiti (2010), who reported a positive relationship between the ICT and stock market capitalisation variables. Aker and Mbiti (2010), in their study, examined the mediating relationship between the spreading of ICT and economic development by investigating specifically how the spreading of ICT, such as broadband internet services and mobile telephone, have influenced a country's market capitalisation. Their results indicated that the number of Internet users, mobile cell subscriptions and fixed broadband subscriptions per 100 individuals each have a statistically strong and positive effect on market capitalisation.
The stock market turnover ratio was employed as the dependent variable in the estimations of the third model (Model 3). The only noteworthy finding is that the number of fixed telephone users variable had a positive and significant effect on this metric. This is similar to the finding on the estimation of the second model.
The fourth model (Model 4) that was estimated was on the impact of ICT adoption measures on stock market development proxied by number of listed companies per 10,000 people. The estimations did not yield any significant results to report on. It could be that the number of listed companies is not a good proxy for stock market development. The other salient finding to report on is that the financial freedom variable does not seem to have a significant effect on the stock market development. The financial freedom variable measures the extent of regulation of these markets. The a priori expectation was that highly regulated markets were bound to stifle portfolio flows and thus impede the development of stock markets. However, in the case of African stock markets, this seems not to be a deterrent.
Conclusions
The primary aim of this study was to examine the impact of ICT adoption on stock market development in Africa. On the one hand, stock market development was proxied by four measures, namely stock market capitalisation, the number of listed companies, the stock market value traded and the stock market turnover ratio variables. On the other hand, ICT adoption was proxied by four measures, which included the fixed telephone user, mobile-telephone user, broadband user, and Internet user variables. By and large, a positive relationship was established between ICT adoption and stock market development measures. Firstly, it was documented that a positive relationship subsisted between the stock market traded volume and mobile-telephone user variables. Secondly, a positive relationship was also proven between the stock market traded volume and broadband user variables. Thirdly, it was established that the stock market capitalisation variable was positively related the fixed telephone user variable. Fourthly, the research findings confirmed a positive relationship between the stock market turnover ratio and fixed telephone user variable. The results of this study did not find any tangible evidence on the effect of the level of regulation of financial markets on stock market development in Africa. As such, it could be reasoned that, notwithstanding the highly regulated stock markets in many African countries, this does not seem to stifle stock market activity.
The contribution of the study lies in that, hitherto, no panel study had been conducted to examine the link between ICT adoption and stock market development. This study has demonstrated the impact of ICT adoption on stock exchanges and on the economy in general. Specifically, the study documented that ICT adoption has a positive and significant impact on stock market development in Africa. As such, policy makers must continue to create an enabling environment for ICT adoption and investment in order to induce economic growth. Therefore, if African governments promulgate ICT policies that promote investments in the improvement of Internet services, broadband and telephone (mobile and fixed) infrastructure, this will spur stock market development. For example, African governments can remove restrictions on the repatriation of dividends or profits on ICTrelated investments by foreign investors. Further, they can avail other incentives such as tax holidays for specific periods or allowing tax credits for companies that invest in ICT-related infrastructure.
There are two main limitations and caveats to this study that we need to highlight. Firstly, it was beyond the scope of this study to test the effects of the business cycle (namely the 2007-2009 global financial crises and the COVID-19 pandemic) on the relationship between ICT adoption and stock market development. Secondly, the dataset employed in the study only extended up to 2017. As such, for robustness checks, future studies could extend the study period and also examine the effect of business cycles on the relationship between ICT adoption and stock market development. Further research in this realm could also investigate the impact of ICT adoption on the development of African stock markets, especially in this transformational regime of regional cooperation birthed by the signing of Africa Continental Free Trade Agreement by most major African countries. Arguably, this could serve as a catalyst for the integration of stock exchanges and the attendant benefit of increasing the international competitiveness of African stock markets in general.
Conflicts of Interest:
The authors declare no conflict of interest. Prob > chi2 0.0620 | 8,547 | 2022-01-19T00:00:00.000 | [
"Economics",
"Computer Science",
"Business"
] |
Head pose estimation & TV Context: current technology
With the arrival of low-cost high quality cameras, implicit user behaviour tracking is easier and it becomes very interesting for viewer modelling and content personalization in a TV context. In this paper, we present a comparison between three common algorithms of automatic head direction extraction for a person watching TV in a realistic context. Those algorithms compute the different rotation angles of the head (pitch, roll, yaw) in a non-invasive and continuous way based on 2D and/or 3D features acquired with low cost cameras. These results are compared with a reference based on the Qualisys motion capture commercial system which is a robust marker-based tracking system. The performances of the different algorithms are compared function of different configurations. While our results show that full implicit behaviour tracking in real-life TV setups is still a challenge, with the arrival of next generation sensors (as the new Kinect one sensor), accurate TV personalization based on implicit behaviour is close to become a very interesting option.
Introduction
The analysis of people interest is crucial for number of applications such as advertising, museums and public spaces displays, gaming technologies, TV experience, etc.Here we focus on context-aware TV experiences which can highly benefit from knowledge about viewer interest.
Viewer interest can be extracted in various ways.In computer vision there are two families of methods: one is marker-based and the other markerless.Here we focus on markerless face direction (or head pose) estimation techniques which begin to provide results for real-world applications at reasonable distances and illumination conditions in a non-invasive and transparent way.
Moreover, more and more TV and home setups (such as the XBOX) come with cameras.The acceptance of such sensors inside homes watching the viewers is higher and higher and people see less ethical issues in being observed by sensors if they have enhanced experience in return and if the data is processed locally in real-time without any recording and audio/visual data transmission.The conjunction of the arrival of new efficient low-cost sensors and of their high degree of acceptance open new potential applications in the TV domain.In this paper we thus focus on TV and present a state for the art of the main face direction estimation methods which can be used in TV setups.In addition to the gesture/voice recognition which can be now found on a lot of "smart" TVs, those setups can help enhancing viewer TV experience by modelling viewers behaviour to provide them with personalized media or enrichments in a single or multi-screen environment.
In section 2, we present the different techniques for head pose estimation and we describe markerless face direction methods which are used in this study.In section 3, we present the Qualisys system which was used to validate these techniques along with the experimental validation setup.Section 4 shows the validation results in our setup and presents a discussion on the state-of-the-art methods which allows to choose a method depending on the viewing situations (in terms of illumination changes or distance from the sensor for example) or analysis results (in terms of precision and framerate constraints).We finally conclude in section 5 on the usability of the current and near future methods.
Head pose estimation
Head pose estimation and head movements are mainly captured with physical sensors and optical analysis as we can see in the animation industry.
Physical sensors such as accelerometers, gyroscopes and magnetometers are placed on the head to compute the head rotation [1] [2].
Another way consists in marker-based optical motion capture systems that are able to capture the subtlety of the motion.In these methods, markers are placed on the head of the actor and they are tracked through multiple cameras.The markers are often coloured dots or infrared reflective fiducials and the cameras depend on the markers type.Accurate tracking requires multiple cameras and specific software to compute head pose estimation.These systems are very complex and expensive, they need calibration and precise positioning of markers (Optitrack [3], Qualisys [4]) and they remain invasive.While they cannot be used in real-life TV setups, we use marker-based methods to evaluate the markerless methods which can be low-cost, transparent (no calibration needed) and non-invasive.More precisely we use the Qualisys motion capture system.
Markerless tracking is another approach to face motion capture and a wide range of methods exists.Some markerless equipment use infrared cameras to compute tracking of characteristic points.For example, FaceLAB gives the head orientation and the position of lips, eyes and eyebrows [5].But there are also algorithms using only a webcam.We can cite FaceAPI [6] from the same company as FaceLAB.Markerless systems use classical cameras or infrared cameras to compute tracking of characteristic points.We choose several freely accessible methods in this paper for a fair comparison in a real-world TV context.
The first method that we use is based on the Microsoft Kinect SDK [7].The Kinect SDK is free, easy to use and contains multiple tools for user tracking and behaviour modelling such as face tracking and head pose estimation.These tools combine 2D and 3D information obtained with the Kinect sensor.
Secondly, a head pose estimation solution based on 2D face tracking algorithm using the free library OpenCV [8].The face tracking part of this method was developed by Jason Saragih and it is known under the name of "FaceTracker" [9].The head pose estimation part was developed separately and it is explained in a study of this method for computer uses [10].
Finally we use a fully 3D method for real time head pose estimation from depth images [11] based on a free library called PCL (Point Cloud Library) [12].
MS Kinect solution (KinectSDK)
The Kinect sensor developed for the Xbox360 is a low-cost depth and RGB camera.It contains two CMOS sensors, one for the RGB image (640 x 480 pixels at 30 fps) and another for the infrared image from which the depth map is calculated, based on the deformation of an infrared projected pattern (λ = 830nm).The depth sensor has an optimal utilisation in a range of 1.2 meter (precision better than 10 mm) to 3.5 m (precision better than 30 mm) [13] and can be perturbed by other sources of infrared light.
Microsoft provides a Face Tracking module with the SDK which works with the Kinect SDK since the version 1.5.These SDKs can be used together to "create applications that can track human faces in real time" To achieve face tracking, at least the upper part of the user's Kinect skeleton has to be tracked in order to identify the position of the head.
The Get3DPose method returns two tables of three float numbers.The first one contains the Euler rotation angles in degrees for the pitch, roll and yaw as described in Figure 1, and the second contains the head position in meters.All the values are calculated relatively to the sensor which is the origin for the coordinates.[14].All head motion can be obtained by combining these three basic movements.
The technique used to estimate the rotations and facial features tracking of the head (Figure 2) is not described by Microsoft, but the method uses the RGB image and depth map.The head position is located using 3D skeleton only on the depth map.The head pose estimation itself is mainly achieved on the RGB images.Consequently, the face tracking hardly works in bad light conditions (shadow, too much contrast, etc.).
By using the SDK, we obtain a head orientation measuring tool at 30 fps (frames per second).The experiment computer is a laptop with an Intel i7 2.40GHz, 8GB of RAM and running Windows 8.For this paper, this method will be called "KinectSDK".
Webcam solution (Facetracker)
This method is a combination of FaceTracker and a head pose estimation based of the features extraction from the face tracking part.
FaceTracker allows the identification and localization landmarks on a RGB image.These points can be assimilated to a facial mask allowing to track facial features like the edge of lips, facial contours, nose, eyes and eyebrows (Figure 3).Based on this, we apply the perspective-n-point (PNP) methods [11] to find the rotation matrix and 3D head pose estimation.
FaceTracker is a CLM-based C/C++ API for real-time generic non-rigid face alignment and tracking.The approach is an instance of the constrained local model with the subspace constrained mean-shifts algorithm as an optimization strategy [10].
The advantage is that FaceTracker does not require specific manipulation before the utilization and the algorithm makes an automatic detection of the user face based on a pre-trained model on database.FaceTracker is based on the OpenCV library [9].It is compatible with any camera.In our setup we use a 480X640 pixel webcam.The initialization of the algorithm is based on the Haar classifiers [15], thus the face tracking is optimal if the face is centred in front of the camera and straight.We can also observe significant perturbations when an object starts occluding some landmarks or when head rotation is rapidly done with a wide angle.
To find the Euler angles of the rotation of the head we use 2D Points from Facetracker, 3D points from a 3D head model and we compute the rotation matrix based on the perspective-n-point method.A Set of 7 points are taken among the 66 points from FaceTracker.These points were chosen because they are far enough and stable regardless of the expressions and movements of the face.In parallel to this, we use a 3D head model on which we extract 3D points corresponding to 2D previous points.
Once the seven 2D and 3D coordinates are set, and the camera matrix found, we can calculate the matrix of rotation and translation of 3D model by reporting the data from the face tracking (Figure 4).The pitch, roll and yaw can directly be extracted from the rotation matrix in real time about 24fps (from 19 to 28fps).The computing time per frame is about 50ms by single thread on a Linux OS with Intel Core i7 2.3GHz and 8GB of RAM.For the next steps of this analysis, this method is named "Facetracker".
Use of 3D point clouds (3DCloud)
The method used here is based on the approach developed in [16] [17].The implementation used in this case [18] is the version based on the PCL library.The main differences between the Fanelli method and the PCL implementation are the parameters of the algorithm and the training.The PCL implementation was online earlier, it is further maintained and was used for head pose estimation in TV context [11].This solution relies on the use of random forest regression applied on a 3D cloud.This cloud is obtained with a RGB-D camera, such as MS Kinect or Asus Xtion.Random forests [19] are capable of handling large training sets, of generalization and fast computing time.In our case the random forests are extended by using a regression to simultaneously detect faces but also to estimate their orientations on the depth map.
The method consists of a training stage during which we build the random forest and an on-line detection stage where the patches extracted from the current frame are classified using the trained forests.The training process is done once and it is not requested for any user.The training stage is based on the BIWI dataset [20] containing over 15000 images of 20 people (6 females and 14 males).This dataset covers a large set of head pose (+-75 degrees yaw and +-60 degrees pitch) and generalizes the detection step.A leaf of the trees composing the forest stores the ratio of face patches that arrived to it during training as well as two multi-variate Gaussian distributions voting for the location and orientation of the head.A second processing step consists in registering a generic face cloud over the region corresponding to the estimated position of the head.This refinement can greatly increase the accuracy of the head tracker but requires more computing resources.A real-time mode is possible to use but it works at around 1 fps that is why we decided to run the system off-line (Figure 5).This allows a full processing of data corresponding to a recording of 20 fps with the refinement step.The advantage of such a system is that it uses only geometric information from the 3D point cloud extracted by a RGB-D sensor, and is independent of the brightness.It can operate in the dark, which is rarely possible with face tracking systems working on colour image which are highly dependent on the illumination.This approach was chosen because it fits well in the scenario of TV interaction [11].In addition, the use of 3D data will simplify the integration of future contextual information about the scene.For the analysis, this method is named "3DCloud".
A comparison: experimental setup
In this section we will first describe how we obtained the reference values with the Qualisys system.
System description
Every result of the experiments presented in this study was compared with an accurate measurement of the head movements.This ground truth was obtained thanks to an optical motion capture system from Qualisys [4].The setup consists of eight cameras, which emit infrared light and which track the position of reflective markers placed on the head.Qualisys Track Manager Software (QTM) provides the possibility to define a rigid body and to characterize the movement of this body with six degrees of freedom (6DOF: three Cartesian coordinates for its position and three Euler angles -roll, pitch and yaw -for its orientation).
We used seven passive markers: Four markers were positioned on the TV screen and three markers were fixed to a rigid part of a hat (the three markers were placed with a distance of 72mm, 77mm and 86mm between them) (Figure 6 and Figure 7).Both TV screen and hat were defined as rigid bodies in QTM.The framerate tracking is constant at 150 fps, so it gives the values of the 6 degrees of freedom (DOF) each 0.007 seconds.
System calibration
Before each recording session, a calibration procedure was made: the subject, who wears the hat, sat in front of the screen and QTM nullified the 6DOF values for this head position.By this means, all the head movements were measured relatively to this initial starting position.To check the quality of the tracking data, QTM computes the different residuals of the 3D points compared to the rigid body definition.Over all the experiments, the average error of each head marker about 0.62mm.
Experimental setup
Qualisys produces marker-based accurate data in real-time for object tracking at about 150 frames per second.The infrared light and marker do not interfere with RGB image and with infrared pattern from the Kinect.The choice of Qualisys as reference has been done especially in order to compare markerless methods without interferences.We perform the recording of the KinectSDK and the Facetracker during the same time under normal conditions and correct face lighting.And we have chosen to perform the 3DCloud method separately from the first record because interferences are observed between 2 running Xbox360 Kinects heading in the same direction.This positioning is shown on Figure 8.The angles computed from the different methods are the Euler angles.
We made several recordings with 10 candidates.Each one does head movement sequence at 5 different distances from the screen: 1.20m, 1.50m, 2m, 2.5m and 3m.Movements performed are conventional rotations when we are facing a screen (pitch, roll, and yaw; combination of these movements; slow and fast rotation).
Six of them had light skin, others have dark skin.Three of them wear glasses and six of them wear beard or mustache.Table 1 summarizes these facial characteristics.
A preliminary test showed that the optimal position of the camera for Facetracker and KinectSDK is on top of the screen, while for 3DCloud which uses the shape of the jaw, is at the bottom.We thus decided to keep this two different positions in the following tests to maximize each method results.This does not change anything to the distances or viewing conditions and both positions could be valid in a real-life setup.However, we can notice that the top position would be more practical to avoid obstacles in people room such as objects on a table, etc.
Experiments results
After having synchronized the results obtained by all systems and the reference (temporal alignment and start offset suppression), as the sampling frequencies are different, we have interpolated the reference values to obtain similar data sampling for the different systems that we compare.To make the comparison between systems and the reference computed with Qualisys, we use two metrics: the Root Mean Square Error (RMSE) and the correlation score.
The Root Mean Square Error is given by: With the predicted values obtained by one system ypred, the values from the reference yref and the total number of values N.
Raw data visualization
The Figures below show the results with the superposition of values from the different algorithms with the reference for one random recording session.The first series of three graphics show the KinectSDK, the Facetracker and the reference for pitch, yaw and roll (Figure 9).The second series show the 3DCloud method compared with the reference (Figure 10).
Each recording session contains a head movement sequence at 5 different distances.Figure 11 shows part of a session for the pitch, roll and yaw at the distance of 1m20 for KinectSDK.The sequence is: first a yaw movement follows by a pitch and a roll movement.Next movements are combination of previous basic movements.
The holes in the green plots on the Figure 11 come from loss of tracking by the KinectSDK.The algorithm provides no point when tracking is lost.The 3Dcloud method does not provide point in case of loss of tracking as the KinectSDK, but the Facetracker based method gives results even in case of loss of tracking, based on the latest detection (Figure 12).
We observe tracking losses in large and rapid angular movements.This is often due to the fact that part of the head is less visible or the brightness is reduced, and therefore the tracking based on features points from RGB image is more difficult to do. Figure 13 below shows angular errors with KinectSDK and the Facetracker Reference is in blue and KinectSDK is green.In addition to errors on large angles, we observed that 3DCloud achieves significant errors in the roll movement (Figure 14).This is caused by the shape of 3D head model used in this method.The model is mainly flat with nose prominent and it is good for pitch and yaw but less good for the roll rotation.
Correlation and RMSE function of the distance
The correlation is a good indicator used to establish the link between a set of given values and its reference.It is interesting to analyse the correlation value obtained for each distance, with average for all candidates, to know which methods are better correlated with the reference data.
If the correlation value is equal to 1, the two signals are totally correlated.If the correlation is between 0.5 and 1, we consider a strong dependence.The 0 value shows that the two signals are independent and de -1 value correspond to the opposite of the signal.Figure 15 shows the correlation for pitch, Figure 16 On Figure 15, we observe that the pitch (up-down movement) of the KinectSDK has a good correlation (0.84) at a distance of 1m20.The Facetracker and 3DCloud are lower with values about 0.6.We observe that the facetracker stays stable with the distance between 0.5 to 0.73.But KinectSDK and 3Dcloud decrease with the distance under the correlation value of 0.5 for KinectSDK at 2m50 with 0.32, and for the 3DCloud at 2m with 0.34.For the second angle, the yaw, corresponding to a rightleft movement, we have on the Figure 16 good results for the KinectSDK with values upper than 0.9 for 1m20, 1m50 and 2m.Then de values decrease from 0.85 for 2m50 to 0.76 for 3m.The plot of the Facetracker is similar but less good with values around 0.75.3DCloud achieves the worse performance with 0.61 at the beginning and less after.
As mentioned in Section 4.1, the 3DCloud provides bad values for the roll.The KinectSDK have good correlation as for the yaw curve (0.93 to 0.7).Facetracker correlation is also good but with lower result than KinectSDK with about 0.65 (Figure 17).After watching the correlation values, it is also interesting to look at the mean error made by each system.Indeed, a method with a big correlation and low RMSE is considered very well for head pose estimation.Figure 18 shows the RMSE for pitch, Figure 19 for yaw and Figure 20 for roll.
We observe a RMSE similar for the pitch about 10 to 15 degrees for each method (Figure 18).But the KinectSDK is good at 1m20 with 5.9 degrees.The error logically grows with the ditance.In the case of roll, the RMSE is similar for Facetracker and KinectSDK (around 10 degrees with a smaller error at 3m for KinectSDK).The error of 3DCloud is arround 13 degrees (Figure 20).This error can be put in perspective because the correlation for the roll was poor.
Correlation and RMSE function of the viewer
After watching the values of the root means square error and correlation according to the different distances, it is interesting to look at the average values of these two indicators for each individual to link some observation to candidates facial features previously described in Table 1.Below, we have all three graphs (Figures 21, 22 and 23) for the test according to the correlation, followed by the three graphs of the RMSE (Figures 24, 25 and 26).In Figure 21, we observe that the correlation for each individual is about 0.6.All these values are similar.But a correlation about 0 is observed for the candidate number 5 for the 3DCloud method which means that the pitch did not work at all.In Figure 22, the KinectSDK gives a coorelation higher than 0.75 for each candidate followed by Facetracker with values higher than 0.5.3DCloud method gives the worse correlation with values between 0.1 and 0.64.For this method, candidate number 5 gives also the worse correlation for the 3DCloud.In Figure 23, we observe that the KinectSDK and the Factracker method give good values higher than 0.5, with a better coorelation for KinectSDK.Results for the 3DCloud are worse, as already seen on other graphics regarding the roll (Figure 17).24.We observe that the error on the pitch for the Facetracker method is higher for candidates 5,7,8 and 9, these candidates have darker skin (Table 1).The KinectSDK has more homogenous results.In Figure 25, the 3DCloud gives worse results than KinectSDK and Facetracker.We also observe bigger error for darker skin for the Facetracker method.Again KinectSDk seems to be less sensitive to the viewer skin color.On this roll graph (Figure 26), the error is about 10 degrees for KinectSDK and Facetracker and greater for the 3DCloud method.
Face direction methods analysis
After analyzing all data obtained by the three different methods we are able to establish the advantage and the drawbacks for each method in a TV context.
These results show that the better correlation values are obtained with the KinectSDK.The Facetracker based method also gives good result.We also have similar errors for these methods.A previous study has shown that the Facetracker method gives very good result for a distance under 1 meter [10].At this distance the KinectSDK is not able to track the head because on one hand the sensor had a blind zone up to 60cm [21] and on the other hand the field of view is too small and it is hard to detect correctly a user under a distance of 1 meter.Concerning the third method, 3DCloud, the RMSE and the correlation are worse than the two other methods and do not work at a distance of more than 2m from the screen.The estimation of roll is also of poor quality.Concerning skin color, KinectSDK seems to be the most homogenous methods while the two others (mostly FaceTracker) might work less well in case of dark skin.
For all these methods, errors are mainly due to face tracking errors and tracking losses.If we cut all sections with bad detection of the head and the characteristics point of the face, the RMSE will decline significantly and the correlation will increase.But in our context, we want to get results without post-processing corrections.We can also say that from a distance of 1.50m, an error of 10 degrees generates a gaze tracking error on the screen of 26cm ( 150sin(26°) ).This is quite acceptable for whether a person looks at a screen, or any other object.However, this error let us hope to be able to detect the screen which is attended and not precisely what region of the screen is attended.
About the benefits of these different methods, we can say that the Facetracker method requires a basic camera while the two other work with a 3D sensor.The advantage of the 3D sensor for the KinectSDK is in the robust people and head tracking.Thanks to this, KinectSDK rarely loses the head position, provided being able to detect and track the user skeleton.The 3DCloud method allows head pose estimation in all kind of illumination and also in darkness because it works only on the point cloud obtained by the 3D sensor.Facetracker and KinectSDK work in real-time while the 3DCloud requires about 1 second per frame.
The pitch and the yaw are the two important rotations in a context on TV watching because we are generally straight face at the TV, so roll is generally close to 0. In this case the Pitch describes the up-down movement.This movement is important to know if the viewer looks at the main TV screen or if he watches a second screen on his knees like a smartphone or a tablet.The yaw corresponds to a left-right movement, usable to know if the viewer watches the main TV screen or if his attention is drawn on the sides of the screen, for example to talk with somebody else.Combination of pitch and yaw indicate the direction of the face allowing to know where the user look on the TV, but given the error of 26 cm at a distance of 1m50, by using the current technique one can hardly get usable screen position information and only the attended screen can be extracted in real TV setups.
Although our tests present the viewer properly sited on a classical chair, we produced some preliminary tests in much more relaxed position on a sofa and daylight in the back (Figure 27).The first results show that while 3DCloud and FaceTracker perform poorly, the KinectSDK performs less well, but still the data extracted using the facial mask makes sense.
Head pose estimation in a TV context: a conclusion
This study aims to show the advantages and weaknesses of three markerless head pose estimation methods in a TV context.This assessment is achieved using a highly accurate marker-based MOCAP system (Qualisys).These three methods were chosen because they are easy to use, low-cost and the codes are freely available.
The study of accuracy is made on individuals with different facial characteristic.As we work in the context of TV watching for user attention detection, we worked over distances from 1.20m to 3m and we analyzed the rotation of the head along three angles: pitch, roll and yaw.
This study focuses on Facetracker a method operating on RGB image, the 2D-3D method from the KinectSDK and a full 3D method based on Point Cloud Library (3DCloud).The results proved that the most accurate method is the KinectSDK with the best correlation and the smaller mean error.These accuracy is due to the 3D user and skeleton detection which provides precisely the head position.Based on this robust head position, the estimation of angles of rotation is made easier.The second best result is obtained by the Facetracker method.The error is a bit higher and correlation slightly lower than KinectSDK due to wrong face detection.These two methods have weaknesses in face illumination variations and occlusions.Concerning the full 3D method we observed the worse results.But this method has a major advantage because it works only on the point cloud and it is insensitive to brightness changes and also works in complete darkness.We also notice that the methods are sensitive to facial characteristic for head pose estimation.Glasses and beard create minor errors.Only the color of the skin has a slight effect with the method for face tracking which was less stable.
The choice of one of these methods is therefore based on the context of use.If the illumination is bad or if it must operate in the dark, the chosen method will be 3DCloud.This method however has the disadvantage that requires more computation time while the other two methods work perfectly in real time.In the case of a classical TV setup, the user attention is better computed by the KinectSDK.If we are interested in head pose estimation with straight face in front of a computer screen (like a webcalm computer setup) the KinectSDK is better if it is possible to track the user skeleton (not too close to the camera).Otherwise the FaceTracker will be the best method for computer uses.
Our tests show that the current technologies can provide a first prototype of implicit viewer behavior in the context of a TV setup.However, reaching good extraction quality in real-life setups with natural positions and lightning are only possible by using a robust sensor as a RGB-D camera.Nevertheless, with the arrival of second generation RGB-D sensors as the Kinect one (second version of the Kinect sensor which provides better depth sensor, better RGB definition and operates in more complex illumination conditions), the implicit viewer behavior acquisition in real-life TV setups becomes possible.
The head pose estimation allows to know the user interest (or disinterest) on the media displayed on the screen which is of crucial importance in TV content personalization.In addition to one viewer head pose estimation, other features as body movements, postures or joint attention can be extracted from the skeleton to provide additional features to the TV viewer behavior analysis.Joint attention appears when two individual share the focus on the same object, in this case the object is the screen.
Figure 1 .
Figure 1.Three different degrees of freedom: pitch, roll and yaw [14].All head motion can be obtained by combining these three basic movements.
Figure 2 .
Figure 2. The Microsoft Kinect SDK provides facial features tracking and head pose estimation thought pitch yaw and roll.
Figure 3 .
Figure 3. FaceTracker detects in real-time a set of 66 points.Points 0 to 16: lower facial contours, 17 to 21 and 22 to 26: right and left eyebrows, 27 to 35: nose, 36 to 41 and 42 to 47: right and left eyes, 48 to 65: edge of lips.
Figure 4 .
Figure 4. We have the projection of the 3D head model correctly superposed on the points from the face tracking.
Figure 5 .
Figure 5. 3D rendering of the system.We can observe the 3D point cloud obtained with the depth camera and the application of the head pose estimation algorithm.When a face is detected, we retrieve a vector of the head direction.
Figure 6 .
Figure 6.Qualisys Track Manager displays tracking of two rigid bodies (TV screen in blue on the left and head in red on the right).
Figure 7 .
Figure 7. Infrared reflectors on viewer hat sitting in front of the TV.
Figure 8 .
Figure 8. Kinect for KinectSDK in green, Webcam for the facetracker in red, Kinect for 3DCloud in blue, 2D camera synchronized with Qualisys in yellow.
Figure 10 .
Figure 10.Reference: blue, 3DCloud: red.First row: pitch, second row: yaw, third row: roll.Tracking is lost for a distance greater than 2 meters for 3DCloud.
Figure 11 .
Figure 11.Head movement sequence for a distance of 1m20.Errors appear like holes in the green plots.Reference is in blue and KinectSDK is green.
Figure 12 .
Figure 12. Results given by Facetracker (red) in case of loss of tracking.
Figure 13 .
Figure 13.Errors are observed with some angular movement due to loss of tracking.The reference is blue, KinectSDK is green and Facetracker is red.
Figure 14 .
Figure 14.Errors observed on Roll: 3DCloud in red, reference in blue.
Figure 15 .
Figure 15.Mean correlation for the pitch function of the viewer distance from TV (in m).
Figure 16 .
Figure 16.Mean correlation for the yaw function of the viewer distance from TV (in m).
Figure 17 .
Figure 17.Mean correlation for the roll function of the viewer distance from TV (in m).
Figure 18 .
Figure 18.Mean RMSE (in degrees) for the pitch function of the viewer distance from TV (in m).
Figure 19 .
Figure 19.Mean RMSE (in degrees) for the yaw function of the viewer distance from TV (in m).
Figure 20 .
Figure 20.Mean RMSE (in degrees) for the roll function of the viewer distance from TV (in m).
Figure 21 .
Figure 21.Mean correlation for the pitch for each candidate.
Figure 22 .
Figure 22.Mean correlation for the yaw for each candidate.
Figure 23 .
Figure 23.Mean correlation for the pitch for each candidate.
Figure 24 .
Figure 24.Mean RMSE (in degrees) for the pitch for each candidate.
Figure 25 .
Figure 25.Mean RMSE (in degrees) for the yaw for each candidate.
Figure 26 .
Figure 26.Mean RMSE (in degrees) for the roll for each candidate.
Figure 27 .
Figure 27.Preliminary test in relaxed positions on a sofa with a tablet as second screen (KinectSDK).
Table 1 .
Facial characteristics for the 10 candidates. | 7,737.8 | 2015-06-02T00:00:00.000 | [
"Computer Science"
] |
Derivation of groundwater flow-paths based on semi-automatic extraction of lineaments from remote sensing data
In this paper we present a semi-automatic method to infer groundwater flow-paths based on the extraction of lineaments from digital elevation models. This method is especially adequate in remote and inaccessible areas where in-situ data are scarce. The combined method of linear filtering and object-based classification provides a lineament map with a high degree of accuracy. Subsequently, lineaments are differentiated into geological and morphological lineaments using auxiliary information and finally evaluated in terms of hydro-geological significance. Using the example of the western catchment of the Dead Sea (Israel/Palestine), the orientation and location of the differentiated lineaments are compared to characteristics of known structural features. We demonstrate that a strong correlation between lineaments and structural features exists. Using Euclidean distances between lineaments and wells provides an assessment criterion to evaluate the hydraulic significance of detected lineaments. Based on this analysis, we suggest that the statistical analysis of lineaments allows a delineation of flow-paths and thus significant information on groundwater movements. To validate the flow-paths we compare them to existing results of groundwater models that are based on well data.
Introduction
Information on groundwater are scarce in many parts of the world, particularly where they are essential for ensuring sustainable socio-economical and ecological growing of economies.This requires collecting data in remote or inac-Correspondence to: U. Mallast (ulf.mallast@ufz.de)cessible areas where any a priori information on groundwater is beneficial.Planning adequately and time-effectively a field campaign and/or finding suitable sampling sites is also required.Nevertheless, while difficult to obtain, a priori information need to be reproducible, objective and reliable.
After Hobbs (1904) introduced the term lineament, it has been used in different fields (e.g.petrology, geology and hydrogeology) as indicator for remote structural interpretation of the respective areas of interest.O' Leary et al. (1976) define lineaments "as a mappable, simple or composite linear feature of a surface whose parts are aligned in a rectilinear or slightly curvilinear relationship and which differ from the pattern of adjacent features and presumably reflect some sub-surface phenomenon."In terms of groundwater, it is apparent that fractures and faults have surface expressions and thus can serve as indicator for water flow-paths (Anisimova and Koronovsky, 2007;Dinger et al., 2002;Fernandes and Rudolph, 2001;Meijerink et al., 2007).To prove the applicability of lineaments as indicators for either fault systems or assumable preferential groundwater flow-paths, many authors compared both features and revealed a strong correlation in shallow and deeper aquifers (Fernandes and Rudolph, 2001;Oguchi et al., 2003;Salvi, 1995;Sander et al., 1997).
Classical approaches to lineament extraction are conducted manually.Based on experience and expert knowledge they can be evaluated as objective but irreproducible and inefficient in terms of time and labour, especially when focused on a macroscale (Costa and Starkey, 2001;Hung et al., 2005;Sander, 2007;Vaz et al., 2008).Thus, in order to obtain reproducible and objective criterions as well as efficient procedures, automated and semi-automated algorithms (e.g.segment tracing algorithm (STA), Hough Transform, PCI LINE) were developed (Karnieli et al., 1996).Although being efficient, most of the automated algorithms seem to U. Mallast et al.: Derivation of groundwater flow-paths be black boxes, where certain filtering or linking parameters need to be adjusted without visualizing intermediate results.This limits efficiency particularly for inexperienced user.
Inaccuracies emerge mostly from input data as multispectral or aerial images contain semantic linear features not originating from geological, but rather mainly from anthropogenic structures.To overcome this problem a largely manually conducted "correction-step" is introduced in many cases excluding non-geologic lineaments (Hung et al., 2005;Kocal et al., 2004).
By using digital elevation models (DEM), extracted lineaments solely rely on elevation information.Hence, only accounting for this aspect, DEMs represent a promising basis for semi-automatic lineament extraction for topographical, hydrological or geological purposes (e.g.Gloaguen et al., 2007;Jordan and Schott, 2005;Wladis, 1999).
For any purpose the ground sampling distance (GSD) of the DEMs plays an important role.Generally, the better the GSD of the DEM, the more topographical and groundwater relevant information can be extracted.High resolution DEMs from e.g.LiDAR are therefore more appropriate, but have disadvantages of being price-intensive, not always available and containing numerous anthropogenic overprinting structures.In contrast, medium resolution DEMs have the advantage of being globally and freely available (e.g.SRTM, ASTER GDEM, 2009) and most likely containing only natural topographic information.The disadvantage of a coarser resolution is potentially hindering extraction of small-scale topographical lineaments.
Despite the above-mentioned promising basis for lineament extraction, different authors point out critical aspects.Lineaments derived from elevation data could also have a non-tectonical origin (Arenas Abarca, 2006;Jordan and Schott, 2005).Reasons can be related to morphology and lithology causing identical linear topographic expressions that must be treated carefully.
The objective of this study is therefore to elaborate a robust method in order to access information on groundwater in remote and politically sensitive regions where data availability is scarce.We present a semi-automatic and transparent approach of extracting lineaments, which are evaluated in terms of hydrogeological significance and finally used to derive possible groundwater flow-paths.The method is applied exemplary along the western catchment of the Dead Sea (Israel/Palestine).In order to evaluate and validate our findings we intensively compare them to structural characteristics and groundwater modelling results of this area.
Pre-processing
The study is based on the ASTER GDEM (ERSDAC, 2009) with a GSD of 30 m, which is adequate for medium-scale studies (Abdullah et al., 2009;Hung, 2007) (Fig. 1).Since a validation report of this elevation product clearly states that it "does contain residual anomalies and artefacts (ASTER GDEM Validation Team: 19)" it is necessary to apply a smoothing filter before further analysis.Investigating different filter sizes revealed that artefacts are entirely removed by using filter sizes of 30 × 30 pixels.This filter size contains the risk of suppressing smaller structures with a pulse function having the size of less than one-half of the filter size (Pratt, 2007).However, the advantages outweigh the disadvantages as it eliminates artefacts inherited in the DEM that could not be eliminated with smaller filter sizes.
Another important decision is the type of smoothing filter.Following Arias-Castro and Donoho ( 2009) it is advisable to use Median filter since it preserves topographic structures like ridges, ramps, peaks and steps which usually have sizes above one-half of the matrix or do not represent a pulsefunction (Quackenbush, 2004;Yang and Huang, 1981).Using Mean or Gaussian filter that also eliminate artefacts and noise would not preserve edges in a comparable manner.This is especially true for areas where we would assume lowlevel noise, as it is the case in e.g.sparsely to non-forested areas.More information on smoothing filter and their effects can be found in Mather (2004).
After preparing the DEM, a set of 2nd order Laplacian linear filters in all main directions (N; NW; W; SW) introduced by Pratt ( 2007) is applied using a 5 × 5 matrix to detect edges within the DEM (Fig. 2).Appropriate filter sizes can be determined by visually analyzing the smoothness/roughness of the respective study area.Chavez and Bauer (1982) defined several categories based on the topography where edge detection on very smooth surfaces should have a filter size of 9 × 9, whereas on rough surfaces a 3 × 3 filter size should be applied.The advantage of using 2nd order filters is the accentuation of edges that mark a significant spatial change in the second derivative (Pratt, 2007;Wladis, 1999).The result enhances all edges that are defined by a series of adjacent pixels of similar high value, which represent topographic features.Concluding the linear filtering step, all four directional images are analysed on a pixel-by-pixel basis and merged into one omni-directional image, keeping only the maximum value of each pixel (Fig. 2).
Object-Based Image Analysis
The omni-directional image described above serves as basis for the object-based classification using Imagine Objective (ERDAS).Any supervised classification is defined by training pixels that determine the spectral range per class.Foody et al. (2006) suggested a minimum number of 30 training pixels per class in order to ensure an accurate classification result.We followed their recommendation and manually chose 30 training samples per class homogeneously distributed throughout the omni-directional image focussing on two classes: (1) adjacent pixels of high values representing edges and thus topographic features and (2) background pixels representing areas without edges.The classifier analyses each pixel value and compares it to the training samples creating a probability metric (range of 0 to 1) per pixel.As a result a probability image is created where high probability values represent an edge-class membership whereas low values represent the background-class.Those probability values form the basis to qualitatively and quantitatively define an adequate threshold that separates both classes during the following "threshold and clump" procedure.
Given the fact that the probability image strongly depends on the chosen training samples a uniformly applicable threshold cannot be defined.We suggest to iteratively applying different thresholds and comparing the result to the expected outcome.For the present study a threshold value of 0.8 represents the best result.During the "threshold and clump" procedure the probability image is converted into a binary image assigning values equal or above the threshold to 1 and below the threshold to 0. The subsequent clumping analyses the contiguity in all directions and groups connected pixel with values of 1 to objects (ERDAS, 2008).
Due to topographical conditions and the classification process the objects partly contain boundary irregularities e.g.gaps within an object or nearby-objects having a Euclidean distance of only pixel.In order to concretize and simplify the neighbourhood connectivity and thus to enhance further processing it is necessary to clean those irregularities.Most suitable is the mathematical morphology closing, where a dilation operation is performed followed by an erosion operation.During the dilation objects are increased by one pixel on their boundaries.As a possible effect, gaps within the object or between near objects are closed (Costa and Starkey, 2001;Pratt, 2007).The concatenated erosion removes single not connected pixel.Concluding, almost explicitly connected pixels that represent discrete topographic edges are left.
The produced objects differ in pixel number from single pixel to large aggregates.Depending on the considered scale it is noteworthy to mention that objects of certain size (number of pixels) may not be relevant and thus can be excluded.Since we focus on a medium-scale where rather large structures are relevant, we excluded objects having less than 20 pixels using a size filter.The number of 20 pixels originates from the assumption that relevant tectonical or morphological structures have lengths of above 400 m.Since most of the objects are composed of a minimum of three rows, a total length (longest object axis) of 300 m is expected, which is well below the length of relevant structures.Note that this step is not compulsory particularly if the focus were on the extraction of small-scale structures.
So far, objects are raster objects that are difficult to quantitatively compare to vector objects during further analysis.By applying the "centerline convert" algorithm the simplified raster-objects are thinned to lines representing the center or the longest axis of the object (ERDAS, 2008).During the concluding step extracted lines can be linked if certain criteria are fulfilled.Optional criteria include: maximum gaps between line-ends, minimum line lengths, minimum link length and a tolerance value.The way the parameters are chosen strongly depends on the objective of the study.If it is intended to obtain large connected structures minimum maximum gap between line-ends should have high values to be able to link lines with larger gaps.Small-scale investigation should use low values of minimum output-lengths and maximum gap between line-ends.All parameter options are subjective and can significantly influence the result, which intensifies the need to properly set every parameter based on the respective objective.For the present study the focus is on the detection of structures with lengths of above 400 m and furthermore to use a minimum of interpolation in between detected structures.The synergetic effect of both criteria represents most likely the closest possible image to reality.The defined parameter for the present study therefore included: maximum gaps between line-ends (300 m), minimum output-lengths (390 m), minimum link-length (210 m) and tolerance (30 m).The output is a map that displays linear structures with length and orientation-metrics derived from elevation data that are called lineaments from hereon, based on the terminology of O' Leary et al. (1976).
Combination with PCI Line algorithm
PCI LINE (PCI Geomatics) offers an alternative, widely applied and robust method to automatically extract lineaments.
The disadvantage using this method lies in the fact that six parameters can be changed, but only the final result can be visualized.Any intermediate-result, which would help the user to adapt only certain parameters, remains in the internal memory.To gain reliable result the user needs to have some experience.Thus, although lineaments extracted using the object-based approach generally compare favourably and step-wise reproducible to those produced by PCI Line, both methods are integrated for completeness.
The basic approach of PCI Line is similar to the objectbased approach.The only difference is the fact that the PCI Line bases on the robust Canny edge detection algorithm.This algorithm filters the DEM with a Gaussian filter, which depends on the chosen moving window size (RADI) from which the gradient is subsequently computed.The pixels that do not represent a local maximum are suppressed.The next step binarizes the image based on a threshold value GTHR, and a thinning algorithm is applied.A vectorization to extract lines ends the process.Defining parameters include LTHR for a specified minimum pixel size, FTHR for fitting lines and ATHR and DTHR for linking lines over a specified distance and angular difference (Kocal et al., 2004).Based on this knowledge and the chosen parameters (Table 1), we expect to have small differences but general agreement between the results.
By comparing the results it could clearly be seen that both methods produce similar results, yet some larger lineaments extracted by the PCI Line are not detected using the objectbased approach.Vice-versa, the PCI Line algorithm does not detect smaller lineaments that are detected by the objectbased approach.
To obtain a complete and complementary lineament map we combine both lineament results, where identical lineaments detected by both approaches are singularized during a GIS analysis.The focus is on smaller lineaments as they represent the objective topographic differences to a high degree (Fig. 3).
Differentiation of lineaments
As briefly mentioned above, lineaments directly derived from medium resolution elevation data usually rely solely on natural topographical information.This fact, together with the smoothing of the DEM in the first processing step, ensures that man-made features such as streets, canals, field boundaries etc. are not contained.
Although lineaments are derived only based on natural topographical features and are an indicator of groundwater flow-paths, their hydrogeological importance needs to be evaluated as suggested by Jordan et al. (2005) and Sander (2007).This step is essential in order to reliably decide whether lineaments have significance in terms of hydrogeology or reflect irrelevant topographical features.Sander (2007) suggests using ancillary data, such as information on topography/geology and drainage to group lineaments into classes.The reason for differentiating lays in the development of linear topographical structures (abrupt changes in topography).Those changes are not necessarily bound to tectonical processes only.Instead, erosion (aeolian/fluvial) and different resistivities of rocks can also cause similar linear expressions (Arenas Abarca, 2006;Jordan et al., 2005;Sander et al., 1997) but have less hydrogeological significance.
Since vegetation is not a factor in the present DEM-based case, we include drainage system information and additionally introduce geological maps to differentiate extracted lineaments into geological lineaments (true structural origin) and morphological lineaments (mainly morphological origin with possible structural background).We furthermore investigate the spatial relationship "distance to wells" to create an assessment criterion which quantitatively enables the user to even analyse lineament hydraulic significance.Based on all background information general groundwater flow-paths are derived.
Drainage system
Several software packages offer the possibility to calculate drainage systems.Important is the DEM as input parameter.In order to have a comparable basis it should be the same DEM used for lineament extraction.In the present study the eight-direction (D8) flow model incorporated in ArcMap 9.3 is applied.The output contains steadily changing orientations of drainage lines over pixel distances (30 m).This fact reduces the possibility to objectively compare the drainage system to the extracted lineaments in terms of orientation.To overcome this aspect we suggest generalizing the drainage vector lines over a span of 180 m.The resulting vector layer contains rather straight lines but keeps the general orientation.The lines are composed of several segments created every time the drainage system changes the direction.Since the segments are of interest rather than the total line, it is necessary to first split the drainage lines at their nodes.Using this sequence ensures that a correct orientation and length per segment is assigned.
After preparing the drainage vector it is possible to automatically compare it to the extracted lineaments quantitatively based on three parameters: Euclidean distance between the two, length and orientation.By defining each parameter depending on the objective of the study the similarity between both can be automatically analysed and the influence from the drainage system onto the lineaments can be inferred.In case lineaments display similar orientations for a certain portion of their total length and are located close to the created drainage vector, lineaments are probably induced by the drainages system and have to be evaluated as morphological lineaments.
For the present study we use the "Near" function of Ar-cGIS 9.3 to calculate the Euclidean distance between both features.A 500 m distance is chosen where we assume an influence of drainage systems to extracted lineaments.This is strongly dependent on the study area.In the present case the study area displays a high relief-energy with deep V-shaped valleys and distances between valley depth-line and valley shoulders below 500 m (see Sect. 4).This number should be raised when rather flat areas are investigated or lowered when steep mountainous regions are of interest.Furthermore, to account for either generalization errors or slightly atypical shaped valleys we define (2) an angle difference of ±20 • for at least 20 % of the line segment to account for either generalization errors or slightly atypical shaped valleys.Both numbers are general numbers and most likely independent of scale or study area.
Geological map
Similar to the drainage system it is necessary to infer lithological boundaries contained in geological maps and compare them to lineaments.If available, digital geological maps have to be treated favourably as they guarantee an objective comparison and enable an identical analysis sequence as described in the section before.Analogous geological maps need to be scanned and co-referenced to the DEM before further analysis.Since digitizing is labour-and time-intensive, particularly for larger study areas, we suggest performing the comparison based on lineaments by creating a buffer around each.If a lithological boundary displays similar orientation and is located within the created buffer, most likely this lineament reflects the boundary and therefore has to be evaluated as morphological lineament.
For the present study we create a buffer of 300 m to each side of the lineament.This buffer dimension generally reflects the effect of lineaments induced by lithological boundaries independent of scale and landscape.
Hydraulic significance and derivation of flow-paths
After comparing lineaments to structural features and subsequently evaluating their geological significance, the hydraulic significance of lineaments must be assessed in order to gain information for the possibility of deriving groundwater flow-paths.Sander et al. (1997), Magowe and Carr (1999) and Henriksen (2006) showed that lineaments close to wells correlate with higher well yields.The distance between lineaments and wells, where a significant yield could be observed, varied between 250 m and 2000 m.Since all mentioned studies explicitly explored well yields, it can be assumed that lineaments within similar distances to waterbearing wells also reflect probable hydraulic flow conditions.Thus, by calculating the Euclidean distance from each lineament to the nearest well location a simple quantitative metric is created to directly evaluate whether lineaments have a hydraulic significance or are rather randomly distributed.
Based on the evaluation of the hydraulic significance it is possible to infer general groundwater flow-paths under the co-consideration of topography and outlets (springs).Topography provides the general flow as water always tends towards the lowest point.In contrast, lineaments present a rather defined local flow direction characteristic whereas springs represent the final outlet of the system.Superimposing the lineaments on the 3-D-topography enables the user to derive most probable groundwater flow-paths by following the topography but preferring general orientation of lineaments.If done manually the result can be drawn with vector lines in GIS on the plotted 3-D surface.Nevertheless, an automation of the derivation is preferable as it is absolute objective.
For the present study we assumed all water wells (Fig. 4a) to produce water because only coordinates, name and well type were available.The same assumption applies for oil and gas wells based on Salhov et al. (1982) who describe water fluxes encountered during drilling.The data basis stems from Laronne Ben-Itzhak and Gvirtzman (2005), Mekorot Co. Ltd. (2007), Tel Aviv University (2007) for water wells and Fleischer and Varshavsky (2002), for exploration wells.Additionally, we define a second assessment criterion by introducing a parallel comparison to mapped faults of similar scale (structural map 1:200 000).This is not obligatory for the derivation of flow-paths, but provides extra information on the degree of applicability of different sources for flow path derivation.
Study area
The study area represents the western subterranean catchment of the Dead Sea.The area is around 4160 km 2 located between 34.73 • E and 35.51 • E and 30.83 • N and 32.05 • N (Lat/Lon WGS 84).It is comprised of major parts of the Judean Mountains in the west, the Negev desert in the south and the Dead Sea in the east (Fig. 4a) Within the area a topographical west-east altitude gradient of around 1000 m exists between the Judean Mountains and the brim located next to the Dead Sea and a further steep gradient of around 400 m from the brim to the current Dead Sea level of −425 m (WGS84-EGM96), (own GPS measurements 03/2010).
The entire region is faulted and folded and since Eocene age finally shaped and modified by the strike slip movement of the Dead Sea transform (Garfunkel and Ben-Avraham, 2001), which is part of the Syrian-East African Rift system.
Geology
The geological formations of the western mountain range dip fairly eastwards.Lower Cretaceous Kurnub Group sandstones represent the base of the western escarpment and crop out in the southern study area.Above that, the 800-850 m thick hardly erodible limestones and dolomites of the Judea Group (Cenoman-Turon) are the predominant formation in the area (Fig. 4b) and constitutes the important Lower (L-JGA) and Upper (U-JGA) Aquifers (Guttman, 2000).At the top of the range, the soft Mt.Scopus Group (Senonian-Paleocene) with marl, chalk and clay occurs and reaches thickness of 100 m to 400 m.These variations are controlled by anti-and synclinal structures of the compressional Syrian Arc phase (Flexer and Honigstein, 1984).The entire range is deeply cut by wadis, draining towards the Dead Sea.Products of those Plio-Holocene erosional processes are found within the wadis, their fans and along the Dead Sea coast being of gravel, sand, clay silt, marl and gypsum (Dead Sea Group).
On the western border of the study area, the Hebron anticline, which represents a main structural element, stretches with a SW-NE orientation (Fig. 4b), thus representing a main structural feature.A series of secondary mostly parallel asymmetric anticlines and synclines were developed among smaller fault systems observable throughout the entire area.The principal fault is represented by the western fault of the Dead Sea rhomb-shaped pull-apart basin.The dominating trends of all faults can be categorized into the following groups: Faults of the Group A have an orientation of around 90 • (E-W) and are mainly located in the north-western part of the study area.They may result from the Syrian-Arc deformation with maximum compressive stress trending NNW during Creataceous to Eocene ages (Eyal and Reches, 1983).
Group B is built by faults showing a main NNW-SSE orientation (330 • to 360 • ) that are found from the north-western corner till the Dead Sea region, where their abundance increases significantly (Fig. 4b).These structures may be related to the Dead Sea strike-slip Gvirtzman (2005) transform, activated in the Miocene age (Garfunkel andBen-Avraham, 1996, 2001).The major fault within that group is the N-S oriented western fault of the Dead Sea transform.
Group C faults, trending 310 • to 315 • (NW-SE) are the result of the compressional phase of the Syrian-Arc deformation during the Turonian age.Faults of that type are also described by Gilat (2005) as compressional features that follow Turonian-Senonian faults.The distribution within the study area is mainly in the westerly central part and in the north of the study area extending the western fault of the Dead Sea towards NW.
Hydrogeology
The major groundwater bearing strata are the Kurnub sandstone and the overlaying limy Judea Group with the two distinct aquifers.Both are hydraulically separated by the Bet Meir Formation, forming an aquiclude composed of clay, marl and chalk with varying thickness (Laronne Ben-Itzhak and Gvirtzman, 2005).Hence, and as a result of the eastwards inclination of the rocks, the L-JGA becomes confined towards the Dead Sea while the U-JGA is entirely phreatic.
Where the separating aquicludes thin out locally and deep reaching faults exist, it is assumed that groundwater is able to percolate locally into adjoining aquifers.Consequently, groundwater-flow is defined either structurally or lithologically, meaning heterogeneities in transmissivity forces groundwater to bypass zones of low transmissivity (Guttman, 2000).
Precipitation amounts vary between 600 mm a −1 in the highest elevated parts of the Judea Mountain range and around 100 mm a −1 along the Dead Sea coast (Siebert, personal communication, 2010).However, the precipitation gradient does not decline homogeneously.From the Judean Mountains it slowly decreases towards the brim west of the Dead Sea.Between the brim and the graben the decrease is dramatic as a result of the important change in elevation.The highest amount of precipitation, which almost solely occurs between October and April, falls on outcroppings of the Judea Group aquifers.This aspect makes the west of the study area to the major region of recharge (Guttman, 2000).
The highest natural discharge of the aquifers in form of springs can be sorted by their location (Table 2).Approaching the Dead Sea, the spring discharges rise from low values at hinterland springs (Quilt, Jericho) to high values at the northern springs (Feshka, Kane and Samar) along the DS to decrease again at the southern springs (Kedem, Ein Gedi).
Based on isotopic analysis it can be concluded that spring water is fed by the U-JGA and derived from precipitation in the recharge area (Siebert, personal communication, 2010).Only for the Kedem and Mazor springs a mixture of both Judea Group Aquifers is assumed (Guttman, 2000).
Results
In total, 751 lineaments with lengths varying between 376 m and 9647 m were detected (Fig. 5).A lineament density (5 km search radius) exhibits a higher lineament density in the northern and north-western parts of the study area and along the western fault of the Dead Sea.It is apparent that lineaments within these high-density areas have smaller lengths compared to those in lower density regions.
The frequency-rose diagram (Fig. 5) illustrates the strike directions of all detected lineaments and additionally the differentiation of geological and morphological lineaments.The diagram of all lineaments (Lineaments total) displays the fact that two main strike directions are prominent.Most lineaments are oriented around 0-5 • and 30-40 • , while a smaller amount strikes between both main trending directions.Equally noticeable is a similar frequency distribution of lineaments with an orientation between 290 • to 340 • and 45 • to 60 • .Apparently, only few lineaments have an orientation of 90 • or 270 • .
Partitioning the total lineaments in the detected geological and morphological lineaments reveals that geological lineaments match the main strike directions of the total lineaments almost explicitly.Small numbers represent orientations around 315 • to 350 • but none are around 90 • or 270 • .
For the morphological lineaments three main strike directions are detected: (a) 295-330 • (b) 0-5 • (c) 35-65 • .Most of the lineaments belonging to group b are assigned to be of morphological origin due to their lithological-boundary characteristic (Fig. 6).A smaller number of lithologicalboundary induced lineaments display strike directions of 295-300 • (group a) and 50-55 • (group c).Similar strike directions as group a and group c with an even higher frequency can be observed at the fluvial induced lineaments.Equally striking is the fact that less northern and only few western orientated lineaments are represented.
Considering the distance from lineaments to wells within the 500 m and 1000 m classes shows that the number of morphological lineaments (n = 26/n = 19) is above the number of geological lineaments (n = 15/n = 17) (Table 3).The smallest distance of both lineament types is comparable within 1 m.Within the 1500 m and the 3500 m class, the number of both types is steadily declining, reaching maximum distances of 3110 m and 2658 m respectively.
Therefore, a clear differentiation of lineament types based on the distance to wells cannot be established.Both types behave equally in distribution and very similarly in total number per class.Prior assumptions of morphological lineaments not having the same significance as geological lineaments concerning groundwater cannot be confirmed.Morphological lineaments generally exhibit better numbers with respect to minimum and mean distance as well as to the standard deviation.Moreover, based on the results, we calculated the number of geological and morphological lineaments, to-gether revealing that almost 75 % of all lineaments are within a distance of 1000 m from a known well with a mean value of 879 m.
In order to create a further assessment criterion, we calculated the distances to wells based on the mapped faults from the structural map.The general distribution reveals that more lineaments (10 to 21) are contained within the closer distance classes (≤2000 m) whereas in greater distance classes (>2000 m) the number remains almost constant with 4 to 8 wells per class.The absolute distances of mapped faults to wells are between 13 m and 6767 m.Although these numbers are similar to the previous ones from the "lineaments-to-wells-distances", it is diminished by taking the mean (2140 m) and the standard deviation (1868 m) values into account, which differ strongly.
In summary, the detected lineaments appear to be a better indicator compared to mapped faults in terms of distance to wells.This is most strongly supported by the normal distribution of lineaments enclosing ca.75 % of all lineaments to be within 1000 m distance to wells.To achieve the same percentage for mapped faults, the related distance would be within the 3000 m class.It is also pronounced by comparing mean and standard deviation values which are three orders of magnitude smaller for the distance to wells from lineaments than from mapped faults.
Method
The proposed semi-automatic method of deriving groundwater flow-paths based on extracted lineaments and auxiliary information contains several advantages.(1) The medium-resolution and freely available DEM minimizes efforts to clean non-natural features and requires no financial expanses.(2) Median-filtering outperforms other smoothing filter (e.g.Gaussian, Mean) in terms of edge-preserving.(3) 2nd order Laplace linear-filtering in all four directions even accentuates edges improving subsequent extraction.(4) Applying the object-based image analysis to trace and extract lineaments guarantees a high degree of control concerning the edge detection (5) Auxiliary information greatly assist in evaluating lineaments and provide the basis to derive groundwater flow-paths.
However, some points are critical.Before the supervised classification the location of training samples is user dependent and thus requires certain expert knowledge.To maintain objectivity it would be necessary to analyse whether a general threshold can be applied after the linear-filtering step and directly use the binarized image as input where boundary irregularities are cleaned.Another subjective-related step concerns the line-link parameters.Although parameter settings are study-specific defined, they depend on a priori knowledge on geology/hydrogeology.If set differently, resulting lineaments will as well be different varying in length and number.It is therefore of great importance to define those parameters depending on investigation-scale and -objective.
The derivation of flow-paths is based on well information.The better and reliable the information is, the better the evaluation of lineaments in terms of hydraulic significance.For the presented case study only well name, type and location are available implying an unknown error that could be omitted having more information.
Site specific
The detected northern (0-5 • ) as well as the north-eastern (25-35 • ) geological lineament orientations can be associated to the Syrian Arc system formed during the Turonian age (Flexer et al., 1989).Since the structural map only includes similar north-oriented fault directions, it must be assumed that the detected lineaments clearly describe the NE trending synclines, anticlines and monoclines structures with vertical displacements of up to 0.3 km (Gilat, 2005).We furthermore suppose that those lineament orientations equally represent faults that trend parallel to the hinge lines of the Syn-/Anticlines as shown by Flexer et al. (1989) for the Hebron anticline.
The cluster of morphological and particularly fluvial induced lineament directions around 45 • (±15 • ) suggests that the drainage system follows the NE trending syncline/anticline structures.
The second cluster around 315 • (±30 • ) that also matches structural map fault orientations possibly originates from small NW trending faults.Those structures branch from the western Dead Sea fault partly following older Turonian-Senonian faults (Ginat et al., 1998).Those assumptions are supported by studies by Freund et al. (1968); Kafri and Heimann (1994) and Matmon et al. (1999), who proved the adjustment of the drainage system to morpho-tectonic features in the study area.The northern-oriented (0-5 • ) morphological lineaments explicitly stem from lithological edges and align along the western fault of the Dead Sea, most likely relating them to the Dead Sea stress field.
The 90 • lineament orientation was largely absent, possibly due to the fact that the Syrian Arc stress field related strike directions have been superimposed and/or displaced by younger movements (Gilat, 2005) evoking smaller structures that were either already included in the explanation or could not be detected.The remarkably well-matching orientations of lineaments and faults suggest a strong correlation among both.Based on the fact that faults have hydrogeologic significance by either hindering (compression/mineralized) or improving groundwater flow (extensional), this implicitly also accounts for lineaments within the study area.
Taking the distance analysis into consideration, which indicates that almost 75 % of all detected lineaments are within 1000 m to the nearest well, we propose that the extracted lineaments do strongly have a hydrogeological significance.This also corresponds to findings of Sander et al. (1997), Magowe and Carr (1999) and Henriksen (2006).Therefore, this aspect enables us to derive general groundwater flow-paths.
Hence, taking the lineament importance for groundwater flow into account, together with elevation information and known spring areas along the Dead Sea (outlets), we suggest possible groundwater flow-paths (Fig. 7) The flow-paths derived clearly exhibit a general E-W or SW-NE flow from the recharge areas in the western and the south-western part of the study area towards four main spring areas at the Dead Sea.The flow-paths in the northern part derive from the Ramallah anticline.They turn southward as they reach the western fault of the Dead Sea basin oriented NW in this region.There are most likely also flow-paths coming from the northern part of the Hebron anticline that have an E-NE trend.Both flow-paths merge and feed the Ein Feshka spring area.
Those flow-paths, together with the catchment size and the precipitation amount in the recharge area, lead to the high discharge for the Ein Feshka spring.The southwardlocated spring areas Kane, Samar and Darga exhibit similar (Guttman, 2000) and (Laronne Ben-Itzhak and Gvirtzman, 2005) and the groundwater level contour map are added (Groundwater level extrapolated refers to the insecurity of only three water wells in the southern area taken from Arad, 1966;Vengosh et al., 2007) (Projection: UTM WGS 84 Zone 36 • N).
characteristics.These areas are mainly fed by groundwater that follows ESE oriented flow-paths and partly NE trending groundwater, which most likely flows only towards the Darga springs.The catchment is smaller, thus reducing the potential amount of produced recharge which de-facto results in less discharge.
The groundwater that feeds the Ein Gedi spring area can be divided in two main flow-paths.The NE-trending flowpath is the longest of all flow-paths in the area but presumably bears only a small amount of water as the precipitation in the south-western area does not exceed an average of 220 mm a −1 and is associated with high evaporation (Diskin, 1970).ESE-trending flow-paths with recharge areas in the Judean-Mountains with annually 600 mm of precipitation are rather significant.Taking those facts into account, with respect to the number of lineaments and in particular the overall trend towards Ein Gedi spring area, suggests a higher amount of flow and discharge than has been reported so far.
Based on the lineament map, the spring area around Mineral Beach and Kedem probably receives a very small amount of groundwater as only three lineaments are nearby and oriented towards that area.This could be due to the fact that these springs are fed by the deeper L-JGA, resulting in thermal springs with higher mineral concentrations (Guttman, 2000;Gvirtzman et al., 1997).
Since structures related to the deeper L-JGA could not be found, this appears to be the boundary condition for the lineament analysis.Thus, based on the lineament analysis, it is assumed that only flow-paths of the U-JGA can be derived from the lineament analysis and that only the structural developments of according Turonian to Pliocene ages are reflected.This assumption is partly underlined by Gilat (2005) who describes mega-lineaments visible on satellite images as reflections of Late Miocene-Pliocene structural developments.
What remains unclear is the hydraulic potential of single lineaments.Anisimova and Koronovsky (2007) describe lineaments as permeable for fluids in general.However, in the central and northern part of the study area, Ilani et al. (1988) pointed out that in carbonate rocks of the Cretaceous Judea Group along structural lineaments with E-W orientation and NE-trending monoclines iron mineralization and dolomitization occurred, thus inhibiting fluid flow.Similar processes could also characterize other lineaments with different orientations in the study area.Another aspect incorporates reverse faults mentioned before (Eyal and Reches, 1983;Flexer et al., 1989), although different hydraulic rock characteristics could possibly counteract on each other, hindering fluid flow as well.These aspects should be further investigated, if it is intended to prove hydraulic significance of individual lineaments.
Taking into account that limitations exist, the produced flow-path map are in good agreement with existing groundwater flow models (Guttman, 2000;Laronne Ben-Itzhak and Gvirtzman, 2005).Additionally, an interpolated contour map of the groundwater level based on well data (Fig. 7) underlines the derived flow-paths in the general flow and in complex sub-regions with varying flow directions in the northwest and west.Thus, based on these correlations it can be inferred that the flow-path map is valid.
Conclusions
In terms of efficient large-scale groundwater mapping the usage of remote sensing data, specifically of DEMs is apparent and can successfully be used to detect lineaments as indicators for hydraulic flow conditions.
In addition to efficiency, objectivity and transparency are eminent for reproduction.Applying the proposed semiautomatic approach using the ASTER GDEM and a combined linear filtering and object based classification approach fulfils all aspects.The linear filtering step exclusively relies on matrix based algorithms, whereas the object-based classification with Imagine Objective (ERDAS) needs only small adjustments during the process.Combining the lineament result with the similar automatic extraction algorithm LINE produces a lineament map that can be evaluated as objective and efficient.
Classifying and interpreting the result using ancillary information as suggested by Sander (2007) helps to understand the hydrogeological and hydraulic significance of each lineament and enables the authors to derive groundwater flowpaths.Based on this analysis, we concluded that: -Detected lineaments within the study area have strong correlation with hydrogeologically relevant structural features since lineament orientations match remarkably well either Syrian Arc or Dead Sea Stress field related structural features that mainly have hydro-geological significance.
-It was shown that 75% of all lineaments, independent of lineament type, are located within a Euclidean distance of 1000 m to the nearest well.This implies that a high number of lineaments possess groundwater significance.
-Taking both points into account, it was suggested that together with an elevation map and the locations of spring areas the lineament map is appropriate to derive possible groundwater flow-paths.Compared results obtained from groundwater modelling of Guttman (2000) and Gvirtzman et al. (1997) which are based on water level data of wells reveal a good agreement.In return, this suggests that the delineated flow-paths from the lineament map are valid.
These results also show the applicability of the semiautomatic extraction method presented to objectively delineate lineaments and subsequently derive groundwater flowpaths.Moreover, it can preferably be applied prior to field campaigns in order to choose suitable sampling sites.Another application option is in remote/critical regions with limited geological/structural map availability and well information to improve the knowledge on groundwater.The applied medium resolution DEM potentially neglects smaller features that would be contained in DEMs with higher resolution.However, main advantages are the global and free availability and the fact that less processing is needed since man-made features are not contained.Using auxiliary information helps to gain insights in the connection between lineaments and structural features and thus in hydrogeological relevance of lineaments.Nevertheless, even without auxiliary data on geology or wells the developed method can be transferred to other study sites as only the medium resolution DEM is sufficient to derive general groundwater flow-paths.In those cases, the result would have less informative value but still provide useful knowledge on groundwater.
Fig. 3 .
Fig. 3. Left figure shows a subset of the comparison of lineaments resulting from object-based classification (white lines) to lineaments obtained from PCI LINE algorithm (black lines) -right figure shows the same subset after singularization of identical results with respect to the object based result.
Fig. 4 .
Fig. 4. Describes (a) topographical features of the study area ("Water wells assumed" refers to three water wells in the southern area taken from(Arad, 1966;Vengosh et al., 2007) and gives (b) a lithological and structural overview (Projection: UTM WGS 84 Zone 36 • N).
Fig. 5 .
Fig. 5. Lineament map with rose diagrams for all lineaments and the differentiation of geological and morphological lineaments -in the background a calculated lineament density map with a 5km radius is displayed (Projection: UTM WGS 84 Zone 36 • N).
Fig. 6 .
Fig. 6.Frequency-rose diagrams differentiating morphological lineaments to fluvial induced lineaments and lithological border induced lineaments (some lineaments are double-counted since they correspond to both characteristics).
Fig. 7 .
Fig. 7. Derived groundwater flow-paths based on lineament map, altitude and Dead Sea spring locations -for comparison the modelling results of(Guttman, 2000) and (Laronne Ben-Itzhak andGvirtzman, 2005) and the groundwater level contour map are added (Groundwater level extrapolated refers to the insecurity of only three water wells in the southern area taken fromArad, 1966;Vengosh et al., 2007) (Projection: UTM WGS 84 Zone 36 • N).
Table 1 .
Chosen parameter for LINE algorithm.
Table 2 .
Annual spring discharge of know spring locations within the study area (DS: Dead Sea shore).
Table 3 .
Comparison of Euclidean distances from wells towards the nearest feature (differentiated lineaments and faults contained in the structural map 1:200 000). | 9,982 | 2011-08-25T00:00:00.000 | [
"Environmental Science",
"Mathematics"
] |
Zika Virus Prediction Using AI-Driven Technology and Hybrid Optimization Algorithm in Healthcare
The Zika virus presents an extraordinary public health hazard after spreading from Brazil to the Americas. In the absence of credible forecasts of the outbreak's geographic scope and infection frequency, international public health agencies were unable to plan and allocate surveillance resources efficiently. An RNA test will be done on the subjects if they are found to be infected with Zika virus. By training the specified characteristics, the suggested Hybrid Optimization Algorithm such as multilayer perceptron with probabilistic optimization strategy gives forth a greater accuracy rate. The MATLAB program incorporates numerous machine learning algorithms and artificial intelligence methodologies. It reduces forecast time while retaining excellent accuracy. The projected classes are encrypted and sent to patients. The Advanced Encryption Standard (AES) and TRIPLE Data Encryption Standard (TEDS) are combined to make this possible (DES). The experimental outcomes improve the accuracy of patient results communication. Cryptosystem processing acquires minimal timing of 0.15 s with 91.25 percent accuracy.
Introduction
An infection's prelude and rapid spread to other regions of the globe attract the attention of the international community. Climate change is also a significant contributor to the rapid spread of illness. e species Aedes aegypti, which is widespread in urban areas, is the primary source of the illness, according to the CDC. Zika virus is a mosquitoborne illness that is comparable to dengue fever, west Nile virus, Chikungunya virus, and yellow fever, all of which are spread by mosquitoes. It is believed that mosquito bites are causing the spread of these illnesses and that Aedes mosquitoes are the primary vectors of these infections. Humans have had a significant challenge as a result of those, and this has mostly resulted in the transience of many tropical and subtropical nations [1]. It is an intrauterine illness caused by the Zika virus; the symptoms of Zika virus are often moderate fever, joint discomfort, and rashes, which are similar to those of dengue and Chikungunya virus. If an infected mosquito bites a pregnant woman and she becomes sick, the virus may spread across the placenta and affect the fetus. It is believed that pregnant women who are infected with the Zika virus will have neurological problems such as microcephaly and may give birth prematurely to their children. Even males who are infected with the 20 viruses may spread the virus to their sexual partners via anal, oral, or vaginal intercourse. e increasing population in urban areas has also increased the demand for water portability, which has resulted in people storing water in their homes, causing the Aedes aegypti mosquitoes to breed quickly in that climatic condition [2]. e increasing population in urban areas has also increased the demand for water portability, which has resulted in people storing water in their homes. It is believed that mosquitoes that have formed in water that we use in the home are the primary cause of the ailments. In India, the climatic conditions are ideal for mosquito breeding and development.
Preventive methods for limiting the breading of mosquitoes in the infected region are part of the overall awareness campaign in the affected area: keeping the home clean, making sure there is no standing water in or around the house, and making sure any storage water is properly sealed with a lid if there is any. If any of them are discovered to be infected and exhibiting any of the symptoms, they must notify the appropriate healthcare facilities promptly. e individuals in question will be supplied with the medicine they need.
Cryptography is a critical milestone in the development of network security. e term cryptography refers to anything that is concealed or secret. Cryptography is the aim of secret writing with the goal of safeguarding information. e science of breaching cryptosystems, also known as cryptanalysis and cryptology, is the technique of breaking them down by trial and error. Cryptology plays a significant role in the protection of data in computer networks. Writing and solving data using codes are what cryptology is all about, and it includes both cryptography and cryptanalysis.
Cryptography may be divided into three types: asymmetric, symmetric, and hashing. e following sections provide explanations of cryptography [3]. Asymmetric key cryptography is a kind of cryptography in which the key is not shared between two parties. Asymmetric cryptography, often known as public key cryptography, is distinguished by the fact that it uses both a public and a private key. While encrypting the data, the sender sends both the public key and the secret key used to decrypt the data. Only the person who has the secret key may decode the data throughout the decryption process. As a result, it is extremely safe when compared to the symmetric type, yet the time is extremely sluggish. Figure 1 demonstrates the encryption process.
Having control over the data is regarded to constitute secure at the location where it is kept. If a user wants to make use of the advantages of cloud computing, he or she must first choose an appropriate network and then make use of the dispersed resources and arrangements made possible by cloud computing. When it comes to data transfer, the protection of the information is critical [4]. As a result, data that is really necessary should be secured, and obtaining privacy access rights is a major difficulty. Figure 2 represents the data security issue.
Secureness in Cloud
Database. Generally speaking, confidentiality refers to data or information that is not made available to anybody other than its owner, whether that is a person, a device, or a method. CSP is aware of the location or position of data stored by the user and knows where it is stored.
ere will be certain data that will provide some access assurance to a restricted number of people where the permission has been applied to particular data to secure data confidentiality in this case. Because of the sensitivity of the data that has been gathered in the cloud, some illegal access might result in risks. Assuredness should be provided to customers in the form of a privacy policy to ensure that data is handled appropriately and that mechanisms are in place to guarantee data security in the cloud. It is the responsibility of CSP to implement a variety of measures to ensure data integrity. e CSP informs the client about the kind of data that is being stored in the cloud [5]. As a result, it is necessary for CSP to maintain records about the data, such as the type of data, whether public or private, when and where it is required, the type of Virtual Memory and accumulated, the period of time when it is accessed, and so on, in order to protect the data from unauthorized access and to maintain data confidentiality.
Data Allocation in Server.
We can access our data from any place and at any time thanks to the cloud, which is a superior service provider in our opinion. A risk is related with the location of data collection, which is associated with a higher degree of risk [6] as compared to other sites. e user should be aware of the location of sensitive data storage while an organization is using it, and he or she should have the ability to seek information about the location. At order to avoid confusion, the CSP and client should be in a certain place where the server location and data storage location are both known to the one in control of the scenario. When moving data from one location to another, such as via emails or uploaded photographs to Facebook, there are a number of things to take into account.
Recovery of Input Data.
While transferring data to the cloud, the CSP offers flexibility and ensures that the storage system has accurate information about the data sent. At the very least, a RAID configuration should be maintained in the storage system. e majority of CSPs will have several copies spread among a large number of independent or free servers. e cloud service provider provides the back-end application service, and in the event that a problem happens inside the company, the cloud service provider retrieves the data.
Secureness in Data Using Encryption Standards.
According to a study report from 2100 Indian Business Technology Professionals, data auditing and confidentiality are the most significant obstacles in organizations using cloud technology [7]. e results of a survey conducted by Salt March Intelligence reveal the amount of sensitivity of business professionals to various technologies, including the difficulties they have in embracing cloud apps, infrastructures, customers, and storage. In today's fast-paced company climate, agility, cost savings, and flexibility are all required.
e cloud environment provides all of these benefits. Zika virus illness prediction and data security are critical for both protecting patients from sickness and safeguarding their personal information. Nowadays, everything is being moved to the cloud, and healthcare is one of the industries that is being moved to the cloud. As a result, more precision in prediction is required; yet, only a limited amount of study has been done in order to anticipate the ZIKA virus. e amount of time it takes to perform a security request may vary depending on the configuration of the system or program being utilized. It is necessary to design a gap prediction and security algorithm in order to triumph over the research and to meet the requirements. In order to improve performance in forecasting the Zika virus with high levels of accuracy, the primary goal of the study activity is to develop new methods of prediction. Because there is no more data available, synthetic data is being generated for the prediction of the Zika virus. e data is divided into two groups based on the results of the machine learning classifier: infected and uninfected. For the prediction of the Zika virus, a variety of classifiers are tried; eventually, the MLP classifier outperforms the others in terms of accuracy. e Journal of Healthcare Engineering 3 use of encryption methods such as symmetric and asymmetric cryptography is investigated for the purpose of data security. For the purpose of encrypting the data, many methods are used. Finally, the suggested Hybrid Encryption technique is used for the purpose of safeguarding the data in the shortest amount of time possible, resulting in improved performance. e paper has the following structure: Section 2 consists of literature survey, Section 3 consists of methodology and outcome of proposed algorithm, and Section 5 consists of conclusions with future work.
Literature Survey
Data integrity is discussed in detail by Wang et al. (2015) [8], who propose that when a service provider provides many services to cloud users and those users share data in a group, the originality of the data should be maintained, and this is accomplished through public auditing, in which case the signature of the shared data blocks must match the signature of the service provider. Different blocks are signed by different users at multiple times when different users modify the same file. For security reasons, any user may have their access terminated before they have the opportunity to be resigned by an existing user. It is necessary to invoke the idea of public auditing in conjunction with an efficient user revocation procedure; proxy resigns are carried out by the cloud on behalf of the current user at the moment of revocation. Existing users will not be required to download and resign their licenses. Chen et al. (2012) [9] have claimed that cloud computing offers several benefits, such as the ability to host applications and data on the client's behalf and that clients and users are increasingly using hybrid or public clouds. e scale of their market prevents certain huge organizations and corporations from shifting data to the cloud for some mission-critical applications, and this is a problem. Users' recommendations and perspectives on security and privacy protection concerns are taken into consideration, and appropriate measures are taken. e authors provide a comprehensive analysis of data security and protection in relation to privacy regulations across the whole life cycle of data that is discussed in more detail. Some of the existing solutions and research efforts connected to privacy and security concerns are presented, as well as some of the challenges that remain. Lopez-Barbosa et al. (2016) [10] made a proposal about the real-time utilization of Internet of ings devices, such as smart phones. Using sensor devices and cloud computing, Quwaider and Jararweh (2016) [11] have presented a method for increasing public health-related awareness in the community. It is necessary to utilize the map reduction idea in order to detect anomalies in the information supplied by the sensors in real time. Mamun and colleagues (2017) [12] detailed how a notion of delivering speech signals to a doctor using cloud technology may be implemented. e doctor then diagnoses the patient and keeps track of them with the use of mobile phones and the cloud, which is the suggested methodology's manner of operation. With the use of smart phone technology, Zhang et al.
[2015] [13] suggested a method of monitoring and managing the epidemic. Based on their network contacts, the whole population is divided into several clusters, with outbreak methodologies being employed and deployed at the cluster level in this case. Sareen, Sood, and Gupta (2016) [14] made a proposal for intrusive technical enhancement in the Internet of ings, mobile computing, among other things. In parallel with enhancing the quality of service provided by technology, healthcare services are also being enhanced. Because of this virus, people have been affected in a variety of geographical locations. As a result, a wide range of neurological symptoms and infections were discovered and documented. As a result, care should be made to avoid contracting Zika since it is very contagious among pregnant women, babies, and adults alike. e existence of the Zika virus in India was established by Sumit Bhardwaj and colleagues (2017) [15]. On the basis of the samples, four of the Zika-infected patients were identified during the screening process. As a result, it may become a significant concern in the future. When compared to Chikungunya, Zika is expected to become a major concern in the near future.
e Zika epidemic, as represented by Petersen et al (2016) [16], as well as the infection among pregnant women and newborns, was addressed. Children with microcephaly are at a higher risk of developing neurological problems. A number of syndromes associated with neurological disorders were explored. is procedure has a high degree of accuracy, and the backpropagation approach was employed to anticipate the most accurate outcome possible. Kadri et al (2016) [17] have recommended that the Zika virus be designated a worldwide public health emergency. As a result, nations with a low risk of contracting the Zika virus were supplied with information pictorial expertise. It is necessary to take preventative measures.
It was suggested by Orellana et al. (2010) [18] that Google Docs have a new transparent user layer, which is implemented in Firefox, that encodes the record before collecting it in the Google server, making it impossible to access a data without having the correct password. e user is given the opportunity to choose the algorithms that will be used to encode the information. Once an algorithm has been chosen, the data is converted into cypher text and stored on Google's servers.
e results demonstrate that blowfish performs much better when the key size is decreased, and the speed is increased. According to Singh et al. (2012) [19], elliptical curve cryptography is a fantastic approach of encryption technology. In wireless communication, the security layer must be implemented as robustly as possible, and for this reason, the framework is created using the ECC technique, which encrypts data in a powerful manner. It facilitates communication by using a multiagent system. e ECC has been used for the communication of wireless apps as well as certain web-based applications.
Singh et al. (2016) [20] have suggested a hybrid framework that incorporates both symmetric and asymmetric methods. When ECC and Blowfish are used together, the security level is significantly increased. e crypto and the CSP agent are both accessible for the purpose of distributing the key to the user. Even though CSP is not aware of it, the crypto agent is encrypting the data on its behalf.
Only the authorized individual will be able to decipher the information. User's private key is shared with CSP, and CSP's public key is shared with the user. As a result, not even the CSP was able to decode the data. CA is in charge of providing these services. Nathiya et al. (2019) [21] provided an explanation of the many network assaults that might occur when a packet is being sent. e intrusion detection approach is explained here and divided into four stages, with the goal of detecting attacks on cloud data storage as the data is being transferred. When an attack is introduced into the network, it is detected using false alarm methods, and the suggested algorithm HINDS is used to identify the attackers who have done so. When it comes to cloud storage security, Gampala et al. (2012) [22] introduced the ECC method as well as a digital signature mechanism for protecting information. With the use of the ECC technique, the security of the data is enhanced while the key size is reduced.
Jana et al. [23], in this case, used the hybrid approach, in which the downloading and uploading of data are completed at both the sender and recipient ends of the transmission. If any data is lost, it is impossible for both parties to decode it, which increases the overall security of the system. e multilayer algorithm is secure on both the user's and the server's end. Mohamed et al. (2015) [24] advocated that a framework be developed and validated for cloud environments in order to ensure that they are safe on both the client and server sides. When encoding or decoding data for connection setup, the Diffie Hellman cryptography is utilized in conjunction with ECC cryptography, and the integrity of the data is confirmed using the MD5 algorithm when updating the data. e suggested solution is a mix of ECC and SHA in order to provide a better outcome in data security. From the publications mentioned above, we can conclude that ECC is an improved asymmetric approach with a smaller key size. For the purpose of ensuring data integrity and authorization, a variety of encryption techniques and approaches were used. A brief overview of cloud computing is provided in the next chapter, followed by a detailed discussion of data security challenges and solutions for safeguarding data stored in the cloud.
Proposed Zika Virus Prediction Using MLP Classifier
Listed below is a description of each of the four components of the proposed system. e data gathering, fog layer, cloud layer, and, finally, the process are all in constant contact with the individuals engaged in the provision of healthcare. A framework has been designed for the identification of the Zika virus as well as the fortification of data in order to combat the virus. Cloud computing is used in this instance, because it is critical in the fact that it is capable of handling massive volumes of data from sensors and portable devices that have been mixed up. In order to connect end users to large-scale cloud services for storing and processing data, as well as for offering application services, it is essential to have a secure connection. Figure 3 depicts the whole architecture of the predicted model for predicting the Zika virus, which includes all of its components.
Input Data Generation.
Synthetic data is information that has been manufactured artificially rather than via realworld data collection. In algorithmic testing, it is used to evaluate a dataset of operational data or a dataset from a production environment. Additionally, it may be used to the validation of mathematical codes and, to a greater extent, to the training of machine learning forms. It is used in the modelling of a scenario or the calculation of a theoretical value, among other things. It delivers an unexpected outcome and if the findings are found to be unsuitable, it gives the required cures or answers to the problem. e actual and confidential data for the basic test are replaced with synthetic data created by the test engine. It is occasionally necessary to produce synthetic data in order to safeguard the confidentiality of the concerned data. We are utilizing synthetic data to test all of the real-time events that occur. We were unable to get real-time data because we needed to protect the anonymity of the patients who had been afflicted with the Zika virus; therefore, we developed synthetic data. Our technique is an early prediction system, and we are able to forecast whether a patient is infected or not based just on the symptoms that they exhibit. Even with that, we were unable to get real-time information. It is similar to a real-time dataset in that we create the information of the patients, such as how many days he has been sick with fever and whether he is travelling to a high-risk location. e symptoms of the Zika virus are thoroughly examined in order to develop the suggested technique for conducting the tests. Because it is difficult to get patient information in India, we want to employ synthetic data, which will allow us to test all of the possible combinations based on our assumptions. In this case, the diagnosis of infection is made based on seven separate symptoms. e potential combinations of Zika virus symptoms samples are included in the following. en, using synthetic data, the location of mosquito breeding sites and the location of mosquito dense sites are determined. As a result, the mapping is done randomly with respect to the area, symptoms, and the user. erefore, it is simple to distinguish between infected and uninfected patients, as well as the preventative actions that should be implemented by government agencies and hospital personnel [25].
Input Data Tuning Layer.
e suggested model includes a data component that comprises the specifics of the user's health data, environmental data, and location data, among other things. It is possible to get information on environmental conditions such as humidity, carbon dioxide level, and meteorological conditions by using environmental data. Because it is the primary cause of mosquito reproduction, it should be stressed repeatedly. Knowing well that our climatic conditions are ideal for mosquito reproduction, there is no need to watch everything every second. It is instead highlighted that the general climatic situation is favorable. e next step is to collect user health information. To do so, each user must register with the system using the mobile application that is available. Each user was assigned ID, which was generated. e indications of the Zika virus are obtained from users on a regular basis and reported to the Journal of Healthcare Engineering 5 authorities.
e symptoms are responded in a yes or no fashion, according to the yes or no pattern. Not only are the symptoms recorded, but also is the user's health-related information. ese kinds of information are gathered with the assistance of the sensor that is made available to the user [26]. e acquired data is protected using some kind of encryption technology, which is done in order of priority. User input is required for the symptoms of the Zika virus to be collected over a period of time, and the data are submitted as a yes or no pattern. e data collected by the environmental sensor includes information on mosquito breeding and population density. e sensor collects data in real time and uses it to pinpoint the location of the breeding grounds. In addition, carbon dioxide levels are regularly measured and studied in order to determine the climatic state of a certain site. Every piece of information pertaining to the environment is gathered in this section and saved in the fog computing servers. e data in the location part are connected to the data in the preceding section in that it displays the ideal site where there is a probability of mosquito density, and education opportunities are in height when compared to the climatic conditions. Table 1 represents the attributes of the input used in the proposed work. Table 2 gives the prediction rate based on the symptoms, and Table 3 represents prediction rate based on the environmental criteria.
Input Fog Computing Layer.
Fog computing is a distributed computing environment that is used to handle large amounts of data in real time. It works as a platform between the cloud service provider and the user, allowing for largescale data storage in the cloud to be accomplished. It is necessary to employ fog computing in order to reduce processing and performance time [27]. When it gathers all of the sensor data and stores it in a fog server, only the data that has been determined as necessary is evaluated and sent to the cloud for further processing. e processing speed and time are shortened as a result [28]. Because of the fog, the latency range, bandwidth, and everything else has risen. So, it serves as an independent server for data processing and archiving purposes. In the proposed work, fog is tasked with the responsibility of gathering all sensitive information from the user and determining if the symptoms match those of the Journal of Healthcare Engineering user. As a consequence, this sort of result is solely sent to the cloud. Fog is a first-level environment in which sensitive data acquired from the sensor must be kept in huge quantities due to the nature of the environment. As a result, there is a need to analyze the data and ensure that they correspond to the given one. is is followed by sending the data to the cloud, where the final data categorization and subsequent processing will take place.
Data
Security. e acquired data are safeguarded via the use of a secret sharing method, in which the data are divided into little pieces and prioritizing is given for the various tiers. e level of protection provided for a piece of data is determined by its sensitivity. e protection of user personal data, which should be kept safe from the hands of unauthorized individuals, is given the highest priority. In the second level, there is information about the environment, and the information is saved on multiple servers. e third item is the least important since it should include information on symptoms as well as a warning to the individual to take the required precautions to prevent contracting the Zika virus. e hospitals in the government-run healthcare system provide the essential guidance to the public. Figure 4 represents the overall proposed system.
Classification Using Multilayer Perceptron with Probability Optimization.
e probabilistic model-based classifier is based on the mean and variance measures of the produced classes, which are a total of 124 classes in this case (62 classes of the NN and 62 classes of DT). e mean and variance measurements are used to calculate the exterior probability value of the approved picture, which is then expressed as a percentage. In order to avoid overfitting, we estimate the exterior probability of the 73 classes that are important to both the NN and the DT classifiers independently. e resulting probability of the NN classifier is multiplied by the resultant probability of the DT classifier to generate the new probability value for the classifier. e most exact recognition of the character picture is obtained in line with the greatest value of the new posterior probability distribution. e procedures involved in the modelling of the probabilistic classifier for the recognition of character are explored in further detail in the following sections.
In the probability calculation using (1), In this case, o is the output class label, and the class label of the NN classifier is indicated as Do, whereas the class label of the DT classifier is written as E. It is possible to define the mean and variance metrics associated with the NN and DT classifiers in the form of equations (2) and (3): e mean value is denoted by the letter N0, and the variance value is denoted by the letter W 0 in this equation. According to the following equation, the posterior probability formula for the NN classifier may be found: NN class label mean value is represented as M (C s), and NN class label variance measure is represented as W (D s) in this example.
According to the following equation, the posterior probability formula for the DT classifier may be found: e mean value of the DT class label is denoted by the letter N (E o), while the variance measure of the DT class label is denoted by the letter W (E o).
NN classifiers have posterior probabilities that are higher than the posterior probability of the DT classifiers, which is represented by the formula for maximum posterior probability (max posterior probability) (5). e probabilistic model, as shown in equation (6), is used to identify the input character picture with the highest likelihood of being recognized: On the basis of the greatest measure of posterior probability, the identification of a character is conducted. In line with the acceptance criteria, the optical character picture with the highest likelihood of being identified properly is selected from the class. Figure 5 represents the Multilayer Perceptron Neural Network. Each of the NN and DT classifiers has its own mean value and variance value, which are computed independently in the algorithm. e posterior probability value is then computed using the method given above, which takes into account the mean and variance values that have been acquired. e posterior probability values of the NN and DT classifiers are merged to generate a single probability value, which is then blended with the other posterior probability values. Final recognition is achieved by using the greatest probability measure to determine whether or not the input character is recognized.
For identifying the instances, the multilayer perceptron classifier employs the backpropagation approach, which is described in more detail below. e network was built via the MLP algorithm, which was then analyzed and tweaked throughout the training phase. Except for the numeric classes, the network is composed entirely of sigmoid nodes Journal of Healthcare Engineering for the threshold value. e backpropagation technique is required for the elements in order to get a complicated output result. It operates on the inputs in the network using the feedforward mechanism. When performing the iterative method, a set of weights is used to forecast the class label for each iteration. e feedforward algorithm contains a single input layer, one or more hidden layers, and eventually a single output layer, as shown in the diagram. During the classification process, if a mistake happens, the backpropagation approach is used to enhance the accuracy of classification while simultaneously decreasing the amount of input values and the time required for training. layer l � y kl z l + θ 2 , It is computed by feeding the production spinal into the hidden film, which then processes the contribution over using the feedforwarding method, which results in a sigmoid function.
Weight and Objective Function of Classifier.
e output node k has an activation value of and an anticipated target value of tk for node k, and the change among the predictable and authentic target values is represented as and node k is defined as e network is recognized based on the pace at which it is learning. If it is set too low, network learning will be very sluggish, and if it is set too high, the network will oscillate between minimum and maximum values. Altering the knowledge rate from a big to a minor value during the backpropagation technique has a number of benefits. Assume that a network begins with weights that are far from the set of optimal weights and that it receives rapid training initially. When the learning rate falls throughout the course of learning, it is claimed that the process has reached a minimal optimum point. Because overshooting is less likely when the learning processes slow down. Figure 6 shows the hybrid structure of AES with DES. e suggested hybrid technique includes two levels of encryption, or two tiers of security, and it is a framework that supports both symmetric encryption and asymmetric encryption functions at the same time.
e two algorithms ECC and AES are a mix of hybrid methods for safeguarding data that are stored in the Common Security Policy (CSP). It is necessary to employ the AES symmetric method at the first level of encryption since the key is known to the user. e output of the first level of encryption is then encrypted a second time using the ECC algorithm, which is an asymmetric technique of encrypting data. In this case, the AES key is shared with both the user and the CSP in the first level, whereas in END USER Cloud Database Encryption using AES Level1 Encryption/Decryption using ECC Level 2 Data Collection the second level, the public key is used by the CSP and the private key is generated and shared with the user alone in the third level of the encryption scheme. e research of hybrid encryption method with two levels of encryption technology that went awry is shown in the illustration below. Each user will be provided with a set of two keys.
Key z � AESj, In this case, AESi represents the ith user of the symmetric key, and it is only known to the user. ECCpri displays the ith user's asymmetric private key, which is solely used by that particular user and no one else. e asymmetric public key user I is represented by the ECCpui, and it is known to the CSP. When a user saves data in the cloud, the cloud service provider (CSP) provides a set of keys that are used to encrypt the data. First, the data is encrypted using a key that is only known to the user and no one else. Before it is stored in the cloud storage, the cypher text generated as a consequence of the calculation is encrypted once again. In the next step, the completed encrypted text is saved in cloud storage [20][21][22]. Even the CSP is unaware of the key required to decode the data. In this case, the user will only be responsible for the first level of decryption. e key for second-level decryption is known to the user in this case. is process of delivering service to both the user and the CSP is carried out with the assistance of a third-party agent known as the Crypto Provider.
Provider of cryptographic services: client-side encryption and decryption are handled by the client-side cryptographic processor (CP) [29]. It is ready after the set of users' keys has been collected. Whenever a CSP registers a new user, the CP makes the ECCpui available to the CSP. If a user wishes to keep data in cloud storage, he or she must first encrypt the data using the AESi algorithm and then using the ECCpui algorithm.
Pseudocode of Proposed work.
Experimental Results
In order to make predictions about the Zika virus, the Matlab version is employed. It is a data mining software package that comprises a variety of machine learning algorithms for processing large amounts of data. It analyzes data in the ARFF file format by default, with CSV as a supported file format for Weka as a second option [30]. e explorer option is utilized for both the training set data and the testing dataset. e synthetic data was constructed using all feasible combinations of our assumptions and assumptions from other sources. Because it is regarded to be an uncommon illness in India, it is difficult to gather information about it. Figure 7 represents the encrypted data. e MATLAB tool makes use of the dataset that was supplied in the previous section. It is necessary to utilize all of the possible combinations of the symptoms. en, there are approximately 500 instances and 15 characteristics added to the mix. e individual is classed as infected or uninfected based on the information they provide. In this case, the MLP classifier is utilized to categorize the instances into groups. e MLP classifier has a 97 percent accuracy rate, which is excellent. e occurrences are classed as infected or uninfected based on the true positive and false positive rates for each case. e dataset under this study is publicly available. e Journal of Healthcare Engineering algorithm for determining the sensitivity and specificity of the cases is used to determine these characteristics.
(11) Figure 8 represents the decrypted data. e threshold value for each class probability ranges from one class to the next. As a result, a classifier that yields an MLP threshold is described. It is indicated on the X-axis that the example dimensions are supplied and on the Y-axis that the true positive rate is presented. Figure 9 represents the classification accuracy. Let us say that the sensitivity, specificity, and accuracy of the aforementioned equations are determined. It is determined whether or not the diagnostic test is accurate based on these data. Following that, the specificity of the test indicates the usual diagnostic situation, which is a negative result. Table 4 provides an overview of the comparative examination of several classification algorithms.
While accuracy is defined as the ability to accurately detect the genuine result of the whole population, it is concerned with the real severity of a test diagnostic condition. A tall compassion test is designed to capture all of the potential positive situations that might occur during a test. As a result, sensitivity is utilized in the screening of diseases. When compared to other mosquito-borne diseases, the symptoms are mild to moderate.
ere is an incubation period of around 5 days for the virus [32]. If the symptoms linger for more than 7days, the individual should consult for an early prediction approach that employs our suggested method. If they are confirmed to be contaminated, they must have an RNA test in which their RNA is thoroughly examined and analyzed.
Conclusion
To determine whether or not the user is infected, the suggested system is utilized to gather data from the user and, depending on the symptoms, diagnose the user using the MLPNN algorithm for improved accuracy. e common characteristics of environmental factors are ready to be used to create a risky environment that is susceptible to infection. Once infections in patients have been found, the information pertaining to those diseases must be safeguarded in cloud storage. In order to safeguard the data stored in cloud storage, a double layer encryption approach using a hybrid crypto algorithm is used. e data is encrypted using the ECC and AES encryption methods, and even the third-party supplier has little knowledge of the contents of the encrypted data. Our suggested approach makes use of categorization to provide better results with an accuracy of 98 percent, and it assists the government's primary healthcare department in controlling the number of mosquitoes that are reproducing extremely successfully in the area. We are able to provide a better solution for the Zika virus infection when the healthcare industry and the government work together to implement our technology. e increased accuracy achieved in this research will be adopted, which means that it will assist physicians in the accurate prediction of Zika virus and the reduction of microcephaly illness in newborns and fetuses, among other things. Even premature delivery was averted to some extent. Patients in India must be monitored, and if any of the symptoms listed above are observed in any of them, the prediction system will take care of the prediction, as well as data protection, which is extremely secure, and the healthcare sectors do not essential to be concerned about the information being kept in fog storage because of the HEA used in the proposed model, which stands for Health Equity Act. Using the prediction system for anything beneficial to human civilization is the long-term goal of the project. e RNA test is the second step in the Zika virus prediction process. It is intended to concentrate on prevention and preventing the spread of the Zika virus. When it comes to cloud computing, new technologies are being developed on a daily basis, and data breaches are also occurring, so we must be prepared to deal with both the ups and downs. Because of this, research should be conducted for the benefit of society as a whole.
Data Availability e data that support the findings of this study are available from the corresponding author upon request. | 9,483.6 | 2022-01-12T00:00:00.000 | [
"Medicine",
"Computer Science"
] |
Transit OD Generation Based on Information Technology
Due to development in computer and information technology, data access and collection have becoming more and more convenient. In many cities’ transit system, transit vehicle GPS data and passenger IC card data could be provided. This paper focuses on the method that uses the passenger IC card data (only record once per trip) and transit vehicle GPS data to generate the transit OD matrix. After analyzing the characteristic of transit trips, the author gives the definition of continuity for a transit trip. Based on this definition, this paper then presents a search method to generate the transit OD matrix. The validity of this method has been tested in the modeling process for Zhengzhou city’s comprehensive transportation system. At the end, it is hoped that this research may give a useful lesson for other cities’ transportation modeling practice.
Introduction
How to use public transportation IC card data has done in many literatures [1][2][3]. Previous studies focused on the use of public transportation IC card data to get the demand on-site car; in addition, considering certain characteristics of the bus passenger flow to estimate approximate number of passengers alighting on stops. The gravity model is adopted to generate OD between bus stops with constrains on the number of passengers boarding and alighting. More complex approach was to develop a bi-level mathematical programming model, the upper-level issue is responding to least squares model, the lower-level is the problem of public transportation network equilibrium assignment [4]. Due to lack of bus GPS data, these methods gain the information from the bus IC card and application of a number of vehicles on the site, meanwhile applied to the bus OD estimation process.
With the development of information technology in recent years, more and more cities begin to cellect the bus GPS data. Recently some studies pay attention to explore the basis of the bus GPS and bus IC card data fusion to generate a bus OD [5]. However, the assumptions lead to that the method can be applied only in a single bus line. How to effectively utilize large amounts of data in computers and IT development is still a very interesting research direction.
This paper focused on how to use public transportation IC card data (only the case of credit card on the bus) and bus GPS data to generate a bus OD. Different from previous studies, In this study, not only presenting a model to estimate the bus OD, but searching alighting stops is conducted by the bus travel characteristics of the credit card records with meeting certain conditions to reflect the public transport of bus travel the inherent law of the OD. Bus OD generated from the existing methods are mostly corresponding to a certain extent. The new methed is to use the concept of continuity of bus travel "to reproduce" the number of on-site car and bus OD generated by the new method in the overall evaluation on the bus travel through the survey data. As the hypothesis conditions, most applications on a virtual network are to verify its effectiveness. This method has been successfully used in the traffic model of zhengzhou, and generated the article dynamic bus OD with more than ninety thousand of the GPS data and millions of bus card record.
Bus GPS and IC card data
This study is based on two basic database of bus GPS and bus IC, data record provided by the Zhengzhou Public Transport Company (May 18, 2010 all-day bus operators).
Bus GPS data can describe the detail time of each bus arriving in the stations etc. The data tables can provide the information comprising of company names, bus lines, the number of vehicles, operation directions, the station number, station name, arrival time. There are a total of 90 million records in a all-day data (per vehicle to reach each site for a record). Public transportation IC card data can record detailed information of each riding on transit vehicle, and the information tables are consisted of six fields: the card number, card type, using card date, using card time, using bus line, the vehicle number. There are a total of 129 million records in a all-day (each passenger per ride card information for a record).
Tab.2 Example of transit IC data
The analysis of bus OD needs to match the bus GPS and IC card data on line, additionally using vehicles and taking time. After the inspection for matching, a total of 88 million bus GPS data and 118 million bus IC card data can be in exact match. The rest of the data is not directly used due to various reasons caused by no match between bus GPS or IC card.
Characters of resident's bus travel
The analysis of survey data on the bus characteristics of residents are from Zhengzhou City, June 2010 household travel characteristics survey. The principles of extraction of data sources are: (1) taking a person as a unit, the survey object (personal) has one trip at least in a day; (2) the survey object is completed in survey space, namely the first trip is from his/her home and the last trip destination is also his/her home. According to the above principles, there are a total of 8547 subjects extracted from survey data. Since the families surveyed are subjected to uniform distribution in space, we can approximately consider that the survey objects are similarly subjected to uniform distribution in space.
(1) The distribution of frequency of bus usage Only 11.32% travelers have a bus trip within public transport survey in one day, and 68.71% travelers take two bus trips in one day. Analysis of residents of bus travel, the bus mode chosen by traveler is dominant role with 70.98% trips in all.
Tab.3 Travel mode structure for persons with transit trips (2) The continuity of bus travel The continuous characteristics with two bus trips in one day are: the next travel origin is generally the destination of this trip, namely the other transit modes are rarely used by passengers between two trips when to choose travel modes. Therefore, we can define the continuity of a bus trip: when travelers have many trips in a day, if the destinations of their next trip were the origins of this trip, this bus trip can be called the continuity trip. For the last trip in one day, if the destination of trip is the origin of this trip, the last trip is also the continuity trip.
According to the definition above, if there is only a bus trip in one day, this bus trip is non-continuous. The continuity trip is a proportion 89.79% with the statistics source. If we take out the trip with one travel process, the continuity trips can be 94.7%. In other words, the most bus trips are the continuity trips. Analyzing the non-continuity of bus travel for travel using public transportation more than once, about 60% are with non-continuity trips. This indicates that the destination is closer to the travel origin of the non-continuity trip. In addition, taxi and car (take) modes chosen are often accompanied by bus non-continuity trips. This shows that there are no convenient bus lines until the next time. Figure 2.
(2) Algorithm description Firstly, the records of same number cards should be extracted(including each card on the train line, direction, and site information) and sorted by charge time. Search every get-off records, and the search method: the next records site location as the target, the line get-on this time as the search direction, on-line follow-up site for the search site collection. When the shortest distance of a station in the follow-up site of this on the is found, the site is recorded as get-off site. When the card records the the last one, the target location was the location of the first sites charged.
52
Advances in Intelligent Transportation System and Technology Obviously, the search algorithm on every card records (including the last credit card records) can be found in the corresponding get off site. But not all the bus travel are continuous, so there will be searches that get-off site and get-on site is the same(i = 0). In this case, the OD between the card records stations could be regarded as invalid.
(3) Algorithm evaluation The search algorithm showed good adaptability to be able to handle a variety of complex situations. Firstly, for the continuity of bus travel, the search algorithm can get accurate get-off site, typical bus travels cases (A) and (B) as shown in Figure 1. According to the proportion of the continuity of bus travel in the travel survey of households, the accuracy of the search algorithm can achieve about 95%. Secondly, for some of the non-continuity of bus travel, the search algorithm can be accurate to get off site. Figure 1, a typical bus travel (C) of the case, for example, when the passengers between the two bus trip mixed up with other ways, the search algorithm is still applicable. According to the characteristics of non-continuity of bus travel, the search algorithm could find a relatively accurate or relatively close method to get off site most of the non-continuity of bus travel. Finally, for some non-continuity of bus travel, get-off sites found through the search algorithm is invalid (as the situation that the get-on site and the get -off site are the same one). When this records are regarded as invalid, the bus station between the OD can get higher accuracy by the search algorithm.
The generation of the bus OD
The search algorithm can get most of the card records of the OD between bus stations, but it can't deal with only one credit card record and some non-continuous bus travel card records. As the large number data of the OD between bus stations get a high accuracy, OD between bus stations would be got through by loft on-site car (credit card) number for the whole sample.
Card records corresponding to get off site can also match its corresponding off time by bus GPS data. For many of the same card number records, the OD between bus stations could be transformed to the bus OD by the judgment of the transfer site. Because of bus GPS data, the transfer site definitions can be very precise, such as the combination of the two stations (site on the car and get off the site) the spatial distance, vehicular arrival and the time interval for the comprehensive judgment. As normal circumstances, the level peaks on the get off at the time of a passenger in a site before the next time in less than 25 minutes can be considered the site (ie, get off the site and the site on the car) for the transfer points. Peak within 30 minutes can be considered as a transfer site (due to the peak is very crowded, and passengers may not be squeezed into the vehicles to take the line first to reach). When a site has been identified as a transfer site, the bus trip and the next one will be combined (to retain the previous site on the train and get off of after times site). All card records does not exist to meet the conditions of the transfer site, the OD between bus stations was converted into a bus OD. Different from bus station OD, bus OD expression is public transportation amount of spatial relations, rather than public transit passenger volume of spatial relations. Zhengzhou bus OD generated results in Table 5. In Table 5 with a record on the train line and get-off the line is inconsistent with the description of bus travel after the transfer. When getting on line and getting off line consistent (right now travel sequence time is also consistent), there is no change to bus travel. It can be got from Table 5, the bus OD obtained by the new method is dynamic. Figure 3 is a bus company statistics IC card credit card traffic generated bus OD allocation in the model line statistical results of the comparison. It can be seen, each line card statistics of traffic and the bus company forecast results have a very high goodness of fit (0.97), which also illustrates the bus OD generated by the above method has a very high accuracy.
Fig3. Comparison of forecasted and reported line ridership
It should be noted that the above method is applicable to passengers in the IC card bus travel OD generated. A considerable portion of bus passenger is coin-operated ride the city bus. The features of bus travel card passengers (usually urban resident population as the main body) and the coin passengers (floating population) are significant difference. Therefore, the generated based on the IC card data bus OD should not loft to generate the overall bus OD.
In the case of the lack of coin-operated passenger, bus travel data can be assumed to constitute a card passengers urban resident population, and using the above-generated credit card passenger bus trip OD, bus service level measurements and land use data (population, employment, etc.) to calibrate
54
Advances in Intelligent Transportation System and Technology passenger transport model [6] . The flow of population distribution can be used to predict on the basis of the calibration of the passenger traffic model and the coin passenger bus travel OD (or floating population). Credit card passengers and coin-operated passenger bus trip OD are merged to generate the overall bus OD. Figure 4 shows the results for the overall bus OD in Zhengzhou city, assigned bus network.
Fig4. Transit assignment results in AM peak period for Zhengzhou city Obviously, the exact extent of the coin passenger bus travel OD have a certain impact on the quality of the overall transit trip OD, but the bus trip data is difficult to get through the survey. The future can be combined with sections of cross-section survey of passenger transport, back stepping.
Conclusions
This paper is based on the analysis of bus travel characteristics of the concept of continuity of bus travel. Based on the concept of continuity and bus system (bus GPS and IC card data), this paper constructs a search algorithm to generate the OD between the bus station, which to generate other types of bus OD. Different from previous studies is that the proposed method can be easily applied to the reality of the bus system; bus OD generated can reproduce the inherent law of bus travel. The bus OD generated should be noted as to the bus GPS and IC card data. It can accurately reflect the general urban population (or credit card passengers) bus travel distribution, and the floating population (or coin passengers) bus trip distribution in future is still needs further research. | 3,438.4 | 2012-07-01T00:00:00.000 | [
"Computer Science"
] |
Experimental reservoir computing using VCSEL polarization dynamics
: We realize an experimental setup of a time-delay reservoir using a VCSEL with optical feedback and optical injection. The VCSEL is operated in the injection-locking regime. This allows us to solve different information processing tasks, such as chaotic time-series prediction with a NMSE of 1 . 6 × 10 − 2 and nonlinear channel equalization with a SER of 1 . 5 × 10 − 2 , improving state-of-the-art performance. We also demonstrate experimentally, through a careful statistical analysis, the impact of the VCSEL polarization dynamics on the performance of our architecture. More specifically, we confirm recent theoretical prediction stating that a polarization rotated feedback allows for the enhancement of the calculation performance compared to an isotropic feedback.
edge-emitting lasers, and permits higher computation speed thanks to faster VCSEL intern dynamics. Although we suggested recently an extension of the time-delay reservoir architecture using VCSEL polarization dynamics [15], the performance of this architecture has never been verified experimentally.
In this letter, we experimentally prove that reservoir computing indeed benefits from using the polarization dynamics of a VCSEL. This system has already been studied through simulations in our previous work [15]. To exhibit this result, we first consider the dynamics of the experimental system to find the best operating point. We then test our reservoir computer with two different benchmark tasks: chaotic time-series prediction and nonlinear channel equalization. We prove while testing the consistency of the result statistically that operating the reservoir with polarization rotated feedback improves the computing performance.
We consider the set-up shown in Fig. 1(a). The reservoir itself is composed of a VCSEL emitting at 1550 nm from Raycan and an optical feedback loop. Along the feedback loop, the light is going through an attenuator Keysight 81577A (att) which controls the feedback strength and a polarization controller (P.C.) which adjusts the polarization of the feedback and allows switching between two different configurations: isotropic or rotated feedback. The optical delay line is 39 ns-long, as fixed by the length of fiber brought by the packaging of each components in the optical feedback. This imposes the processing speed to be at 25.6 MHz. According to [15], the time separation θ between two nodes for this system has to be around 20 ps to provide the best performance. However due to the frequency limitation imposed by our oscilloscope and our modulator, we have chosen to set this value to θ = 100 ps. This corresponds to the highest frequency we can record with our oscilloscope, i.e. a Tektronix DPO 71604C 16 GHz bandwidth, and the fastest modulation speed of our Mach-Zehnder modulator. That leads to N = 390 virtual neurons spread along the delay-line.
The input of data is made optically through optical injection. The input layer comprises a continuous tunable laser Yanista Tunics T100S. Its polarization is adjusted in order to be aligned with the modulation axis of the modulator. Finally a last polarization controller (P.C.) allows controlling the polarization of the injection. All the input signals are numerically generated: the mask composed of 390 different values (as much as the number of nodes) is randomly generated, taking values in {-1; 1}, then this mask is modulated by the different input values. These signals are loaded in an Arbitrary Waveform Generator (AWG) Tektronix AWG 700002A and generated at 10 GSamples/s and sent to the RF port of the the modulator.
The output layer is composed of an EDFA-amplifier (ampl) and a photodiode (PD) Newport 1544-B 12 GHz bandwidth. The signal of the photodiode is recorded by the oscilloscope at 50 GSamples/s. Using an optical amplifier is mandatory as the power recorded from the VCSEL is lower than the detection threshold of our photodiode. The fiber splitter between the feedback loop and the output layer yields 90% of the total power for the detection. The recorded signals are post-processed using a computer. We first focus on the dynamical properties of this system. Figure 2 depicts different optical spectra. Figure 2(a) shows the one of the free running VCSEL. All figures are centered on the wavelength of the free-running emission of the VCSEL, which is 1552.88 nm. The bias current is set to 1.5 times the threshold current I th = 3 mA, which corresponds to 4.5 mA. The total output power of the VCSEL is then 166 µW. This current value is chosen as it is the one allowing the best discrimination of the different inputs in the response of the reservoir computer. This has been observed in simulations and it is therefore consistent with our experimental observations. The operating temperature is set to 21 • C. In this condition, the VCSEL emits in two linear polarization modes. The power ratio between the two polarization modes is 54.23 dB and their frequency difference is measured to be 16.4 GHz. Note that the suppression ratio is high, but similar to what we simulated in [15], and we still expect an impact of the polarization on the performance. The tuning of the polarization of the feedback loop is made with the lowest attenuation possible (10 dB). This allows exhibiting different dynamics depending on the polarization of the feedback as shown on Figs. 2(c) and 2(d). In these conditions, the VCSEL with rotated feedback exhibits several optical frequencies in both polarization modes ( Fig. 2(d)), compared to isotropic feedback, in which the VCSEL shows a cleaner optical spectrum (Fig. 2(c)). Therefore the polarization rotated optical feedback yields a richer polarization nonlinear dynamics when compared to the isotropic optical feedback. While applying injection, the VCSEL exhibits a much narrower peak in its dominant polarization mode ( Fig. 2(b)). Indeed the VCSEL is locked at the operating point [16]. The detuning between the master laser and the slave laser is 0.01 nm (-1.05 GHz). The emitted power of the depressed mode is also reduced.
For the operating point, the system is set at the edge of instabilities as it is the best state for a system to perform reservoir computing [17]. This specific point has also been confirmed for our system numerically in [15]. We increase the attenuation and the optical injection power until the system reaches the steady state. This state is reached while applying an attenuation of 18 dB in the optical feedback for both isotropic and rotated feedback configuration, and a mean optical injected power of 50 µW. Figure 2(b) shows the optical spectrum of the setup in such conditions.
In the following the reservoir is kept at this operating point. Figure 1 shows the input and output signals from the reservoir. Figure 1(b) displays a part of the signal used to feed the reservoir computer: nineteen masked input values. Each input value is maintained during τ = 39 ns. Figure 1(c) shows the response of the reservoir computer corresponding to the input values of Fig. 1(b). The mask allows keeping the reservoir in a transient state during the operation.
Following the procedure previously explained, we have first tested our reservoir with the Sante-Fe series prediction task [18]. This task aims at predicting the next time-step value of the Santa-Fe chaotic series, from the knowledge of the previous values. We asses the level of performance with the normalized mean square error (NMSE) defined as: where N is the number of samples, y(i) is the target signal,ỹ(i) is the estimated signal by the reservoir, and σ y is the variance of the original signal. This task has been chosen as it is a well-known task to test reservoir computers, and allows comparing the efficiency of our architecture to other experimental reservoir computer based on semiconductor lasers [14]. In our case, we used the total power of the optical signal as the state for each node, that means the sum of the power in the dominant and depressed polarization modes. 6000 samples are used to train our system, performed with a linear regression. The 2992 other samples are used for testing. In these conditions, we successfully reach a NMSE of 1.9 × 10 −2 with parallel feedback. Performance is slightly better with rotated feedback, with a NMSE of 1.6 × 10 −2 . Both are the mean results over 3 different training and testing. This is also an order of magnitude lower than the results obtained with other laser-based time delay reservoir computer, even with shorter training and testing sets used in ref. [14]. An example of prediction is given in Fig. 3: the predicted signal is really close to the target signal even for the lowest values, thus leading to a low relative error. As this task is a reconstruction of a chaotic series, the target value is almost continuous. Hence, this task is highly sensitive to the signal-to-noise ratio (SNR). Considering this fact, we suggest that the performance we reached is related to the SNR we can achieve experimentally and which is estimated at about 12 dB.
Since this reservoir computer is first thought to solve telecommunication problems, we have studied the channel equalization task more in depth [19]. This task aims at reconstructing a signal that has been distorted through a nonlinear communication channel. This original signal d(i) is built by drawing symbols randomly in {-3; -1; 1; 3}. The different inputs are first linearly combined as follows: This signal is then modified using a nonlinear function: The output values u(i) of the nonlinear channel are finally used to feed the reservoir computer, and infer the original inputs d(i). The performance on this task is measured through the symbol error rate (SER). This is the total number of errors committed while performing the task divided by the total number of symbols in the signal. Figure 4(a) presents an example of experimental signal reconstruction in the case of rotated feedback configuration. Only one symbol has been incorrectly reconstructed in the 83 th position. In that case, SER is 1.1 × 10 −2 As already mentioned, the SNR in the output layer of our system is low, around 12 dB. We therefore run new simulation based on the simulation framework used in [15] to have an insight of the role of this level of noise in the reservoir computer on its performance. These results are shown in Fig. 4(b). It shows the expected performance on nonlinear channel equalization task depending on the SNR of the readout layer, for the two different types of feedback under study. Simulating the reservoir computer with 12 dB of SNR in the output layer yields the SER that is achievable about 1.9 × 10 −2 with parallel feedback and 1.5 × 10 −2 with perpendicular feedback.
We then run the experimental measurement series for both isotropic and rotated feedback configuration. Figure 4(c) depicts the SER we obtained over the nine measurements we made for each configuration, and the mean SER. Each time, the training was made with 10000 samples, and the testing with 5400 samples, as previously using the total optical power as the state of one node. The mean SER is lower with perpendicular feedback (1.5 × 10 −2 ) as the one with parallel feedback (2 × 10 −2 ). Added to that, these results are almost identical to the one theoretically obtained. Considering the low SNR we are able to reach, these results have been statistically analyzed with a one-sided t-test with a significance level of α = 2.5% to compare the averaged SER obtained from series of SER measurement realized in the isotropic (µ SE R, I F ) and polarization-rotated feedback (µ SE R, RF ) configuration, respectively. The null hypothesis H 0 : µ SE R,I F = µ SE R, RF is tested against the alternative hypothesis H 1 : µ SE R, RF < µ SE R,I F . We first confirm the normality of the data using a Kolmogorov-Smirnov test [20] and then apply the t-test. We find that the statistics of interest computed from the data (and following a Student's distribution with 15 degrees of freedom) belongs to the rejection region of the test : (−∞, −2.131]. As a result, we reject H 0 in favor of H 1 with significance level α = 2.5%. This implies that we have strong statistical evidence in this low-SNR situation that the polarization rotated feedback allows smaller SER in the nonlinear channel equalization task. This strengthens the first insight we had from the chaotic time series prediction. The experiment conclusively shows the improved performance of the reservoir computer using polarization competition as theoretically predicted in [15]. Figure 4(b) also tells us the way to increase the performance further, i.e. by increasing the SNR. The low SNR is due to the noise added by the amplifier at the readout layer. This amplification is mandatory to detect the low power signal emitted by the VCSEL. A higher SNR can be achieved by either replacing the amplifier with a less noisy one or by increasing the laser output power.
To conclude, we presented in this article an experimental realization of a time-delay reservoir based on a VCSEL. The laser is used in injection locking mode to perform calculation. Using this particular dynamics, we were able to solve different tasks successfully : the chaos prediction task with a NMSE of 1.6 × 10 −2 and nonlinear channel equalization with a SER of 1.5 × 10 −2 , both error rates being below the state-of-the-art and at high bit rates. Moreover, we proved experimentally earlier theoretical predictions with large statistical significance : using rotated feedback instead of isotropic allows enhancing the computing performance of a physical reservoir. | 3,159.8 | 2019-06-18T00:00:00.000 | [
"Physics",
"Engineering"
] |
Triethyl Citrate (TEC) as a Dispersing Aid in Polylactic Acid/Chitin Nanocomposites Prepared via Liquid-Assisted Extrusion
The production of fully bio-based and biodegradable nanocomposites has gained attention during recent years due to environmental reasons; however, the production of these nanocomposites on the large-scale is challenging. Polylactic acid/chitin nanocrystal (PLA/ChNC) nanocomposites with triethyl citrate (TEC) at varied concentrations (2.5, 5.0, and 7.5 wt %) were prepared using liquid-assisted extrusion. The goal was to find the minimum amount of the TEC plasticizer needed to enhance the ChNC dispersion. The microscopy study showed that the dispersion and distribution of the ChNC into PLA improved with the increasing TEC content. Hence, the nanocomposite with the highest plasticizer content (7.5 wt %) showed the highest optical transparency and improved thermal and mechanical properties compared with its counterpart without the ChNC. Gel permeation chromatography confirmed that the water and ethanol used during the extrusion did not degrade PLA. Further, Fourier transform infrared spectroscopy showed improved interaction between PLA and ChNC through hydrogen bonding when TEC was added. All results confirmed that the plasticizer plays an important role as a dispersing aid in the processing of PLA/ChNC nanocomposites.
Introduction
Polylactic acid (PLA) is an attractive biopolymer for packaging and biomedical applications because of its biodegradability, non-toxicity, good mechanical properties, high optical transparency, and its commercial availability. However, PLA is brittle, and it exhibits low thermal stability, low melt strength, moderate barrier properties, and a slow crystallization rate. It is, therefore, necessary to modify the PLA to improve these properties to make PLA competitive among the common polymers used in industry [1][2][3]. PLA has been mixed with plasticizers [4], polymers [5], layered silicates [6], carbonaceous nanomaterials [7], cellulose [8], chitin [9], or a combination of these materials resulting in hybrid composites [10].
The development of nanocomposites based on PLA and chitin can be a good approach to improve the properties of PLA and to produce fully bio-based and biodegradable materials. Chitin nanofibers and nanocrystals have been recently used as additives to enhance thermal and mechanical properties of extrusion and that the plasticizer content should be at least 7.5 wt % to achieve well-dispersed and distributed nanocrystals.
Materials
Polylactic acid (PLA) (Ingeo 4043D grade) from NatureWorks LLC (Minnetonka, MN, USA) in pellet form was used as the matrix. Chitin powder from yellow lobster shell waste, purified at Pontifical Catholic University of Chile following the process reported in our earlier study [17], was used as the starting material for isolation of chitin nanocrystals (ChNC). These nanocrystals were used as to reinforce the PLA with and without the addition of a plasticizer. Liquid triethyl citrate (TEC) with a Mw of 276.3 g/mol (≥99% Alfa Aesar GmbH & Co KG, Karlsruhe, Germany) and ethanol (99.5%) was purchased from Solveco (Stockholm, Sweden). TEC was used to enhance the ChNC dispersion in the PLA matrix, and ethanol was used as a solvent for TEC, since it is partially soluble in water and to control the flowability of the suspensions for the liquid feeding. In addition, plasticizer and ethanol were the liquid media for feeding ChNC into the extruder.
Preparation of Chitin Nanocrystals and Suspensions for Liquid Feeding
Chitin nanocrystals (ChNC) were isolated via the acid-hydrolysis treatment according to the procedure reported earlier by Salaberria et al. [14]. Briefly, the chitin flakes were hydrolyzed with 3 M HCl Panreac (Barcelona, Spain) at 100 ± 5 • C under stirring for 90 min. After hydrolysis, the suspension was diluted with distilled water, washed via centrifugation and transferred to dialysis membranes for 3 days. Finally, the suspension was subjected to ultrasonic treatment for 10 min to disintegrate the remaining larger particles and then vacuum filtered using a polyamide filter Sartorious Biolab Products (Göttingen, Germany) with a 0.2 µm pore size to obtain a ChNC gel with a solid content of 19.5 wt %. Figure 1a shows an optical microscopy image of well-dispersed ChNC in water and a photograph of chitin nanocrystals displaying flow birefringence due to good dispersion. The AFM image in Figure 1b displays the typical rod-shaped ChNC with diameters in the range of 2-24 nm, which are shown as height distribution in Figure 1c, and with lengths in the range of 114-831 nm, which are shown as length distribution in Figure 1d. The width and length were measured using the Nanoscope V software Veeco (Santa Barbara, CA, USA) and the "FibreApp" (Zurich, Switzerland) respectively.
Materials
Polylactic acid (PLA) (Ingeo 4043D grade) from NatureWorks LLC (Minnetonka, MN, USA) in pellet form was used as the matrix. Chitin powder from yellow lobster shell waste, purified at Pontifical Catholic University of Chile following the process reported in our earlier study [17], was used as the starting material for isolation of chitin nanocrystals (ChNC). These nanocrystals were used as to reinforce the PLA with and without the addition of a plasticizer. Liquid triethyl citrate (TEC) with a Mw of 276.3 g/mol (≥99% Alfa Aesar GmbH & Co KG, Karlsruhe, Germany) and ethanol (99.5%) was purchased from Solveco (Stockholm, Sweden). TEC was used to enhance the ChNC dispersion in the PLA matrix, and ethanol was used as a solvent for TEC, since it is partially soluble in water and to control the flowability of the suspensions for the liquid feeding. In addition, plasticizer and ethanol were the liquid media for feeding ChNC into the extruder.
Preparation of Chitin Nanocrystals and Suspensions for Liquid Feeding
Chitin nanocrystals (ChNC) were isolated via the acid-hydrolysis treatment according to the procedure reported earlier by Salaberria et al. [14]. Briefly, the chitin flakes were hydrolyzed with 3 M HCl Panreac (Barcelona, Spain) at 100 ± 5 °C under stirring for 90 min. After hydrolysis, the suspension was diluted with distilled water, washed via centrifugation and transferred to dialysis membranes for 3 days. Finally, the suspension was subjected to ultrasonic treatment for 10 min to disintegrate the remaining larger particles and then vacuum filtered using a polyamide filter Sartorious Biolab Products (Göttingen, Germany) with a 0.2 µm pore size to obtain a ChNC gel with a solid content of 19.5 wt %. Figure 1a shows an optical microscopy image of well-dispersed ChNC in water and a photograph of chitin nanocrystals displaying flow birefringence due to good dispersion. The AFM image in Figure 1b displays the typical rod-shaped ChNC with diameters in the range of 2-24 nm, which are shown as height distribution in Figure 1c, and with lengths in the range of 114-831 nm, which are shown as length distribution in Figure 1d. The width and length were measured using the Nanoscope V software Veeco (Santa Barbara, CA, USA) and the "FibreApp" (Zurich, Switzerland) respectively. To feed the nanocrystals in liquid form, suspensions containing ChNC in water, TEC plasticizer and ethanol were prepared as follows: ChNC gel in water (19.5 wt %) was pre-dispersed in ethanol at a ratio of 1:5 water to ethanol for 2 h using magnetic stirring, and then mixed with TEC for 2 h. The same amount of the ChCN gel was added to all suspensions to prepare nanocomposites with a 3 wt % of ChNC, and the TEC content was varied in each suspension such that the final amount of plasticizer in the nanocomposites would be 2.5, 5.0, and 7.5 wt %. A suspension without a plasticizer was prepared for the extrusion of the unplasticized nanocomposite. Each suspension was ultrasonicated UP400S, Hielscher (Teltow, Germany) for 2 min in an ice bath prior to the extrusion and then pumped into the extruder. Mixtures of water, ethanol and TEC with the same proportions were prepared for To feed the nanocrystals in liquid form, suspensions containing ChNC in water, TEC plasticizer and ethanol were prepared as follows: ChNC gel in water (19.5 wt %) was pre-dispersed in ethanol at a ratio of 1:5 water to ethanol for 2 h using magnetic stirring, and then mixed with TEC for 2 h. The same amount of the ChCN gel was added to all suspensions to prepare nanocomposites with a 3 wt % of ChNC, and the TEC content was varied in each suspension such that the final amount of plasticizer in Polymers 2017, 9,406 4 of 16 the nanocomposites would be 2.5, 5.0, and 7.5 wt %. A suspension without a plasticizer was prepared for the extrusion of the unplasticized nanocomposite. Each suspension was ultrasonicated UP400S, Hielscher (Teltow, Germany) for 2 min in an ice bath prior to the extrusion and then pumped into the extruder. Mixtures of water, ethanol and TEC with the same proportions were prepared for the extrusion of plasticized PLA materials (control samples), as well as a mixture of only water and ethanol for the extrusion of PLA (control sample for unplasticized composite).
The prepared nanocomposites are coded as PLA-TEC (the number indicates the amount of plasticizer)-ChNC, the unplasticized composite is named as PLA-ChNC, and PLA always makes reference to extruded PLA under the presence of water and ethanol, and it will be indicated otherwise.
Extrusion of Nanocomposites
PLA, plasticized PLA materials (PLA-TEC), unplasticized nanocomposite (PLA-ChNC) and plasticized nanocomposites (PLA-TEC-ChNC) were prepared using a co-rotating twin-screw extruder ZSK-18 MEGALab, Coperion W&P (Stuttgart, Germany) with a liquid-assisted feeding of suspensions with a slight modification of the process described by Herrera and co-workers [9]. A K-tron gravimetric feeder (Niederlenz, Switzerland) was used to feed PLA, and a high-pressure syringe pump 500D, Teledyne Isco (Lincoln, NE, USA) was used for the liquid feeding of suspensions with ChNC and solutions without ChNC. A schematic representation of the process with the parameters and the screw configuration used are shown in Figure 2. The total throughput of the process was 2 kg/h, the screw speed was set to 300 rpm, and the temperature profile was ranging from 185 to 200 • C. The PLA pellets and suspensions were fed at the main feeding zone with a specific feeding rate for each particular material according to the final composition, as shown in Table 1. The prepared nanocomposites are coded as PLA-TEC (the number indicates the amount of plasticizer)-ChNC, the unplasticized composite is named as PLA-ChNC, and PLA always makes reference to extruded PLA under the presence of water and ethanol, and it will be indicated otherwise.
Extrusion of Nanocomposites
PLA, plasticized PLA materials (PLA-TEC), unplasticized nanocomposite (PLA-ChNC) and plasticized nanocomposites (PLA-TEC-ChNC) were prepared using a co-rotating twin-screw extruder ZSK-18 MEGALab, Coperion W&P (Stuttgart, Germany) with a liquid-assisted feeding of suspensions with a slight modification of the process described by Herrera and co-workers [9]. A Ktron gravimetric feeder (Niederlenz, Switzerland) was used to feed PLA, and a high-pressure syringe pump 500D, Teledyne Isco (Lincoln, NE, USA) was used for the liquid feeding of suspensions with ChNC and solutions without ChNC. A schematic representation of the process with the parameters and the screw configuration used are shown in Figure 2. The total throughput of the process was 2 kg/h, the screw speed was set to 300 rpm, and the temperature profile was ranging from 185 to 200 °C. The PLA pellets and suspensions were fed at the main feeding zone with a specific feeding rate for each particular material according to the final composition, as shown in Table 1. Two atmospheric venting and vacuum venting along the extruder were used to remove water and ethanol, as well as the trapped air. The extruded materials were cooled down in a water bath and then pelletized and dried at 55 °C overnight. The pelletized materials were compression molded using a hot press LPC-300 Fontijne Grotnes (Vlaardingen, Netherlands) to prepare films of approximately 200 µm thickness for further characterization. The pellets were placed inside metal plates covered with Mylar ® films Lohmann Technologies Ltd (Milton Keynes, UK) and compression molded at 190 °C for 210 s at contact pressure and then for 30 s at 4 MPa. The films were immediately removed from the metal plates and air-cooled to room temperature (~2-5 min) to avoid crystallization. Two atmospheric venting and vacuum venting along the extruder were used to remove water and ethanol, as well as the trapped air. The extruded materials were cooled down in a water bath and then pelletized and dried at 55 • C overnight. The pelletized materials were compression molded using a hot press LPC-300 Fontijne Grotnes (Vlaardingen, Netherlands) to prepare films of approximately 200 µm thickness for further characterization. The pellets were placed inside metal plates covered with Mylar ® films Lohmann Technologies Ltd (Milton Keynes, UK) and compression molded at 190 • C for 210 s at contact pressure and then for 30 s at 4 MPa. The films were immediately removed from the metal plates and air-cooled to room temperature (~2-5 min) to avoid crystallization.
Weight
The effect of water, ethanol, TEC plasticizer and ChNC on the molecular weight of PLA was evaluated via gel permeation chromatography (GPC) using an Ultimate 3000 HPLC system (Thermo Scientific, Germering, Germany). The columns use are as follows: four Phenogel GPC columns, from Phenomenex, with a 5 µm particle size and 1E5, 1E3, 100, and 50 Å porosities, respectively. Tetrahydrofuran at a flow rate of 1 mL/min was chosen as the mobile phase, and mono-disperse polystyrene standards were used for the universal calibration.
Melt Flow
The melt flow index of the prepared materials was measured using a melt indexer MI-1 Göttfert (Buchen, Germany). The measurements of the pelletized compounds were performed at least three times at 190 • C with a 2.16 kg load, and the average value in grams per 10 min is reported.
Transparency
Light transmittance of the materials was measured using a Perkin Elmer UV/VIS Spectrometer Lambda 2S (Überlingen, Germany). The scan was carried out in duplicated from 200 to 800 nm with a scan speed of 240 nm/min.
Dispersion and Morphology
The overview of the dispersion and distribution of ChNC in liquid feeding suspensions, as well as nanocomposite films were studied using a Nikon Eclipse LV100NPOL polarizing optical microscope (Shanghai, China). In the case of the nanocomposite films, cryogenic fracture surfaces were also analyzed using a FEI Magellan 400 XHR-SEM (Hillsboro, OR, USA). A thin layer (~10 nm) of tungsten was sputter-coated on the surfaces to avoid charging.
Chemical Characterization
Fourier transform infrared spectroscopy (FT-IR) studies were performed to determine the interaction between the PLA matrix and chitin nanocrystals and the effect of further addition of TEC. The samples were ground and mixed with KBr to prepare pellets. The spectra were collected
Thermal Properties and Crystallinity
The thermal properties of materials were measured using a differential scanning calorimeter DSC 821e, Mettler Toledo (Schwerzenbach, Switzerland). Approximately 3 mg of the material was heated in a semi-hermetic pan from −20 to 200 • C. The tests were performed with a heating rate of 10 • C/min under nitrogen atmosphere. The degree of crystallinity (Xc) of the films was calculated following the equation [28]: where ∆Hm is the enthalpy of melting (pre-melt crystallization was subtracted from the melting enthalpy), ∆Hcc is the enthalpy of cold crystallization, ∆Hm 0 is the enthalpy of melting for a 100% crystalline PLA sample, which is assumed to be 93 J/g [29], and w is the weight fraction of PLA in the sample.
Thermo-Mechanical Properties
The thermo-mechanical properties of prepared materials were determined using a TA Instruments Q800 DMA (New Castle, DE, USA) on the 5 mm × 30 mm specimens. The experiments were performed in tensile mode from 25 to 100 • C with a heating rate of 1 • C/min and a constant frequency of 1 Hz. The testing was performed in duplicates.
Mechanical Testing
The tensile properties of prepared materials were measured using a Shimadzu AG-X universal tensile testing machine (Kyoto, Japan) with a 1 kN load cell. The 5 mm × 80 mm specimens were cut using a rectangular press mold and then conditioned for 24 h at room conditions (25 ± 2 • C and 25% ± 2% of relative humidity). The gauge length was 20 mm, and the crosshead speed was 2 mm/min. The values for stress and elongation at break were directly obtained from the testing results, and modulus of each sample and the work of fracture were calculated from the stress-strain curves. Moreover, the properties of extruded PLA without water and ethanol were also measured and reported to analyze the effect of water and ethanol on the mechanical properties of neat PLA. The average value of five tests was reported. One-way analysis of variance (ANOVA) followed by the Tukey-HSD multiple comparison tests with a 5% significance level was used to analyze the results.
Suspensions for Liquid Feeding
Prior to the extrusion, the dispersion of the ChNC in the prepared suspensions was studied using an optical microscope and compared to the aqueous ChNC suspension (Figure 1a) to see the effect of ethanol and TEC. Figure 3 shows that ChNC dispersed in water, ethanol, and TEC at different concentrations are similar compared with the aqueous ChNC dispersion shown in Figure 1a.
This confirms that the addition of ethanol and TEC did not significantly affect the dispersion of ChNC in the suspensions. All ChNC suspensions showed good stability before the extrusion. However, it is worth noting that the viscosity of suspensions was affected by the addition of the plasticizer. Suspension with the highest TEC content (7.5 wt %) resulted in the highest viscosity. The possible reason can be the better dispersion of ChNC, which was not evident at the optical microscope scale, or more interactions between TEC and ChNC.
Suspensions for Liquid Feeding
Prior to the extrusion, the dispersion of the ChNC in the prepared suspensions was studied using an optical microscope and compared to the aqueous ChNC suspension (Figure 1a) to see the effect of ethanol and TEC. Figure 3 shows that ChNC dispersed in water, ethanol, and TEC at different concentrations are similar compared with the aqueous ChNC dispersion shown in Figure 1a. This confirms that the addition of ethanol and TEC did not significantly affect the dispersion of ChNC in the suspensions. All ChNC suspensions showed good stability before the extrusion.
Molecular Weight
The influence of water, ethanol, TEC, and ChNC, as well as of all of them together on the molecular weight of PLA was studied using GPC, and the average molecular weights (M w ) are shown in Table 2. When comparing M w of unprocessed PLA (as received) with M w of extruded PLA with and without water and ethanol, it is observed that the extrusion process affects the molecular weight of PLA more than the feeding of water and ethanol. This can be attributed to a decrease in local shear due to the plasticizer effect of water [6]. M w of the extruded PLA with water and ethanol was similar to that of the unprocessed PLA pellets (M w~1 99 kg/mol), showing that water and ethanol did not degrade PLA even if it is known that PLA is susceptible to hydrolytic degradation. When PLA was extruded with water, ethanol and TEC, the presence of TEC increased the molecular mobility of PLA, which may increase the water diffusion rate into the PLA molecules and thus, enhances the hydrolytic degradation [30], which results in a PLA-TEC5.0 material with a somewhat lower molecular weight (M w~1 96 kg/mol) but still less degraded that the extruded PLA. When comparing the molecular weight in Table 2 of unplasticized composite (PLA-ChNC) and plasticized nanocomposite (PLA-TEC5.0-ChNC), the values show that the reduction in molecular weight of PLA due to the addition of ChNC in a water ethanol suspension (from 199 to 181 kg/mol) was more than that due to the addition of ChNC in water, ethanol and TEC suspension (from 199 to 193 kg/mol). It is possible that, in general, the presence of additives, such as ChNC, may increase the thermo-mechanical degradation of PLA due to higher shear forces, as has been reported by others [6,31]. This effect may be smaller in a presence of a plasticizer. It is concluded that in this study, the polymer degradation due to chitin was hindered by the use of plasticizer and that the addition of chitin promoted the polymer degradation more than the addition of water and ethanol. Similarly, Stoclet et al. [6] reported that the processing of PLA/halloysite nanocomposites via conventional extrusion (dry method) resulted in higher degradation of PLA than the water assisted extrusion process, where the injection of water decreased the effect of the halloysite on the PLA molecular weight. In contrast, Rizvi et al. [32] reported hydrolytic degradation of PLA when it was processed with chitin in water suspension in a micro-compounder. However, the difference between that study compared with the present one is the long processing time, and the micro-compounder does not effectively remove the water and/or solvents, and the authors did not use a plasticizer. The processing time in Rizvi's study was 6 min, while the resident time in this study is less than 1 min, which may not be enough time to promote the hydrolysis of PLA. It should also be noted that the extrusion process involving liquids works better as a continuous process than as a batch process and with extruders with an appropriate degassing system.
Melt Flow
The measurement of the melt flow index (MFI) of the prepared materials gives indirect information about the dispersion and interaction between the polymer and nanocrystals since the flow behavior of polymer nanocomposites is influenced by the interfacial characteristics and the nanoscale structure [33]. The effect of the addition of varied amounts of TEC on the flow properties of PLA and PLA-ChNC was evaluated and the MFI values are listed in Table 2. The results show that the plasticized PLA exhibited higher MFI than PLA, as expected. The MFI of PLA was 3.1 g/10 min, and PLA-TEC7.5 showed the highest value of 4.9 g/10 min due to the highest amount of the plasticizer. The addition of TEC increases the polymer free volume and the polymer chains' mobility and, thus, decreases the viscosity and increases the MFI which is a typical effect of the plasticizer [34].
Opposite to the effect of the plasticizer, the addition of nanocrystals restricts the polymer chains' mobility and, thus, the MFI of the matrix decreases. It is seen from Table 2 that all nanocomposites, except for the PLA-ChNC, exhibited lower MFI than their respective materials without ChNC. The addition of ChNC to the PLA-TEC7.5 material decreased its MFI from 4.9 to 3.7 g/10 min, showing the largest effect. This result is an indication that the dispersion and interaction of the nanocrystals in the PLA-TEC7.5-ChNC nanocomposite were better than the nanocomposites with lower TEC contents. On the other hand, the PLA-ChNC composite showed a higher MFI than PLA, which indicates that the interaction of nanocrystals with the matrix was poor. In this case, the higher MFI can also be due to the lower molecular weight of the PLA-ChNC composite.
Transparency and Visual Appearance
The visual appearance of the extruded PLA with water and ethanol and its nanocomposite films as well as the optical microscopy images of the film surfaces are shown in Figure 4 (to the left). It is clear that the unplasticized composite shows visible agglomeration, which is not observed in plasticized nanocomposites. However, the optical microscopy images also show micro-sized agglomerations for the PLA-TEC2.5-ChNC nanocomposites but not for the nanocomposites with 5.0 wt % and 7.5 wt % TEC. The optical transparency of materials was measured because it can give an indication of the dispersion and distribution of ChNC in PLA. It is known that if the size of particles is smaller than the wavelength of visible light, the transparency of the matrix is affected less [35]. It was noticed during the test that the addition of TEC did not affect the PLA transparency, and the light transmittance spectra were overlapping with that of PLA. Therefore, those UV/VIS spectra are not displayed in Figure 4 (to the right), but the spectra of the extruded PLA with water and ethanol and its nanocomposites are shown. It is observed that the light transmittance of PLA decreased with the addition of chitin nanocrystals. At 550 nm of visible light, the light transmittance of PLA was 90%, whereas it was only 52%, 44%, 24%, and 30% for the PLA-TEC7.5-ChNC, PLA-TEC5.0-ChNC, PLA-TEC2.5-ChNC, and PLA-ChNC materials, respectively. These results show that the nanocomposites with the highest TEC content (7.5 wt %) had the best transparency of the nanocomposites and, thus, expected to have the best dispersion of ChNC which is in accordance with the MFI results. Figure 5 displays the cryogenic fracture surface of the unplasticized PLA-ChNC composite and the nanocomposites with a different TEC content. These micrographs clearly show that the dispersion and distribution of ChNC gradually improved with the plasticizer content as was also seen in the transparency and MFI studies. Figure 5 displays the cryogenic fracture surface of the unplasticized PLA-ChNC composite and the nanocomposites with a different TEC content. These micrographs clearly show that the dispersion and distribution of ChNC gradually improved with the plasticizer content as was also seen in the transparency and MFI studies. The micrograph at higher magnification for the PLA-ChNC composite (Figure 6a) shows poor dispersion and distribution and large agglomeration (~10 µm) of ChNC, whereas the PLA-TEC7.5-ChNC nanocomposite, with the highest TEC content (Figure 6b), exhibits more even, well-dispersed and distributed chitin nanocrystals with few agglomerations which are much smaller than those in Figure 6a. These results are in agreement with our previous studies, where the addition of poly(ethylene glycol) (PEG) enhanced the dispersion of cellulose nanocrystals [23] and with the results reported by Wang et al. [24] and Qu et al. [25] who have reported that acetyl tributyl citrate (ATBC) and PEG enhanced the dispersion of carbon black and cellulose nanofibers in PLA, respectively.
Morphology of Nanocomposites and ChNC Dispersion
Polymers 2017, 9,406 10 of 16 The micrograph at higher magnification for the PLA-ChNC composite (Figure 6a) shows poor dispersion and distribution and large agglomeration (~10 µm) of ChNC, whereas the PLA-TEC7.5-ChNC nanocomposite, with the highest TEC content (Figure 6b), exhibits more even, well-dispersed and distributed chitin nanocrystals with few agglomerations which are much smaller than those in Figure 6a. These results are in agreement with our previous studies, where the addition of poly(ethylene glycol) (PEG) enhanced the dispersion of cellulose nanocrystals [23] and with the results reported by Wang et al. [24] and Qu et al. [25] who have reported that acetyl tributyl citrate (ATBC) and PEG enhanced the dispersion of carbon black and cellulose nanofibers in PLA, respectively.
Chemical Charaterization
The effect of addition of TEC in the interaction between PLA and ChNC was analyzed using FTIR. Figure 7A shows infrared spectra of extruded PLA with water and ethanol, PLA-TEC7.5, PLA-ChNC, and PLA-TEC7.5-ChNC. The characteristic peaks of PLA were observed in all analyzed materials. The peak at 1760 is attributed to the carbonyl (-C=O) stretching of PLA. The peaks between 2850 and 3000 cm −1 belong to the C-H asymmetric and symmetric stretching vibration [36]. The peak of the -C-O-bond
Chemical Charaterization
The effect of addition of TEC in the interaction between PLA and ChNC was analyzed using FTIR. Figure 7A shows infrared spectra of extruded PLA with water and ethanol, PLA-TEC7.5, PLA-ChNC, and PLA-TEC7.5-ChNC. The micrograph at higher magnification for the PLA-ChNC composite (Figure 6a) shows poor dispersion and distribution and large agglomeration (~10 µm) of ChNC, whereas the PLA-TEC7.5-ChNC nanocomposite, with the highest TEC content (Figure 6b), exhibits more even, well-dispersed and distributed chitin nanocrystals with few agglomerations which are much smaller than those in Figure 6a. These results are in agreement with our previous studies, where the addition of poly(ethylene glycol) (PEG) enhanced the dispersion of cellulose nanocrystals [23] and with the results reported by Wang et al. [24] and Qu et al. [25] who have reported that acetyl tributyl citrate (ATBC) and PEG enhanced the dispersion of carbon black and cellulose nanofibers in PLA, respectively.
Chemical Charaterization
The effect of addition of TEC in the interaction between PLA and ChNC was analyzed using FTIR. Figure 7A shows infrared spectra of extruded PLA with water and ethanol, PLA-TEC7.5, PLA-ChNC, and PLA-TEC7.5-ChNC. The characteristic peaks of PLA were observed in all analyzed materials. The peak at 1760 is attributed to the carbonyl (-C=O) stretching of PLA. The peaks between 2850 and 3000 cm −1 belong to the C-H asymmetric and symmetric stretching vibration [36]. The peak of the -C-O-bond The characteristic peaks of PLA were observed in all analyzed materials. The peak at 1760 is attributed to the carbonyl (-C=O) stretching of PLA. The peaks between 2850 and 3000 cm −1 belong to the C-H asymmetric and symmetric stretching vibration [36]. The peak of the -C-O-bond stretching in -CH-O-and in -O-C=O of PLA appear at 1182 and 1081 cm −1 , respectively [24]. The peaks at 1621 and 1656 cm −1 and at 1556 cm −1 correspond to the amide I and II [37], respectively. The peaks at 3110 and 3271 cm −1 are ascribed to the N-H stretching [38]. The above mentioned data confirmed the presence of chitin in the composites. From the PLA spectra, a peak at approximately 3510 cm −1 can be seen, which is related to the O-H bond stretching deformation. This indicates the presence of hydroxyl groups in pure PLA [39]. This peak did not change with the addition of TEC. However, this peak was broader and slightly shifted to a lower wavenumber (3506 cm −1 ) when ChNC were added to PLA, and it further broadened and shifted to 3494 cm −1 when ChNC was added together with TEC, as can be seen in Figure 7B. These results indicate the H-bonding interactions between PLA and ChNC. Rosdi and Zakaria [40] also found that the peak at 3505 cm −1 was shifted to a lower wavenumber when chitin was added to the PLA matrix, possibly due to some interaction between the hydroxyl groups of PLA and the hydroxyl groups of chitin. The results also indicate that the H-bonding interactions between PLA and ChNC were enhanced in the presence of TEC. It is believed that TEC may help the intermolecular interaction between PLA and chitin and enhances their interfacial interaction, which is in agreement with the SEM images. Similar results have been reported by Qu et al. [25], who showed that PEG improved the intermolecular interaction between PLA, PEG, and cellulose. No new peaks were detected when TEC or ChNC were added to PLA or when TEC was added to the PLA-ChNC nanocomposite.
Thermal Properties and Crystallinity
DSC thermograms and glass transition (T g ), cold crystallization (T cc ) and melt (T m ) temperatures of extruded PLA with water and ethanol, plasticized PLA materials and nanocomposites are shown in Figure 8. All presented T g , T cc , and T m for the materials indicate their semi-crystalline nature. The T g , T cc , and T m of PLA are 60, 121 and 147 • C, respectively, and these values decreased with 7.5 wt % TEC content to 47, 113, and 142 • C. The decrease is because of the plasticizing effect [41]. T g , T cc , and T m of the plasticized PLA materials remained almost the same with the addition of ChNC, and only a slight increase of T g from 47 to 49 • C was observed for the material with a 7.5 wt % of TEC. This slight improvement of the glass transition temperature may be due to a better interaction between PLA, TEC and ChNC in this nanocomposite, which hinders the polymer molecular mobility. Figure 8 shows the degree of crystallinity (X c ) where the addition of TEC and ChNC did not show any significant effect. [24]. The peaks at 1621 and 1656 cm −1 and at 1556 cm −1 correspond to the amide I and II [37], respectively. The peaks at 3110 and 3271 cm −1 are ascribed to the N-H stretching [38]. The above mentioned data confirmed the presence of chitin in the composites. From the PLA spectra, a peak at approximately 3510 cm −1 can be seen, which is related to the O-H bond stretching deformation. This indicates the presence of hydroxyl groups in pure PLA [39]. This peak did not change with the addition of TEC. However, this peak was broader and slightly shifted to a lower wavenumber (3506 cm −1 ) when ChNC were added to PLA, and it further broadened and shifted to 3494 cm −1 when ChNC was added together with TEC, as can be seen in Figure 7B. These results indicate the H-bonding interactions between PLA and ChNC. Rosdi and Zakaria [40] also found that the peak at 3505 cm −1 was shifted to a lower wavenumber when chitin was added to the PLA matrix, possibly due to some interaction between the hydroxyl groups of PLA and the hydroxyl groups of chitin. The results also indicate that the H-bonding interactions between PLA and ChNC were enhanced in the presence of TEC. It is believed that TEC may help the intermolecular interaction between PLA and chitin and enhances their interfacial interaction, which is in agreement with the SEM images. Similar results have been reported by Qu et al. [25], who showed that PEG improved the intermolecular interaction between PLA, PEG, and cellulose. No new peaks were detected when TEC or ChNC were added to PLA or when TEC was added to the PLA-ChNC nanocomposite.
Thermal Properties and Crystallinity
DSC thermograms and glass transition (Tg), cold crystallization (Tcc) and melt (Tm) temperatures of extruded PLA with water and ethanol, plasticized PLA materials and nanocomposites are shown in Figure 8. All presented Tg, Tcc, and Tm for the materials indicate their semi-crystalline nature. The Tg, Tcc, and Tm of PLA are 60, 121 and 147 °C, respectively, and these values decreased with 7.5 wt % TEC content to 47, 113, and 142 °C. The decrease is because of the plasticizing effect [41]. Tg, Tcc, and Tm of the plasticized PLA materials remained almost the same with the addition of ChNC, and only a slight increase of Tg from 47 to 49 °C was observed for the material with a 7.5 wt % of TEC. This slight improvement of the glass transition temperature may be due to a better interaction between PLA, TEC and ChNC in this nanocomposite, which hinders the polymer molecular mobility. Figure 8 shows the degree of crystallinity (Xc) where the addition of TEC and ChNC did not show any significant effect. Figure 9 shows storage modulus and tan delta (δ) as a function of temperature for PLA nanocomposites and their counter-parts without nanocrystals with different TEC contents as well as those for the unplasticized materials. In Figure 9a, PLA and PLA-ChNC are compared. It is observed that the addition of ChNC did not affect the storage modulus or tan delta peak position. Respectively, in Figure 9b-d, the plasticized nanocomposites with 2.5, 5.0 and 7.5 wt % TEC content are compared with their respective counterpart without ChNC. Similar to the DSC results, only the PLA-TEC7.5-ChNC nanocomposite showed a slight increase in the tan δ position. In addition, a decrease in the intensity of the peak was also observed. A positive shift in tan δ commonly indicates restricted molecule movement, and a decreased intensity of tan δ shows that lower number of polymer chains participates in the transition, which is expected because of the well-dispersed and distributed nanocrystals in the PLA-TEC7.5-ChNC nanocomposite. This better ChNC dispersion is also reflected in an improved storage modulus. These results indicate that the nanocomposites with the highest TEC content (7.5 wt %) shows better dispersed and distributed nanocrystals and, thus, slightly enhanced thermo-mechanical properties.
Thermo-Mechanical Properties
When comparing PLA with the plasticized PLA materials, it is observed that the increased TEC content in PLA decreases the tan delta peak position towards lower temperature from 62 • C to 53 • C with the addition of 7.5 wt % TEC, which confirms the plasticizer effect of TEC. Moreover, it is seen that the increased TEC content together with ChNC enhances cold crystallization, and higher TEC content is more effective that the lower TEC content. Figure 9 shows storage modulus and tan delta (δ) as a function of temperature for PLA nanocomposites and their counter-parts without nanocrystals with different TEC contents as well as those for the unplasticized materials. In Figure 9a, PLA and PLA-ChNC are compared. It is observed that the addition of ChNC did not affect the storage modulus or tan delta peak position. Respectively, in Figure 9b-d, the plasticized nanocomposites with 2.5, 5.0 and 7.5 wt % TEC content are compared with their respective counterpart without ChNC. Similar to the DSC results, only the PLA-TEC7.5-ChNC nanocomposite showed a slight increase in the tan δ position. In addition, a decrease in the intensity of the peak was also observed. A positive shift in tan δ commonly indicates restricted molecule movement, and a decreased intensity of tan δ shows that lower number of polymer chains participates in the transition, which is expected because of the well-dispersed and distributed nanocrystals in the PLA-TEC7.5-ChNC nanocomposite. This better ChNC dispersion is also reflected in an improved storage modulus. These results indicate that the nanocomposites with the highest TEC content (7.5 wt %) shows better dispersed and distributed nanocrystals and, thus, slightly enhanced thermo-mechanical properties.
Thermo-Mechanical Properties
When comparing PLA with the plasticized PLA materials, it is observed that the increased TEC content in PLA decreases the tan delta peak position towards lower temperature from 62 °C to 53 °C with the addition of 7.5 wt % TEC, which confirms the plasticizer effect of TEC. Moreover, it is seen that the increased TEC content together with ChNC enhances cold crystallization, and higher TEC content is more effective that the lower TEC content.
Mechanical Properties
The mechanical properties of PLA nanocomposites and their counter-part materials without nanocrystals are reported in Table 3. In addition, the mechanical properties of the extruded PLA without water and ethanol are reported, and if comparing these properties with those from the extruded PLA in the presence of water and ethanol, no significant effect on the mechanical properties of PLA was noticed. It is possible to see in Table 3 that the addition of TEC decreased the tensile strength of PLA and did not increase the elongation at break or work of fracture, as expected. These results indicate that a higher amount of plasticizer is required to obtain a noticeable effect on the toughness. Labrecque et al. [4] reported that all citrate esters are effective in improving the elongation at break at higher concentrations (≥20%), but do not show any significant increase at lower concentration. However, both DSC and DMA results showed that the plasticizer contents used in this study were enough to plasticize PLA.
The nanocomposites with TEC ≥ 5 wt % showed higher tensile strength and ultimate strength than the respective plasticized PLA without ChNC. Moreover, these materials showed slightly improved Young's modulus based on the ANOVA test. The elongation at break and work of fracture were decreased in all cases except for the nanocomposite with 7.5 wt % of TEC. This is due to less agglomeration and better dispersion of ChNC in the PLA-TEC7.5-ChNC nanocomposite.
The decrease observed in the tensile strength of PLA-ChNC can be attributed to the hydrolysis of PLA during the processing [32], which was observed in the GPC results, and because of the presence of micro agglomerations with a poor interface, as observed in the SEM studies. These results are similar to the results reported by Hishammuddin and Zakaria [36] where the incorporation by mixing, and then casting of commercial chitin into PLA, resulted in reduced tensile strength and elongation. Salaberria et al. [20] also reported a slight decrease of mechanical properties of PLA when functionalized (acylation) chitin nanocrystals were introduced into PLA via extrusion/compression. Rizvi et al. [32] found that the stiffness of PLA increased with increasing chitin content while the strength was found to decrease. However, in this study, it was found that the addition of ChNC into PLA together with TEC showed enhanced mechanical properties when ≥5.0 wt % of plasticizer was used. This is explained because the dispersion and distribution of ChNC and their interaction with the PLA matrix was improved with increasing plasticizer content as it has been shown in the previous sections of this paper. Similarly, Li et al. [18] reported that PEG worked as a compatibilizer for chitin nanofibers and PLA when it was used as pretreatment for the nanofibers before the compounding process.
Conclusions
This study was carried out to determine a plasticizer content that has the minimum plasticizer effect on PLA, but still enhances the dispersion and distribution of ChNC in the PLA matrix and, thus, obtain a nanocomposite with improved properties. Therefore, PLA composites with 3 wt % of chitin nanocrystals and triethyl citrate with varied contents of 2.5, 5.0, and 7.5 wt % were produced via liquid-assisted extrusion.
The gel permeation chromatography confirmed that the addition of water and ethanol during the extrusion process did not significantly affect the molecular weight of PLA.
The liquid feeding of ChNC together with TEC plasticizer resulted in PLA-TEC-ChNC nanocomposites with improved dispersion and distribution of ChNC. The nanocomposite with the highest plasticizer content (PLA-TEC7.5-ChNC) showed enhanced mechanical, thermal, and thermo-mechanical properties, compared with its counter-part without ChNC (PLA-TEC7.5). The improved interaction between PLA and ChNC in the presence of TEC is attributed to hydrogen bonding, which was supported by the FTIR study.
It will be interesting to study the effect of a higher plasticizer content to determine the synergic effect of the plasticizer as a dispersing and toughening aid with a minimum impact on the properties of PLA. The presented facile process of nanocomposites using liquid-assisted extrusion with a plasticizer, which facilities nanomaterial dispersion, can be a step forward for a large-scale production of bionanocomposites. | 9,730 | 2017-08-31T00:00:00.000 | [
"Materials Science"
] |
Spatiotemporal Dynamics of Activation in Motor and Language Areas Suggest a Compensatory Role of the Motor Cortex in Second Language Processing
Abstract The involvement of the motor cortex in language understanding has been intensively discussed in the framework of embodied cognition. Although some studies have provided evidence for the involvement of the motor cortex in different receptive language tasks, the role that it plays in language perception and understanding is still unclear. In the present study, we explored the degree of involvement of language and motor areas in a visually presented sentence comprehension task, modulated by language proficiency (L1: native language, L2: second language) and linguistic abstractness (literal, metaphorical, and abstract). Magnetoencephalography data were recorded from 26 late Chinese learners of English. A cluster-based permutation F test was performed on the amplitude of the source waveform for each motor and language region of interest (ROI). Results showed a significant effect of language proficiency in both language and motor ROIs, manifested as overall greater involvement of language ROIs (short insular gyri and planum polare of the superior temporal gyrus) in the L1 than the L2 during 300–500 ms, and overall greater involvement of motor ROI (central sulcus) in the L2 than the L1 during 600–800 ms. We interpreted the over-recruitment of the motor area in the L2 as a higher demand for cognitive resources to compensate for the inadequate engagement of the language network. In general, our results indicate a compensatory role of the motor cortex in L2 understanding.
INTRODUCTION
The engagement of the motor cortex in language processing has been intensively discussed within the framework of embodied cognition. Based on the embodied view, language processing, specifically semantic processing (i.e., processing of meaning), involves not only classic language-related regions but also the motor system to simulate the perceptual meaning conveyed by words (Barsalou et al., 2008;Fischer & Zwaan, 2008;Gallese & Lakoff, 2005; Pulvermüller & Fadiga, 2010; Zwaan, 2014). The embodied view of semantic processing has been supported by neuroimaging and electrophysiological studies during the past decade, showing neural activations and oscillations in the motor cortex during meaning understanding (Fargier et al., 2012;Fernandino et al., 2013;Klepp et al., 2014Klepp et al., , 2015Mollo et al., 2016;Moreno et al., 2013). In addition, the action-sentence compatibility effect (Glenberg & Kaschak, 2002) has been taken as evidence for the involvement of the motor system in action-related semantic processing. Faster response was found when the direction of movement is congruent with the direction conveyed by the sentence (Glenberg et al., 2008;Kaschak & Borreggine, 2008;Santana & de Vega, 2011;Zwaan & Taylor, 2006). However, some recent studies failed to replicate any such motor compatibility effect (Greco, 2021;Morey et al., 2022;Papesh, 2015).
Clinical studies have provided more direct evidence for the involvement of the motor cortex in semantic processing by investigating patients with motor impairment (e.g., Parkinson's disease, or PD; Buccino et al., 2018;Cardona et al. 2014;Desai et al., 2015;Fernandino et al., 2013;Kargieman et al., 2014;Kemmerer et al., 2012;Monaco et al., 2019). These studies showed that the motor-impaired participants had a selective difficulty in comprehending the action-related words (e.g., kick), manifested as a lower accuracy rate, longer response time, and an absence or attenuation of modulation of motor responses in patients with PD, compared with the healthy control group. The revealed association of impaired motor skills and deficits in understanding action-related meaning would support the embodied account of semantic processing. However, some other lesion studies failed to find the causal effect of motor cortex impairment on the processing of action-related meaning (Maieron et al., 2013;Papeo et al., 2010). These studies showing the dissociation of motor impairment and motoric semantic processing question the necessity of the motor cortex in language processing.
The emerging controversial findings have stirred up critiques and reflections on the embodied assumptions of language processing. As has been pointed out, the rapidly growing popularity of the embodied account is likely to result in the ignorance of other potential interpretations (see, e.g., Chatterjee, 2010;Hauk & Tschentscher, 2013;Mahon, 2015;Mahon & Hickok, 2016). Studies with the embodied hypothetical stance tended to interpret the data within the theoretical framework of embodiment with a prior hypothetical bias. For example, results showing motor activation in language tasks have been monotonically interpreted as the result of mental simulation of motor-related meaning, and therefore taken as an additional piece of evidence to confirm the embodied assumption. However, activation of the motor cortex may not necessarily be due to the mental simulation of motoric meaning. It can be ubiquitous in the language processing in general (Meteyard et al., 2012;Tian et al., 2020) or related to other aspects beyond strict linguistic processing (Maieron et al., 2013).
Functional and Epiphenomenal Role
The emerging controversial findings impelled researchers to re-examine the role of the motor cortex in language processing and test whether activations in the motor cortex reflect the retrieval of lexical-semantic information (functional role) or arise as a byproduct of postsemantic motor imagery (epiphenomenal role). Some studies attempted to disentangle the functional and epiphenomenal role by scrutinizing the temporal information of motor activations (García et al., 2019;van Elk et al., 2010). In van Elk et al.'s (2010) study, an early activation of the motor area indexed by the mu rhythm event-related desynchronization (ERD) was found preceding semantic processing (around 400 ms after onset) and sustaining in parallel with semantic processing (around 700 ms after onset). Based on the early latency of motor activation, it was concluded that motor activation primarily reflected lexical-semantic retrieval and integration rather than post-lexical motor imagery.
Compared with neurotypical studies, lesion (pathological and virtual transient dysfunctions caused by repetitive transcranial magnetic stimulation [rTMS]) studies offered a more direct pathway for scrutinizing the causal role of the motor cortex, since researchers were able to detect the causality by manipulating stimulations over the motor cortex (Bocanegra et al., 2017;Desai et al., 2015;Fernandino et al., 2013;Pulvermüller et al., 2005;Reilly et al., 2019;Vukovic et al., 2017). In Vukovic et al.'s (2017) study, rTMS was employed over the left motor cortex within 200 ms of word onset to examine whether the stimulation would affect the processing of hand-related action words and abstract words in the lexical decision task-which requires very shallow lexical-semantic processing-and semantic judgment task-which requires explicit access to action-related meaning processing. The stimulation impaired the comprehension of the action words but facilitated that of the abstract words, compared with the performance in the lexical decision task. The interruptive effect of stimulation on lexical-semantic processing suggested a functional role of the motor cortex in semantic processing. Consistent results were also reported among studies concerning motor disorders, where associations were found between the impairment in action performance and the impairment in action-verb processing (Bocanegra et al., 2017;Desai et al., 2015;Fernandino et al., 2013).
Conversely, some studies reported dissociations between motor impairment and action semantic deficits (Maieron et al., 2013;Papeo et al., 2010). In Maieron et al.'s (2013) study, functional magnetic resonance imaging (fMRI) was employed to examine functional connectivity between the language network and primary motor cortex (M1) in an action-verb naming task. Participants were patients whose lesions involved (or spared) the M1 and healthy controls. It was found that lesions in the M1 did not degrade the performance of the action-verb naming task compared with the healthy controls. Results of the functional connectivity further revealed a lack of task-modulated connectivity between the M1 and language network in the action-verb naming task for both lesion and healthy groups. These findings indicated an accessory rather than functional role of the motor cortex in the processing of action words.
Gradations of Motor Cortex Involvement
Instead of confirming or refuting the embodied hypothesis, some studies turned to explore the degree of motor cortex involvement, such as whether the motor cortex was differentially involved in different language settings. As highlighted by Chatterjee (2010) and Meteyard et al. (2012), the discussion of the graded nature of embodiment would shed light on the role that the motor system plays in semantic processing.
The gradation of motor cortex involvement has been mostly explored from the perspective of language proficiency (L1: native language; L2: second language) (Birba et al., 2020;De Grauwe et al., 2014;Monaco et al., 2021;Tian et al., 2020;Vukovic & Shtyrov, 2014;Zhang et al., 2020). By employing a passive reading task involving action-related words, Vukovic and Shtyrov (2014) found that the engagement of the motor cortex was greater for L1 than L2 for German-English speakers, indexed by a stronger ERD for the L1 than the L2 at around 8-12 Hz (mu rhythm). The stronger ERD for the L1 was interpreted as the result of a more integrated perception-action circuit for the L1 lexical-semantic representation. In contrast, in our earlier fMRI study (Tian et al., 2020), stronger activation of the motor cortex was found for the L2 than the L1, which was interpreted as the consequence of higher demand for cognitive resources as compensation for a less proficient language. Similarly, Monaco et al. (2021) also reported greater motor excitability for the L2 (English) than the L1 (French) in an action-related semantic judgment task, indexed by a higher motor evoked potentials for the L2 when the TMS was given 275 ms after word onset. However, the authors only claimed a different degree of motor cortex involvement between L1 and L2 semantic processing without further interpreting the implications underlying such differences. On the other hand, a similar degree of motor cortex activation has been reported (De Grauwe et al., 2014) between the L1 (Dutch native speakers) and the L2 (German advanced learners of Dutch) groups in performing a lexical decision task involving cognates and non-cognates with motor or non-motor-related meanings. The study therefore concluded that the lexical-semantic representation of the L2 was adequate to induce a similar degree of motor activation relative to the L1.
In addition to language proficiency, the gradation of motor cortex involvement has also been explored by manipulating the level of linguistic abstractness (e.g., literal/metaphorical/abstract language; Desai et al., 2013;Schaller et al., 2017;Tian et al., 2020). In Desai et al.'s (2013) study, four levels of linguistic abstractness were manipulated at sentence level, including literal action, metaphorical action, idiomatic action, and abstract verb. The blood oxygen level dependent signals of fMRI showed attenuated activation in the motor regions with the increase of linguistic abstractness (literal > metaphor > idiom > abstract). In our earlier study (Tian et al., 2020), we reported a similar decremental trend of motor activation with a hierarchically decreasing pattern of motor cortex activation from the literal to the abstract verb phrases.
The Present Study
Previous studies have advanced our understanding of the motor system in semantic processing by exploring the gradations of motor cortex involvement in different linguistic circumstances. However, the discussed studies using fMRI, electroencephalogram (EEG), or TMS lacked either temporal or spatial accuracy in describing brain activation. Combining spatial and temporal resolution is crucial for the comprehensive understanding of how (and when) the motor cortex contributes to language understanding since timing and source dynamics of brain activation needs to be extracted simultaneously from language and motor areas. Majority of previous studies only focused on the motor regions of interest (ROIs), while ignoring the simultaneous neural activities of the language regions. In the present study, we employed magnetoencephalography (MEG) with millisecond temporal resolution and sub-centimeter spatial resolution to explore the temporal activation dynamics of motor and language areas in semantic processing. Specifically, we aim to investigate whether the degree of the engagement of motor and language areas is modulated by language proficiency (native language and second language) and linguistic abstractness (literal, metaphorical, and abstract).
Participants
A total of 26 participants (8 male, 18 female) were recruited from the University of Jyväskylä, Finland. Participants were Chinese-English speakers, who started to learn English at the mean age of 9.77 (SD = 2.73) and had an average of 16.38 years' (SD = 4.67) experience in learning English. Participants had the Lextale vocabulary test (www.lextale.com; Lemhöfer & Broersma, 2012) to measure their L2 vocabulary knowledge (mean ± SD: 74.18 ± 8.35). All participants were right-handed with normal or corrected-to-normal vision. None of the participants reported having any history of neurological disorder. Participants gave informed consent prior to participation. Participants were compensated for their participation in the experiment. The study was approved by the ethics committee of the University of Jyväskylä. Two participants were excluded from data analysis due to the low accuracy rate in behavioral performance (below 75%, mean = 93.04%, SD = 6%), resulting in 24 participants in the final analysis.
Experiment Design
To examine the effect of language proficiency and linguistic abstractness on the degree of motor cortex involvement, L1 and L2 experiments were designed. Within each experiment, the factor of linguistic abstractness was manipulated with a gradual increase of abstractness from literal to metaphorical to abstract conditions. Each trial consisted of two verb phrases, with the second verb phrase either semantically congruent or incongruent with the first one. Participants were required to perform a semantic judgment task, where they needed to judge whether the second verb phrase shared the same meaning as the first phrase by pressing the response buttons.
Stimuli
A total of 180 verb phrases (60 in each condition) were used in both L1 and L2 experiment. The literal and metaphorical phrases contained an action-related verb, either hand or arm related. The abstract phrase connoted the same meaning expressed by the metaphorical one (Table 1). Phrases in the L1 experiment were semantically equivalent to those in the L2 experiment, with few exceptions in the metaphorical condition, due to the lack of Chinese equivalents of some English metaphorical expressions. The verb phrases in both L1 and L2 experiments shared the same syntactic structure: verb + object. A frequency norming test and familiarity rating test were conducted to ensure that stimuli across conditions did not differ significantly in the aspects of word frequency and word familiarity (p > 0.01). Motor-relatedness of all stimuli was evaluated on a 5-point scale (1: not related at all; 5: very related ): L1 experiment (literal: 4.60 ± 0.40; metaphorical: 2.78 ± 1.24; abstract: 2.12 ± 1.23) and the L2 experiment (literal: 4.39 ± 0.58; metaphorical: 2.64 ± 0.96; abstract: 2.11 ± 1.04). Only the first verb phrase, which is independent of task-related strategic manipulations, was used for further MEG analysis.
Experimental Procedure L1 and L2 experiments shared the same experimental procedure. As suggested by previous studies, L1 could have a stronger translation priming effect on L2 than the other way around (i.e., asymmetrical cross-language priming effects; Chen et al., 2014;Keatley et al., 1994;Smith et al., 2019). To avoid the translation priming effect, L1 experiment was presented after L2 experiment. Trials were shown in a pseudo-randomized order. As shown in Figure 1, each trial began with a 500 ms fixation at the center of the screen, followed by a 500 ms long blank interval. Afterward, the first verb phrase was presented for a duration of 1,500 ms, followed by a 1,000 ms long blank interval. The second verb phrase was then presented for 1,500 ms, followed by "?" with a duration of maximal 3,000 ms. Participants were expected to give a response after the "?" appeared. Visual stimuli were presented using Presentation software (Neurobehavioral Systems, 2022). L1 stimuli were in KaiTi font and L2 stimuli in Times New Roman font. The viewing distance from participants' eyes to the stimuli on the projection screen was one meter. The L1 stimuli subtended a horizontal visual angle of 3°5 0 , and the L2 stimuli subtended a horizontal visual angle of 4°58 0 .
MEG Data Recording
Continuous neuromagnetic signals were recorded using a 306-channel (102 magnetometers and 204 planar gradiometers) whole-head MEG system (MEGIN Oy, 2022) in a magnetically shielded room at the Centre for Interdisciplinary Brain Research, University of Jyväskylä, Finland. The head position of each subject was monitored by five head-position indicator (HPI) coils attached over the forehead and behind each ear. Electrooculography signals were recorded simultaneously by four electrodes attached around the eyes: above/below the right eye, near the corner of the left/right eye. One ground electrode was attached to the collar bone. The position of three fiducial landmarks (nasion, left/right preauricular points), as well as approximately 120 digitization points over the scalp, were acquired to establish the head coordinate frame for the coregistration between MEG data and the MRI template. MEG signals were online bandpass filtered at 0.1-330 Hz with a sampling rate of 1000 Hz.
MEG Data Preprocessing and Source Estimation
Raw MEG data were processed in MaxFilter 2.2 (Elekta, 2010) with the time-domain extension of the signal space separation method to minimize external magnetic disturbance and withinsensor artifacts and to compensate for head movement (Taulu & Kajola, 2005). Head position was estimated with a buffer length of 30 s and a correlation limit of 0.980. Head movement correction was performed using a 200 ms window with a 10 ms step. The error limit of HPI coil fit acceptance was 5 mm with a g-value of 0.98. The preprocessing was performed with Meggie, a graphic user interface built in-house based on MNE-Python software . First, visual inspection was done to identify and exclude the bad data segments in the continuous MEG data. Then, MEG data were resampled to 250 Hz. A lowpass filter of 40 Hz (transition bandwidth 0.5 Hz, filter length 10 s) was applied. Physiological artifacts related to heartbeat, blink, and saccade were removed using a semiautomatic independent component analysis method. Event-related epochs were extracted from -200 ms to 1,000 ms relative to the onset of the first verb phrase. A 200 ms interval before the onset was used as the baseline. MEG epochs with an amplitude exceeding 3,000 fT/cm for gradiometers or 4,000 femtoteslas (fT) for magnetometers were rejected from further analysis.
In the calculation of evoked responses for the literal, metaphorical, and abstract conditions, the first verb phrase was combined across the congruent and incongruent trials. Evoked responses were obtained by averaging the signals of each condition (literal, metaphorical, and abstract) in each experiment (L1 and L2 experiment).
Source estimation was performed in MNE-Python ( Version 0.17.0; Gramfort et al., 2013). The CN200 template (https://www.nitrc.org/projects/us200_cn200; Yang et al., 2020), based on T1-weighted magnetic resonance images of 250 healthy Chinese adults, was used for cortical reconstruction and volumetric segmentation. Coregistration between the CN200 template scalp and the digitized scalp was performed for each participant using a three-axis scaling mode. Shrunk covariance with cross-validation was used to estimate the noise-covariance matrix (Engemann & Gramfort, 2015).
Dynamic statistical parametric mapping (dSPM; Dale et al., 2000), which is based on minimum-norm estimate (Hämäläinen & Ilmoniemi, 1994), was used for source estimation with a source space consisting of 4,098 vertices and 4,098 loose-constraint and depthweighted current dipoles (loose = 0.2, depth = 0.8) distributed on the cortical surface in each hemisphere. Source estimation results were then noise normalized using the dSPM. The source estimates across participants were morphed to the same cortical space (CN200 template).
ROI Selection
Regions of interest were selected in a hybrid way. First, based on the timing of peak activities in the grand-averaged sensor waveform (Figure 2A), the spatial distribution of cortical sources corresponding to each peak was identified ( Figure 2B).
Next, the source distribution was compared against previous meta-analysis results of neuroimaging studies pertaining to semantic processing and motor performance/imagery. Brain regions appearing in both the data-derived cortical activation maps and previous meta-analyses were selected as ROIs for the present study. The selection was done by using MNE_analyze (https://mne.tools/0.17/manual/gui/analyze.html#the-labels-menu; Gramfort et al., 2014). First, label names corresponding to the literature-derived brain regions were selected from the parcellation list (Destrieux Atlas a2009s;Destrieux et al., 2010). Then, the partition of the selected region was overlaid with the MEG data on the inflated cortical surface. Only areas which showed prominent activation within the partitions were selected as ROIs. Both language and motor ROIs were selected left-lateralized due to only minor activation in the right hemisphere ( Figure 2B). All ROIs were parcellated based on the Destrieux Atlas a2009s (Destrieux et al., 2010; see the schematic view of ROIs in Figure 3).
The above procedure resulted in the following language ROIs: short insular gyri (partially overlapping with inferior frontal gyrus; Binder et al., 2009;Friederici et al., 2003;Rueckl et al., 2015), planum polare of the superior temporal gyrus (part of anterior temporal cortex; Grand-averaged source activation quantified as mean dynamic statistical parametric mapping (dSPM) value over time points corresponding to each peak: ±40 ms duration prior and after the relatively transient peaks (peak 1 and peak 2), and ±100 ms prior and after the relatively sustainable peaks (peak 3 and peak 4). The intensity of the color in the cortical activation map indicates the degree of dSPM value. L1: native language (Chinese); L2: second language (English); lit: literal; met: metaphorical; abs: abstract; L: left hemisphere; R: right hemisphere. Carreiras et al., 2013;Lambon Ralph et al., 2017;Patterson et al., 2007), and superior temporal sulcus (Citron et al., 2020;Rueckl et al., 2015). Motor ROIs were selected as the inferior part of precentral sulcus and central sulcus (part of primary motor cortex; Hari et al., 1998;Hétu et al., 2013;Michelon et al., 2006;Porro et al., 1996;Yousry et al., 1997). The ROI-based source time courses are shown in Figure 3.
Time Window Selection
The time window was selected based on the latency of peak activities in the grand-averaged sensor waveform (Figure 2A) and the corresponding time-resolved source activation maps ( Figure 2B). Based on the visual inspection, four peaks were identified in the sensor waveform: peak 1 at around 140 ms, peak 2 at 260 ms, peak 3 at 400 ms, and peak 4 at 700 ms.
Based on the source activation map, the first two peaks reflected activation in the visual cortex (peak 1) and more distributed areas across occipital-temporal lobes (peak 2), which were not included for statistical analysis. During peak 3 (300-500 ms, with peak activity at around 400 ms) and peak 4 (600-800 ms, with peak activity at around 700 ms), activation was found within temporal and frontal-central lobes, overlapping with our selected ROIs. Therefore, these two time windows, TW1 (300-500 ms) and TW2 (600-800 ms), were used for further statistical analysis.
Statistical Analysis
Statistical analysis was performed on the amplitude of the source waveform (represented as dSPM value) extracted from each ROI separately for TW1 and TW2. To examine the effect of language proficiency and linguistic abstractness, the nonparametric two-way repeated measures Figure 3. Grand-averaged source time courses for the literal, metaphorical, and abstract conditions in the L1 and L2 experiments in the indicated ROIs (language ROIs: short insular gyri, planum polare of the superior temporal gyrus, and superior temporal sulcus; motor ROIs: inferior part of the precentral sulcus and central sulcus). The parcellation of each ROI is shown in the inflated brain surface with a lateral view. For a better view, the planum polare of the superior temporal gyrus is also shown with a rostral view. analysis of variance with spatiotemporal clustering was performed in MNE-Python. To solve the multiple comparison problem, a cluster-based permutation test across time and space was employed (Maris & Oostenveld, 2007). The randomization times of the permutation test were 1,000, with a threshold for cluster inclusion α = 0.05 and the permutation significance α = 0.05. The p-values across language ROIs and motor ROIs were corrected for multiple comparison using Benjamini-Hochberg false discovery rate (FDR; Benjamini & Hochberg, 1995).
General Pattern and Time Course of Activation
The rough level activation timing (grand-averaged sensor waveform across the 204 gradiometers) and spatial distribution (source activation within the major activation peaks) across conditions in the L1 and L2 experiments are shown in Figure 2A and 2B. The source activation map revealed robust activation in the occipital lobe at 130 ms for both the L1 and L2, with slightly greater amplitude for the L1 than the L2. At around 260 ms, activation was found in the posterior temporal area for the L1 and in the lateral occipital-temporal area for the L2. At the peak around 400 ms, a notably greater amplitude was observed for the L1 than the L2. Activation in L1 was broadly distributed to the insular area (partially overlapping with the inferior frontal gyrus), posterior temporal area, anterior temporal area, inferior part of precentral area and central area. For the L2 (mainly the metaphorical condition), robust activation was observed mainly in the posterior temporal area. At around 700 ms, the pattern between L1 and L2 was reversed: L2 showed greater amplitude than L1 in the central and precentral areas.
Statistical Results
Cluster-based permutation F test on source data was performed for each language and motor ROI in the TW1 (300-500 ms) and TW2 (600-800 ms) respectively. Statistical results are shown in Table 2. Language and motor ROIs with significant spatiotemporal clusters are shown in Figure 4.
For the language ROIs, in the TW1, the cluster-based permutation F test revealed a significant main effect of language proficiency in the short insular gyri (p = 0.042) and the planum polare of the superior temporal gyrus (p = 0.042), manifested as greater activation within these areas for the L1 than for the L2. Statistical analysis did not reveal any significant interaction effect or the main effect of abstractness. In the TW2, no significant effect was found for language ROIs.
For the motor ROIs, no significant effect was found in the TW1. In the TW2, results showed a significant main effect of language proficiency in the central sulcus (p = 0.020), manifested as greater activation in the L2 than in the L1. No significant interaction effect or the main effect of abstractness was found.
DISCUSSION
In this MEG study, we investigated the degree of involvement of the language and motor areas in a language comprehension task. We employed spatiotemporally sensitive MEG recordings, which allowed us to examine the temporal trajectory of language and motor cortex activation. Specifically, we investigated whether the degree of involvement of language and motor areas in the stage of semantic processing was modulated by learner-specific factors (i.e., language proficiency), and/or by stimulus-specific factors (i.e., level of abstractness of the language stimuli).
Our source analysis evidenced a typical spatiotemporal trajectory of visual word processing, which witnessed an early robust activation in the occipital area, followed by activation flowing from the posterior to the anterior temporal and frontal areas (Brennan & Pylkkänen, 2012;Carreiras et al., 2013). In addition, the source estimation results showed neural activation of motor areas across all conditions (literal, metaphorical, and abstract) in both native language (L1) and second language (L2). More importantly, our results showed an overall greater involvement of language areas (short insular gyri and planum polare of the superior temporal gyrus) in the L1 than in the L2 in the time window of 300-500 ms, which has been broadly associated with semantic analysis (Kutas & Federmeier, 2011;Lau et al., 2008;Lau et al., 2013). Although greater activation in the posterior superior temporal sulcus can be seen for the L1 than the L2 in the grand-averaged source results ( Figure 2B), it failed to show any statistically significant difference. In addition, our results showed an overall greater involvement of motor area (central sulcus) in the L2 than in the L1 in the late time window of 600-800 ms, which might be associated with post-semantic analysis and integration.
Compensatory Role of the Motor Cortex in Late-Acquired L2 Processing
Our findings corroborate previous studies in showing that the motor cortex is involved in the processing of not only the L1 but also the L2 (Birba et al., 2020;De Grauwe et al., 2014;Monaco et al., 2021;Tian et al., 2020;Vukovic & Shtyrov, 2014;Zhang et al., 2020). In fact, our results suggest a stronger role for motor areas in the L2 than the L1. Our findings are also in line with earlier studies which suggested that the motor (or sensorimotor) area is involved in the processing of not only action-related but also abstract meaning (Dreyer & Pulvermüller, 2018;Guan et al., 2013;Tian et al., 2020;Vukovic et al., 2017). These findings jointly indicate that motor cortex involvement is ubiquitous in semantic processing, regardless of the linguistic features of the stimuli.
The stronger involvement of the motor cortex in the L2 semantic processing, independent of its linguistic abstractness, allows us to speculate on its role in language processing more generally. The finding is in line with some previous studies showing greater motor activation in the L2 than the L1 (Monaco et al., 2021;Tian et al., 2020), though not exactly in the same time window (275 ms after onset in Monaco et al.'s study, 600 ms in the present study). The somewhat earlier emergence of the effect in Monaco et al. may arise from the use of single verbs, while in our study the stimuli were verb phrases, which are relatively more complex semantically, and may evoke longer-lasting cortical engagement. In addition, the semantic task in Monaco et al.'s study required explicit motor simulation, as participants needed to judge if the verb represents a physical or mental action. In contrast, the task in our study only required the evaluation of semantic congruency and did not require any action-related judgment. Although the underlying process in L1 and L2 may be different between Monaco et al.'s study and ours, both studies indicate stronger involvement of motor areas in L2.
However, there are also contradictory findings. The results of Vukovic and Shtyrov's study (2014) pointed to greater involvement of the motor cortex in the L1 than the L2, indicated by stronger mu rhythm ERD. This apparently opposite pattern may at least partly be due to the differences in the brain activation measures. ERD (and event-related synchronization) reflects the temporal changes in the power of oscillations, and particularly the 10-20 Hz (hence mu rhythm) is often associated with the level of top-down inhibitory control. Unlike ERD, evoked responses, on the other hand, are time and phase locked to the onset of incoming sensory input and are likely to reflect a different source of neuronal activation. Particularly for the later stages of activation, evoked responses are likely to represent activation of a distributed network, the center of which is represented by the spatial extent of the source model. In their study, Vukovic and Shtyrov interpreted the stronger modulation for the L1 as the results of a more integrated perception-action circuit for the L1 lexical-semantic representation and a higher degree of embodiment for the L1. An alternative interpretation of their findings may, however, be that even though the task did not require verbal output, L1 more readily and automatically engages articulatory preparation, which may manifest as stronger predictive (i.e., top-down) allocation of resources in the motor areas. This interpretation would be in line with the results of anticipatory alpha modulation in visual and language domains (Wang et al., 2018) and challenges the embodied interpretation of the findings. The stronger and automatic recruitment of motor representations in the L1 in early time windows would also be compatible with increased engagement of motor areas in the L2 in later time windows (as shown in our study). Indeed, given the strongly time-evolving nature of language processing in the brain, it is conceivable that the role of the motor cortex may vary across time. As the source result shows in our study, the activation in the L1 (but not the L2) extended to the precentral sulcus in 300-500 ms, although the difference between L1 and L2 did not show statistically significant clusters.
The discussion of the role of the motor cortex in language comprehension may thus need to be approached with increased resolution (both temporally and spatially), as different neuroimaging modalities, and even different neural measures derived by same modality suggest divergent roles. It is also of crucial importance to acknowledge the time-varying nature of language processing. In addition to the methodological concerns, the search for functional significance of motor cortex also requires rigorous use of reasoning in interpreting the neuroscientific findings. Indeed, it needs to be noted that the greater degree of motor cortex activation may not necessarily imply a higher degree of embodiment. As has been pointed out, the involvement of a certain cognitive process cannot be unequivocally inferred from the presence of brain activation of a certain region (cf. reverse inference, e.g., Henson, 2006;Mahon & Hickok, 2016;Poldrack, 2006), as a particular brain region may carry multiple cognitive functions with a primary or secondary role.
The difficulty in specifying the correspondence relationship between brain regions and cognitive functions also applies to neuroimaging studies concerning action-related language processing. Neural activation of the motor cortex has mostly been elucidated as the result of utilizing the motor cortex for mentally simulating action-related meaning. The inference is made based on the established fact that the motor cortex is engaged in motor execution, motor planning, and motor imagery, as has been widely reported (Filimon et al., 2007;Hanakawa et al., 2008;Leonardo et al., 1995). Consequently, motor activations in the studies of semantic processing are believed to indicate the engagement of the motor cortex in the mental simulation of action-related meanings. However, the motor cortex, in addition to its motor-related cognitive functions, has also been shown to be functionally involved in other cognitive processes in a sub-dominant way, including (procedural) memory retrieval, cognitive control, inhibition, and integration (Francis, 2005;Miller, 2000;Mofrad et al., 2020;Lambon Ralph et al., 2017;Ullman, 2004;Willems et al., 2010). In the context of language processing, as mentioned in Maieron et al.'s (2013) study, the engagement of the primary motor cortex may be related to other aspects of cognitive processing rather than specific linguistic processing, which was inferred based on the lack of modulation of language-motor coupling during the action-verb generation task for both lesion and healthy groups.
In the present study, greater activation of the motor cortex was found for the L2 than the L1 across conditions. Referring to the above reasoning, we are of the opinion that the greater activation of the motor cortex may not imply a greater degree of embodiment in the semantic processing of the L2, but a higher demand for cognitive resources to compensate for its lower proficiency and weaker semantic representation compared with the L1. The interpretation is made based on the joint findings of the underactivation of the language areas (short insular gyri and planum polare of the superior temporal gyrus) at the semantic processing stage and the overactivation of the motor area (central sulcus) at the post-semantic processing stage in L2, compared with L1. The planum polare of the superior temporal gyrus, as part of the anterior temporal lobe, has been shown to be a semantic hub for integrating domain-specific concepts and semantic integration in general (Lambon Ralph et al., 2017;see review by Visser et al., 2010). Interpreted in the context of the present study, the L1 with richer semantic representation (compared with the L2) is likely to engage a greater degree of the anterior temporal lobe for meaning processing. In contrast, the weaker semantic representation of L2 may cost longer time for participants to access the meaning of L2, which might account for the early underactivation in language areas. The motor cortex was over-recruited, presumably, to offset the inadequate engagement of language areas, as a result of weaker semantic representation of L2, compared with L1. However, it is important to collect more direct evidence on the causal role of language and motor areas in linguistic tasks, as neuroimaging studies are necessarily correlative in nature.
Similar interpretation about the compensatory mechanism has also been reported in a study of individuals with dyslexia (Richlan et al., 2011), where underactivation in the left temporal region and overactivation in the motor cortex was found in adults with reading difficulty. This lends support to the idea that motor areas may represent general supportive functions in case of lower proficiency. Indeed, it has been validated by converging empirical evidence, that the retrieval of weakly encoded information relies more strongly on the control network (Lambon Ralph et al., 2017). Based on the above discussion, we argue that the greater activation of the motor cortex in the L2 may not signify a higher degree of embodiment, but a higher demand for cognitive resources to compensate for the inadequate engagement of the language network.
Functional Role of the Motor Cortex
By clarifying the role of learner-specific (i.e., language proficiency) and stimulus-specific (i.e., abstractness) factors, our findings shed light on the functional role of the motor cortex in language processing. There has been a longstanding debate on the functional and epiphenomenal role of motor cortex involvement in the literature on embodied language processing (Bocanegra et al., 2017;Desai et al., 2015;Fernandino et al., 2013;García et al., 2019;Reilly et al., 2019;Repetto et al., 2013;van Elk et al., 2010;Vukovic et al., 2017). Similar to our paradigm, some earlier studies attempted to disentangle these two roles by referring to the time course of the motor cortex activation, compared with that of the language areas (García et al., 2019;Papeo et al., 2009;Reilly et al., 2019;van Elk et al., 2010). The motor-related activations or modulations occurring at an early stage of semantic processing (130-190 ms poststimulus in García et al., 2019;300 ms in Reilly et al., 2019;400 ms in van Elk et al., 2010) are considered as evidence supporting the assumption of the functional (i.e., necessary) role, which claims that the motor cortex directly contributes to semantic processing, while activations occurring at a later stage are considered to reflect post-semantic motor imagery (500 ms in Papeo et al., 2009) and not necessarily contributing to language comprehension. However, the onset of semantic processing is unlikely to be clearly defined by a fixed time point, and it may vary considerably depending on learner-related factors (e.g., language proficiency and language experience) and language-related factors (e.g., language distance). For a less proficient language, the latency of lexical-semantic retrieval and integration can be delayed compared with the highly proficient native language. Considering the influence of language proficiency, we assume that the greater activation of the motor cortex in the L2 in our study is not the result of post-semantic motor imagery but reflects the general cognitive processes that support semantic processing in an indirect way. It may thus be useful for the discussion of the functional or epiphenomenal role of the motor network to focus not only on latency of motor cortex activation, but also on language proficiency, which may lead to variance in the latency of semantic access.
The Null Effect of Abstractness
Our study did not reveal any significant effect of abstractness, suggesting that neural responses in the motor areas may not be modulated by the degree of abstractness of the linguistic input. The finding is inconsistent with our prediction of decreased motor involvement with the increase of abstractness. Our finding is also inconsistent with previous studies exploring the effect of abstractness on a continuum (i.e., literal, metaphorical (idiomatic), and abstract; Desai et al., 2013;Tian et al., 2020). In their studies, hierarchically attenuated motor activation was found with the increase of linguistic abstractness.
So far, most studies concerning the effect of abstractness on motor cortex involvement mainly focused on literal and figurative action-related language (mainly metaphorical and idiomatic). Some revealed greater involvement of motor cortex for literal than figurative language (Cacciari et al., 2011), and some reported a similar degree of motor cortex involvement between them (Boulenger et al., 2009;Boulenger et al., 2012). Inconsistently, some other studies found motor cortex involvement only for the literal language, but not for the figurative (Raposo et al., 2009). The discrepancy in findings may derive from methodological differences across studies, including task demands (covert vs. overt motor association), stimulus properties (word vs. phrase vs. sentence), and ways of presentation (word-by-word vs. whole item, visually vs. aurally). As has been highlighted, the recruitment of the motor cortex in action semantic processing is task (Giacobbe et al., 2022;Tomasino et al., 2008) and context dependent (Raposo et al., 2009).
Moreover, current findings call for a reflection on the relationship between artificial categorization of abstractness and its actual brain response. Although the stimuli do follow a linguistically defined continuum of abstractness, the actual brain responses may not follow such gradation. In future studies, it will be important to test the modulatory effect of abstractness on the degree of motor cortex recruitment by using comparable approaches.
Limitations
Our study has some limitations. First, our study is a correlative study in nature, and interpretations are mainly "bound" to earlier literature. Second, our study only included ROI-based analysis motivated by its hypothesis-driven nature. The exclusion of whole-brain analysis may cause the ignorance of important neural activity in other brain regions. Future studies should further investigate the relationship between language and motor networks in bilingual language processing by employing comparable approaches.
Conclusion
Our study explored the degree of involvement of language and motor areas modulated by language proficiency and linguistic abstractness. We reported an overall greater activation in the language areas for the L1 than the L2 at the semantic processing stage at 300-500 ms, and an overall greater activation in the motor regions for the L2 than the L1 at the later post-semantic processing stage at 600-800 ms. The over-recruitment of the motor areas in the L2 implied a compensatory role of the motor area to offset the lower language proficiency of the L2 in relative to the L1. Our study provides an alternative interpretation of motor cortex involvement in language processing and invites further research to explore the factors that modulate this relationship.
ACKNOWLEDGMENTS
The authors would like to thank Aino Sorsa for her assistance in data collection and Dr. Viki-Veikko Elomaa for program setup. The authors would also like to thank Dr. Weiyong Xu, Dr. Xiulin Wang, Dr. Simo Monto, and Erkka Heinilä for their assistance and suggestions in data analysis.
DATA AND CODE AVAILABILITY STATEMENTS
The data are not publicly available due to the restrictions of research ethics stated in the Privacy Notice for Research Subjects in terms of the privacy of research participants.
The data that support the findings of this study are available upon reasonable request from Tiina Parviainen<EMAIL_ADDRESS>and Lili Tian (litian@jyu.fi).
In compliance with the General Data Protection Regulation, the following situation will be approved when requesting the data: (1) actions aiming to confirm and verify the validity and authenticity of the results of the current research; (2) actions related to scientific research or other compatible purpose. | 9,830.8 | 2022-11-21T00:00:00.000 | [
"Linguistics"
] |
The Social Impact of the Evolution of Internet Language: A Critical Discourse Analysis of Popular Internet Language
: This article aims to explore the evolution of Internet slang and its impact on social communication. By using crawler technology, we collect a large amount of usage data on Internet terms from major social media platforms. This study focuses on analyzing the acceptance of Internet terms among different user groups, the impact of Internet terms on traditional languages, and the future trends of Internet terms. We used quantitative analysis (such as frequency analysis and trend analysis) and qualitative analysis (such as content analysis and discourse analysis) methods to gain a deeper understanding of the socio-cultural meaning and impact of online terms. Research has found that the use of Internet slang is more common among young users and shows specific popular trends over time. At the same time, Internet slang has had a profound impact on traditional language and culture, triggering extensive public discussions on language purity and cultural inheritance. Finally, this article predicts the future development trend of Internet terminology and discusses how to adapt to the changes brought about by Internet terminology while maintaining traditional language norms.
Introduction 1.1 Research background and purpose
The rapid development and widespread use of Internet slang reflects the evolution of society, culture and technology.Internet slang in social media and online communities particularly shows this.This study aims to analyze how the evolution of Internet lingo affects sociocultural structures and explore its impact on personal identity and social interaction.Studying the social impact of Internet slang can not only help us better understand the characteristics of contemporary social communication, but also provide a new perspective for understanding cultural diversity in the context of globalization.As an emerging language form, Internet slang is unique in that it crosses the geographical and cultural boundaries of traditional languages and forms a global communication method [1] .The evolution of Internet slang and its social impact have been hot topics in the fields of sociolinguistics and Internet communication in recent years.With the development of Internet technology, Internet slang has quickly become a part of global communication.This form of language not only affects daily communication, but also shapes new cultural identities and social interaction patterns.This article aims to explore the profound impact of the evolution of Internet language on social culture, and to analyze the currently popular Internet language through critical discourse analysis.
Research questions and scope
This article will focus on the following research questions: How does the evolution of Internet lingo reflect and affect social and cultural changes?How does it shape modern people's social identity and communication methods?The scope of the research will focus on analyzing the development trends of popular online slang and their use in global culture and social media platforms.We'll explore the origins, evolution, and how internet slang affects people's daily communications and social interactions.In addition, this study will also analyze the variation and acceptance of Internet terms in different cultural backgrounds, and how these variations reflect and shape social identity and cultural values [2] .
Development and classification of Internet terms
The development and classification of Internet slang is an important aspect in exploring the evolution of Internet language.Kuznetsova (2022) studied English computer industry terminology and network slang used by network security experts, which reflects the characteristics of this linguistic phenomenon.In addition, the research of Pavelieva and Lobko (2021) shows that English slang in online games also provides a reference for the classification of Internet slang.The way it is formed, the use of metaphors and abbreviations reflects the linguistic process of Internet slang.These terms not only reflect the diversity of Internet culture, but also reveal how Internet terms adapt to changing technological and social environments.These characteristics of Internet slang show that they are an important example of language adapting to the digital age and are of great significance to understanding how language evolves in new communication media [3][4] .
Social impact of Internet terms
The impact of Internet slang on society is reflected in its widespread dissemination and use on social media platforms.Kustiwi, Qadriani and Budianingsih (2022) analyzed 25 Internet terms on Douyin in 2020 and found that these terms are composed of old words with new meanings, innovative words, homophonics, foreign language absorption, and abbreviations.The formation of these terms reflects the influence of creativity and social factors, with both positive and negative social impacts.Liu, Gui, Zuo, and Dai (2019) found that in Internet advertising, the innovative characteristics of Internet terms can increase the attention of advertisements, but their excessive use may have a negative impact on brand and product evaluations.In addition, the widespread use of Internet slang in social media also reflects contemporary society's need for fast and efficient communication.The popularity of Internet slang has brought challenges to the traditional norms and usage habits of language, and at the same time promoted language innovation and diversity [5][6] .
Criticism and acceptance of Internet terms
The criticism and acceptance of Internet terms are related to their status in social context.Research by Majeed and Adisaputera (2020) shows that Internet slang, as an important carrier of popular Internet culture, has a huge impact on people [7] .Internet slang not only reflects social prejudices and stereotypes, but these biases and stereotypes are expressed more strongly in Internet slang than in standard language.Research by Liu et al. shows that the use of Internet slang in advertising can increase viewers' attention to the advertisement, but does not necessarily improve product evaluation and brand awareness [8] .The use of Internet terms in a specific environment can determine the meaning of the text and affect various aspects, thus determining the pragmatic rules.This shows that the criticism and acceptance of Internet language is a complex process, involving multiple aspects of language use, including the social function, cultural connotation and communication effect of language.The widespread use of Internet slang and the social repercussions it brings show that it has become an indispensable part of modern communication and is also an important area of language and cultural research [9] .
Crawler technology and data collection
Selected Platforms and Data Types: Weibo: For its rich user-generated content reflecting current trends in online language.Zhihu: For insights into public opinion and detailed discussions.Douban: To analyze cultural aspects of online language use in reviews.Introduction to Crawling Tools and Techniques: Python Language: Primary programming language for scripting.requests: To send HTTP requests and retrieve web pages.BeautifulSoup: To parse HTML/XML documents and extract data.Selenium: For interacting with web pages that require dynamic interaction.
Data Cleaning and Preprocessing:
Steps include removing irrelevant information, standardizing formats, and text preprocessing (e.g., tokenization).
Quantitative Analysis Methods: 1) Frequency Analysis: To understand the prevalence of certain online terms.
Example formula (Term Frequency-Inverse Document Frequency, TF-IDF): 2) Trend Analysis: To identify patterns over time in the use of specific online terms.Qualitative Analysis Methods: Content Analysis: Analyzing the context and usage scenarios of online language.Discourse Analysis: Examining public attitudes and opinions towards online language.
Analysis of usage of Internet terms
Usage frequency and trends: We applied time series analysis to explore changes in the frequency of use of specific Internet terms over time.By calculating the TF-IDF value for each word, we are able to identify which words have significantly increased in frequency during a specific time period. 1 shows the frequency of use of three popular online terms "eating melon", "peeing it" and "doutu" on different social media platforms.Data shows that on the Weibo platform, the word "eating melon" is used most frequently.This may be because the word is often used to discuss social events or gossip, and these discussions are very common on Weibo.In contrast, "Pixia" is used more frequently on Zhihu, reflecting that users on this platform prefer to use relaxed and humorous expressions when discussing topics."Doutu" is used less frequently on Douban, possibly because the platform is used more for in-depth discussions and sharing rather than intense photo battles.By comparing these data, we can better understand usage patterns and cultural differences in online slang in different social media contexts.
User group characteristics: Researchers or marketing analysts analyze user basic information data, such as age, gender, regional distribution, etc., to understand the characteristics of user groups who use specific online terms.This helps us understand the spread path and audience base of Internet terms. 2 shows the proportion of users of different age groups using specific Internet terms.The 18 to 25 age group accounts for the highest proportion, reaching 40%, indicating that young user groups are more inclined to use these online terms.There is also a considerable proportion of users aged 26 to 35, accounting for 35%, while users over 36 years old account for 25%, which reflects that Internet terms are more popular among younger groups.
Usage frequency trend chart: Figure 1 shows the frequency of use of specific Internet terms over time.Data shows that from January to May 2023, the frequency of use showed an upward trend, especially in May 2023, the frequency of use reached a peak.This indicates that the Internet term gradually increased in influence and popularity during the study period.Trend analysis of frequency of use reveals patterns in the popularity of Internet terms during a specific period, which is critical to understanding the dynamic nature of online communication.For example, we observed that some specific Internet terms showed a significant increase in usage frequency during the study period, which may have been affected by specific social events or Internet hot spots.The identification of this trend not only helps to understand the propagation mechanism of Internet terms, but also provides a basis for predicting future language change trends.
Theme distribution of critical discourse on Internet terms: Figure 2 shows the topic distribution of Internet language critical discourse.As can be seen from the figure, 'Cultural Impact' accounts for 25% and 'Linguistic Purity' accounts for 35%, indicating that this is the main focus.'Social Communication' accounts for 30%, while other categories account for 10%, showing the public's diverse views on the impact of online terms.In analyzing critical discourse on Internet slang, we note the different public perceptions of these modern expressions.Topics of criticism ranged from concerns about the purity of traditional languages to discussions of the impact on social communication.This part of the analysis is of great significance for understanding the acceptance and impact of Internet terms in different social groups.Through careful examination of these discourses, we can gain insight into society's diverse attitudes and reactions to the evolution of Internet slang.
Analysis of social acceptance of Internet terms
The data analysis of this study revealed significant differences in the acceptance of Internet terms among different social groups.Young user groups show a higher acceptance of Internet slang, which may be related to their more frequent use of social media and online communication platforms.In contrast, older user groups are less receptive to these emerging language forms, which may be partly due to their persistence in traditional language and culture.In addition, educational background also plays a role in the acceptance of Internet terms.Users with higher education levels tend to use Internet terms more cautiously, possibly out of considerations for language accuracy and professionalism.
The impact of Internet slang on traditional languages
The rise of Internet slang has had a certain impact on traditional languages.On the one hand, the convenience and expressiveness of online language have enriched daily communication, making language more vivid and contemporary.On the other hand, some scholars and language experts have expressed concerns about the impact that Internet slang may have on traditional language norms.The simplified and informalized nature of online slang may affect the standard and purity of language, especially in its use in academic and formal settings.Therefore, how to accept innovations in Internet terms while maintaining traditional language norms has become an issue worthy of in-depth discussion.
Research summary
This study conducted an in-depth analysis of the use of Internet terms on major social media platforms, revealing the importance and influence of Internet terms in contemporary social communication.Research results show that Internet slang is particularly popular among young user groups, which reflects the rapid evolution of Internet culture and language.Through trend analysis, we found that Internet terms show obvious trends and changes over time.In addition, the impact of Internet slang on traditional languages has aroused widespread social concern, especially in terms of language purity and cultural inheritance.Our research also explores the future trends of online language, foresees its diversified and international development direction, and also points out the potential impact brought by the development of artificial intelligence and natural language processing technology.
Research limitations and future research directions
Although this study provides important insights into the evolution and impact of online parlance, there are some limitations.First, data collection mainly relies on crawler technology and is limited to specific social media platforms, which may not fully represent the language usage of all web users.Second, due to the rapid changes in Internet lingo, some of the findings of this study may quickly become outdated.In addition, this study mainly uses quantitative analysis methods, which may not be able to fully capture the deep meaning and cultural connotation of Internet terms.
To address these limitations, future research can conduct in-depth exploration in the following directions: Researchers should first expand data sources to include more diverse social media platforms and users of different cultural backgrounds to obtain a more comprehensive research perspective.Secondly, they should continue to track the development of Internet language, especially paying attention to the impact of technological development on Internet language.Finally, researchers should combine qualitative research methods, such as in-depth interviews or case studies, to better understand the sociocultural motivations and meanings behind Internet lingo.Through these methods, future research can provide a more comprehensive interpretation of the role and impact of online language in the changing socio-cultural environment.
is the frequency of word w in document d , N is the total number of documents, and () DF w is the number of documents containing word w .
Figure 1 :Figure 2 :
Figure 1: Trend of Online Language Usage
Table 1 :
Data examples: frequency of use of Internet terms on different social media platforms
Table 2 :
User group characteristics | 3,276 | 2024-01-01T00:00:00.000 | [
"Linguistics",
"Computer Science",
"Sociology"
] |
Development of Sistem Informasi Pendataan Warga (Sitawar) for the Realization of Integrated Population Data at RT Level With RW
Citizen data collection by the majority RT still using manual way, the way it can be said not effective in terms of time of workmanship or quality of information produced.With the existence of SITAWAR can help update citizen data more effectively residing in area especially RT level.However, from the results of previous research, there are SITAWAR problems that have been built, can only be accessed by one RT only (stand alone) in one Rukun Warga area, SITAWAR has not been connected to RW level, so if the chairman of the RW is required to report demographic data that exist in the neighborhood to the village level, still done by call each existing RT in the environment or waiting for reports from each RTchairman.Looking at the issues already described, the existing SITAWAR built is developed using the concept of web application, where all RTs in one RW area are connected to the database. This research was conducted using Development Research. While the method of system development using prototype with system design tool using Object Oriented Approach and built using PHP language (Hypertext Preprocessor). With the development of SITAWAR can produce more qualified population information.
I. INTRODUCTION n the previous research, it was revealed that in processing the data kependududkan an administrative area through the data collection of residents on the smallest unit of Rukun Tetangga (RT), data collection of residents by the RTchairmanstill using manual way, and the way it can be said not effective both in terms of time of workmanship and the quality of information produced there are still errors [1].Information quality is influenced by how the data is processed. Improperly processed data, can produce false information, and of course it is difficult to expect correct decisions if built on false information [2].To optimize the existing citizen data information, it is better to use computerized, so it can be used for the acquisition of information about citizens data quickly and accurately.
The problems presented can be solved by the construction of Citizens Registration Information System (SITAWAR), where SITAWAR is a product of information system covering Citizen's data (permanent residents, non-permanent residents), guests (mandatory report 1x24 hours), birth, and death.Based on previous research, with the build of SITAWAR can provide more effective and accurate information about the data of citizens residing in a region especially in Rukun Tetangga and facilitate the chairman of RT in the data collection of its citizens.
However, from the results of previous studies that have been done, there are problems as follows [1] : 1. SITAWAR which has been built, can only be accessed by one RT only (stand alone) in one area of Rukun Warga. That is, between RTs can happen recording one data the same citizens, can be said one citizen recorded in several different RT. 2. SITAWAR is not yet connected to RW level, so if the RW chairman is asked to report the population data in his neighborhood to the village level, it is still done by handling each RT header in his neighborhoodor waiting for a report from each RT chairman. From the problems that have been presented and see the response from user SITAWAR in previous research, then SITAWAR which has been built developed by using concept of web application, where all RT in one region of RW connected to the database. So with the development of SITAWAR as already described can produce more quality population information. The object of this continued research, conducted in RW 08 consisting of RT 01 to RT 05 in Kelurahan Ciumbeleuit, Kecamatan Cidadap, Bandung City. Selection of this research object based on consideration, in the region to be a pilot for the implementation of the village entrance internet program launched by the central government.
In this study there are some limitations of the problem so as not to deviate from the purpose that has been determined so that the study studied will be more focused. The limits of the problem are: I 1. The scope of the use of SITAWAR information system is at the level of Rukun Tetangga (RT) connected to RW. 2. The user entitled to use SITAWAR is the coordinator of the citizen or someone who becomes the leader in a collection of citizens (The chairman of RT and RW).
II. RESEARCH METHODS Research method used is Development Research. Development Research is a research method to develop a product based on the needs of a previous research conducted [3,4]. The resulting product can be either objects or hardware and can also be software [5].
While the system approach method used is prototype. The Prototype model begins with the collection of system requirements. Developers and system users meet to fully define the desired software objectives, identify what needs are known, and outline where further definitions are mandator. A "quick design" is then done [1,6]. This method prioritizes communication between developers and customers.By using this method, developers can easily create a system or application according to the needs of the customers [1,6,7].
III.1 Design of SITAWAR
In general SITAWAR which will be made presented in Figure 1: Adding, changing and reducing citizen data. This activity can only be done by the Head of RT for data of citizens in their respective territories.
Print Citizen Data
Printing data of citizens, both residents and seasonal residents. For the RT Chairman can only print data for the residents in his area, while the RW Chairman can print the data of each RW he leads as well as the whole citizen data.
Print Guest Data
The RT Chairman can print the guest data on for the visiting guests in his area while the RW chairman can print all the guest data in the RW environment or can be separated by RT.
Print Rekapitulation Citizen Data
Chairman of RW can print the recapitulation of citizen data. Recapitulation of the data of this citizen includes the clustering of data of citizens by age, sex, as well as permanent residents and seasonal citizens. From this report we can see birth and death data for each RT.The chairman of the RT can only print the recapitulation of citizen data in their respective territories.
III.2 Implementation III.2.1Homepage of SITAWAR Due to this information system is a development of previous versions, then the addition of access rights feature in this application because it will be accessed by someChairmanof RT and Chairman of RW so when first accessing the SITAWAR page, the system will display the main page containing the "home" menu and the "login" menu as shown below:
III.2.2 LoginMenu Page
After pressing the login menu, then the system will redirect to the login page to enter the data user name and password.
III.2.3 Homepage for Chairman of RT After Login
Here is the main page view after the RT Chair successfully login. On this page is presented home menu, recording, citizen data, guest data, reports, search, and exit menu.
III.2.4 Homepage for Chairman of RW After Login
Here is the main page view after the RW Chair successfully login. On this page is presented data menu of permanent residents, data of seasonal residents, guest data, citizen recapitulation, and exit menu.The menu is generated based on research that has been done for RW in checking the data of citizens throughout the region.
III.2.5 Data Pages of Permanent Residents
On this page the system will provide options before presenting the data. The choice of data to be displayed can be based on RT as well as all data. After making the selection of data view based on needs, it will display data according to the choice that has been selected. On this page provided links to download data of permanent residents. Downloadable data can be based on the number of Family heads only, as well as the overall population data.
III.2.6 Page of seasional Citizens Data
As in the data pages of permanent residents, on this page even the system will provide choice of data that will be displayed either the data by RT or the overall data. In addition, for permanent data inipun RW Chairman can print Data Residents Permanent either based on the head of the family or the whole citizen.
III.2.7 Guest Data Page
On this page, data will be presented in the form of guest data. Data to be hacked by the system can be displayed based on the RT as well as the overall data. The data presented is the guest data that is still visiting and the data of the guest who has returned with the information in the form of interest of the visiting guests.
III.2.8 Report Page
This page contains the data recapitulation of residents in the area of RW 008. The data presented are data of citizens by age and gender, guest data, birth data, and death data. IV. CONCLUSION Based on the results of the analysis that has been done in the development of Sistem Informasi Pendataan Warga (SITAWAR) to the conclusion that is with the construction of SITAWAR data collection of residents by the coordinator of citizens (Chairman of RT) run more effectivelywith minimal error and ambiguity. In addition, the coordinator of citizens can easily obtain information on the residents in the environment more quickly and optimally.
ACKNOWLEDGMENT Thanks to the Lembaga Penelitian dan Pengabdian Masyarakat (LPPM) UNIKOM who has funded this research. | 2,198 | 2017-12-02T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
Cuticular Antifungals in Spiders: Density- and Condition Dependence
Animals living in groups face a high risk of disease contagion. In many arthropod species, cuticular antimicrobials constitute the first protective barrier that prevents infections. Here we report that group-living spiders produce cuticular chemicals which inhibit fungal growth. Given that cuticular antifungals may be costly to produce, we explored whether they can be modulated according to the risk of contagion (i.e. under high densities). For this purpose, we quantified cuticular antifungal activity in the subsocial crab spider Diaea ergandros in both natural nests and experimentally manipulated nests of varying density. We quantified the body-condition of spiders to test whether antifungal activity is condition dependent, as well as the effect of spider density on body-condition. We predicted cuticular antifungal activity to increase and body-condition to decrease with high spider densities, and that antifungal activity would be inversely related to body-condition. Contrary to our predictions, antifungal activity was neither density- nor condition-dependent. However, body-condition decreased with density in natural nests, but increased in experimental nests. We suggest that pathogen pressure is so important in nature that it maintains high levels of cuticular antifungal activity in spiders, impacting negatively on individual energetic condition. Future studies should identify the chemical structure of the isolated antifungal compounds in order to understand the physiological basis of a trade-off between disease prevention and energetic condition caused by group living, and its consequences in the evolution of sociality in spiders.
Introduction
Living in groups is widespread and found in insects, spiders, birds and mammals, among other animals. Individuals that live in groups obtain benefits such as predator avoidance [1][2][3][4], foraging efficiency [5,6], and enhanced reproductive success [5]. However, group living has associated costs: compared to solitary individuals or small groups, individuals in large groups can incur costs such as increased competition for resources [7][8][9]. Moreover, group-living animals are faced with the potential risk of accumulating pathogens that can spread more easily between group members [10,11]. Therefore, group living can not only be costly in terms of competition between individuals but also in terms of pathogen defense and disease contagion [12][13][14].
One important cost derived from contagious diseases is the activation and use of immune responses. Immunity can be costly because of toxic byproducts of immune reactions or because it requires resources that are spent at the expense of other functions [15,16]. To decrease these costs, some group-living animals modulate their investment in immune response according to the risk of infection [17]. Under crowded conditions (i. e., when contagion risk is high), some insects show more active immune system compared to organisms living in low densities [18,19], this might allow them to be more resistant than individuals kept solitarily [19,20]. Such density-dependent activation of immune responses can be interpreted as an adaptive strategy to decrease the costs associated with the maintenance and activation of immune defenses. A simpler strategy to deal with microorganisms is to avoid contagion, either via behavioural avoidance of infected individuals or places [21,22], hygienic behaviour in the nest [23] or via chemical avoidance with antimicrobials on the skin or cuticle [24][25][26]. Despite incurring some cost, both behavioural and chemical protections can reduce the cost of activating the immune system once the pathogen has infected the host.
The subsocial crab spider Diaea ergandros lives in nests built from Eucalyptus leaves. Nests contain up to 70 spiderlings that are usually the offspring of a single female [27]. These nests persist several months and all spiders of the group communally enlarge the nest by attaching more leaves. The inside of these nests can be quite sealed, moldy and contain food debris [27], and therefore favors the development of pathogens with the risk of pathogenesis being elevated at increased conspecific density. Furthermore, infections can be particularly dangerous because group members are close relatives and there is thus low genetic variability that could result in more susceptible groups [14,28]. Previous experimental research on D. ergandros shows that individuals in large groups build larger and more protective nests and survive better in the presence of a predator compared to small groups or singly kept spiders [29]. However, the influence of pathogen pressure on large spider groups might be higher but has not been explored yet.
The main aim of this study was to investigate whether D. ergandros spiders have developed density-dependent polyphenism pathogen defenses. Antifungal cuticular response in both natural nests and artificial nests of varying density was measured. Despite previous descriptions in different taxa, cuticular antifungal activity has not been described in spiders yet. Costs involved in the maintenance of cuticular antifungal activity were examined by measuring spiders' lipid body reserves. This study represents the first exploration of density dependence in preventive antifungal production within a spider species and evaluates its possible dependence on physiological condition (lipid reserves). We predict that antifungal protective activity will be a) present in crab spiders, b) a costly trait, dependent on physiological condition, and c) more intense with increasing nest density.
Ethics
No permits were required for the described study, which complied with all relevant regulations. The species used in these experiments (Diaea ergandros) is not an endangered or protected species under CITES regulations.
Study species
The present study was carried out with the subsocial crab spider Diaea ergandros Evans, 1995 (Araneae: Thomisidae). Unlike other social and subsocial spiders, these spiders do not build webs and instead live in nests built from Eucalyptus leaves [30,31]. Each nest consists mainly of a single mature female and her offspring, although migration between nests can occur [31,32]. Even though spiders may migrate between nests, relatedness between nest mates is relatively high [31,32]. Juveniles develop during 8-9 months after which they leave the nest [30]. For the present study we only used juvenile spiders of the instars 4 and 5 (see below
Antifungal activity measurements
Antifungal activity was measured from the cuticle of spiders following a procedure modified from [24]. Since one spider did not provide enough sample of cuticular antifungal measurements (unpublished data), we used groups of five spiders for each sample. Spiders were anesthetized with carbon dioxide and washed with 2 mL of Ethanol 90% for five minutes to remove cuticular antifungals. Ethanol was evaporated from the sample with a rotary evaporator (25 mbar, 25uC). Under sterile conditions in a fume hood, each dry sample of spider extract was re-suspended in 125 mL of Luria Bertani (LB) broth and 100 mL of a culture of Cordyceps bassiana spores (2000-3000 spores/mL) in LB broth were added. Prior exploratory analysis varying spore concentration from 965-3580 spores/mL show no significant correlation with optical density after 24 h of fungal growth in LB broth (Pearson r = 0.16, P = 0.38, N = 15), showing that our assay is not sensitive to initial spore concentration. From each sample, 200 mL were placed in 96-well plates for measuring fungal growth as increments in optical density (OD) with time using a spectrophotometer (405 nm). Antifungal activity was measured as inhibition of spore germination after 24 hours in comparison with a positive control that consisted of a mixture of 100 mL of the C. bassiana culture and 100 mL of sterile LB broth. As a negative control (with no fungi, used to assure sterility during the assay), we used a mixture of 100 mL sample of spider extract in LB broth with 100 mL of sterile LB broth. At least two positive and one negative controls were used for each plate. The OD after 24 hours was considered the value of antifungal activity. Large OD values represent high fungi growth.
Antifungal activity and energetic reserves under natural densities Cuticular antifungal activity was measured in samples from 14 nests (LB broth with fungi and cuticular extracts; see below), 11 positive controls (LB broth with fungi) and 5 negative controls (LB broth without fungi). To examine the relationship between the level of antifungal activity in spider cuticles and the energetic body condition (lipid body reserves) under different densities, 20 nests that contained 12-61 spiders were used. Nest size was estimated by measuring length and width of each nest (60.1 mm). Nest density was calculated as number of spiders/nest size. Due to contamination, antifungals could not be measured in 6 nests, and so sample size was reduced to 14 nests when antifungal activity was analyzed. For this part of the study, we only used juvenile spiders of 4 th and 5 th instars.
Antifungal activity and energetic reserves under manipulated densities
In this experiment we tested if solitary spiders differed in their antifungal activity and body energetic reserves from their siblings kept in groups. For such purpose, individuals from selected nests were randomly allocated to one of two treatments: solitary and grouped spiders. Solitary spiders were kept individually and grouped spiders were kept in groups of 16 individuals in plastic transparent cups (100 mL) for 10 days under natural light and darkness regime. This controls for environmental and sanitary conditions that could be variable in natural nests that are probably exposed to different pathogens. Spiders were starved seven days before the experiment to get individuals with similar initial body condition at the start of the experiment. Once the experiment started, spiders were offered three meals consisting of one male and one female living Drosophila melanogaster per individual. After the 10 day period, five solitary or five grouped spiders from each nest were washed together in 90% Ethanol and antifungal activities were compared (see above). In this experiment we used a total of 10 nests of juveniles (ranging 27-85 spiders per nest) at the 4 th instar when antifungal activity and lipids were measured. Due to contamination, antifungals could not be measured in two of the 10 nests, leaving a total of eight nests for the analyses of antifungal activity.
As a measurement of individual body condition in both natural and artificial nests, we measured lipid body reserves [33]. Lipids were quantified as the difference in body dry weight before and after three 24 hour submersions in chloroform. We found that lipid reserves ranged from 0-6 mg in natural nests and 0-1.5 mg in artificial nests (see results). This can be explained by the age of the studied animals, considering that spiderlings grow with age: while natural nests comprised individuals of 4 th and 5 th instars, artificial nests included only individuals at the beginning of the 4 th instar.
Statistical analyses
The relationship between nest density (number of spiders/cm 2 ), antifungal activity and lipid reserves in nests taken directly from the field was examined using linear regressions. To test for the effect of density manipulation on antifungal activity and lipid contents, we used general linear mixed models with treatment (solitary or crowded) and original nest density (number of spiders) as explanatory variables as well as their interaction. The interaction between both covariates was tested but it was removed from the analysis for being non-significant (antifungals: P = 0.695, lipids: P = 0.634). Given that solitary and crowded treatments came from the same nest, nest ID was included as a random variable in the models. Relationships between antifungal activity levels and lipids were analyzed with Pearson correlations. The presence of outliers was examined with Cook's distances and variance homogeneity was tested with Fligner-Killeen tests [34]. All analyses were performed in R 2.10.0 [35].
Results
A washing of D. ergandros cuticles efficiently reduced fungal growth in a culture medium after 24 hours, showing that spiders possess antifungal activity in the cuticle that is effective against the fungal pathogen Cordyceps bassiana (ANOVA F 2,27 = 9.416, P,0.001; Figure 1). A priori contrasts show that OD measurements of fungal growth in media with cuticle washings were significantly lower than OD of positive controls with C. bassiana (t = 2.744, P = 0.011) and significantly higher than negative controls (t = 2.228, P = 0.034); positive controls showed higher OD than negative controls (t = 4.202, P,0.001), confirming that there was no contamination in the culture medium.
In nests taken from the field, there was no relationship between nest density and cuticular antifungal activity (R 2 = 0.035, P = 0.524, N = 14; Figure 2). In addition, the intensity of cuticular antifungal activity was not correlated with the amount of lipid body reserves in individual spiders (Pearson R = 20.303, P = 0.293, N = 14). However, nest density and individual lipid content were negatively related (R 2 = 0.229, P = 0.032, N = 20; Figure 3), meaning that spiders from high-density nests had a lower lipid content compared to spiders from low-density nests.
In artificial nests, grouped spiders did not differ in antifungal activity from their solitary siblings (Mixed model F = 3.211, P = 0.116, N = 8; Figure 4). Initial spider density was controlled but was omitted from the final analysis for being non significant
Discussion
The present study represents the first evidence that spiders produce cuticular compounds that can reduce potential fungal infections. Cuticular antimicrobials have been described in insects such as ants, termites, wasps, bees, moths, thrips and whiteflies [24][25][26]36,37], and represent a first line defense against pathogens and parasites [38]. Our finding supports the idea that antimicrobial defense accompanies the evolution of sociality in animals living in large aggregations [13,24].
A variety of antifungal compounds have been found in arthropod (particularly insect) cuticles, and they can be either secreted by the host itself or produced by symbiotic microorganisms [39]. These compounds can include free fatty acids, proteins (defensins), amides, aldehydes, terpenes, glucanase enzymes, chitinase and protease inhibitors, alkaloids and quinones [39,40] which, together with cuticular melanization [41,42], inhibit spore germination, hyphae growth or penetration into the body. In particular, melanization response [41], caprylic, valeric and nonanoic acid [26], monoterpenes [43] and salicylaldehyde [44] have shown to be effective against the same fungus used in the present study (Cordyceps bassiana) in lepidopterans and coleopterans. To our knowledge, there is no description of specific cuticular antifungals in spiders, but many of the compounds present in insects have ancient evolutionary origins and find homologies in other insects, nematodes and mammals [40,[45][46][47]; hence, similar compounds can also be present in spiders, contributing to the inhibition of fungal growth in our assays and probably against natural pathogens. Given that many of these compounds are soluble in ethanol, which was the solvent used for their extraction in the present study, our measure of antifungal activity probably includes the effect of several of these or related compounds. As a perspective of this work, investigating the chemical nature and the action spectrum of the cuticular compounds isolated in D. ergandros, as well as addressing to what extent they are synthesized by the host itself or by symbiotic microorganisms would give further insight into the study of antifungals in spiders.
The synthesis of the abovementioned cuticular antifungals [26,36] certainly requires resources that are obtained from the host diet, such as amino-acids and fatty acids, and thus we predicted that only animals in good physiological condition would be able to produce effective antifungals. However, we found no relationship between energetic body-condition (lipid reserves) and intensity of antifungal activity. There are two possible explanations for this finding: (a) that these antifungals are not costly to produce and individuals in a range of conditions can maintain high levels, which seems unlikely given their chemical composition, or (b) that cuticular antifungals are so important in infection avoidance that individuals cannot allow to reduce their production. Given that living solitary can be costly in terms of foraging efficiency and predation risk [29], the latter interpretation can also explain why experimentally isolated individuals lost energetic condition compared to individuals kept in a group in our laboratory experiment.
Isolated individuals paid an energetic cost and not a cost in antifungal activity, probably because dietary restriction can be overcome [48] whereas loss of antifungals is too risky. If the same resources (e. g. amino-acids, fatty acids) are shared between cuticular antifungals and metabolic function, a trade-off between disease prevention and physiological condition may result [49]. As we ignore the plasticity of up-and down regulation of cuticular antifungals, we cannot discard the possibility that differences in antifungal activity could be detected in a long-term experiment.
We predicted that investment in antifungal protective activity would be higher in crowded nests, especially given the potentially higher risk of contagion that exists when there is high genetic relatedness among nest members [24], as in D. ergandros. However, we found no relationship between nest density and cuticular antifungal activity, neither in nature nor under laboratory conditions, suggesting that spiders constitutively express cuticular antifungal activity against the tested fungus. It is possible that cuticular antifungal activity, unlike immune response [17,18,50], is not a dynamic trait that can be regulated under varying risk of infection [21]. Other unmeasured components of spider defense, such as haemolymph immune response or melanization, might be adjusted under different densities, as occurs in other arthropods [17,19], but this remains to be tested.
Despite the benefits of group living in terms of foraging, reproduction or predator avoidance [1,4,5], our results show that living at high densities is costly in terms of reducing energetic reserves. However, we only found this pattern under natural conditions, where food was not artificially supplemented and under natural predation risk. On the other hand, when density was manipulated in the laboratory, food was provided and predators were excluded, grouped spiders maintained higher lipid reserves than their solitary siblings. These contrasting results suggest that when animals are stressed (i. e. under natural conditions), living in large groups can be energetically disadvantageous, while when animals are not stressed (i. e. under laboratory conditions) living in groups is energetically beneficial. For example, nutrient availability under laboratory conditions could reduce cannibalism within nests affecting nest density [51,52], but this idea remains to be tested.
In the present study we showed that living in large groups impacts on physiological condition of group members depending on the environmental conditions, presumably food availability. Despite energetic body condition being highly sensitive to group size, this was not the case for cuticular antifungal activity, which was not affected by nest density. The permanent pressure of pathogens on spider nests is likely to be responsible for the low plasticity of cuticular antifungal expression; if investment in pathogen protection needs to be constant, it can explain why energetic condition is compromised if resources are scarce. Future studies should formally evaluate the physiological basis of a potential trade off between lipid reserves and cuticular antifungals, and evaluate the importance of protective defense in the evolution of sociality. | 4,305 | 2014-03-17T00:00:00.000 | [
"Biology",
"Environmental Science"
] |
The machine protection system for CSNS
The China Spallation Neutron Source (CSNS) accelerator consists of an 80 MeV H- LINAC, a 1.6 GeV rapid cycling synchrotron (RCS) and two beam transport lines. The uncontrolled beam may permanently damage the components or lead to very high residual radiation dose along the beam line. So the equipment protection must be deliberately designed and implemented. The machine protection system (MPS) protects components from being damaged by the beam. The response time requirement for the CSNS MPS is less than 20 ms, so the PLC (programmable logic controller) was adopted to implement the interlock logic. The MPS was implemented as a two-tier architecture system, and developed through utilizing PLC and Experimental Physics and Industrial Control System (EPICS) software toolkits. The application logic was taken into careful consideration during the implementation stage. An embedded CPU module can function as an IOC accessing PLC I/O modules through the sequence CPU, with an embedded Linux operation system. The interlock logic and heartbeat functions were tested with all functions ok. Time consumption has been measured thoroughly since the important requirement, which is around 15 ms to stop the beam. MPS was completed in Sep. 2017 and then put into operation. It has been operating smoothly for more than 3 years. MPS has played an important role in every stage of CSNS’s commissioning and operation and achieved high reliability during the user’s experiment operation. The accelerator recently runs stably with low equipment failure.
Introduction
CSNS is a high-power proton accelerator-based facility. The uncontrolled beam may permanently damage the components or lead to very high residual radiation dose along the beam line. So the equipment protection must be deliberately designed and implemented. The CSNS equipment protection system consists of two protection systems; one is the PLC-based slow protection system, i.e., MPS. The other is the FPGA-based fast protection system, i.e., fast protection system (FPS). The interaction of MPS and FPS is coordinated by the run management system (RMS), which is responsible for the management of accelerator operation [1][2][3][4][5].
The response time requirement for the MPS is less than 20 ms. Considering the response time requirement, PLC is a good choice to implement the MPS. For the CSNS MPS, Yokogawa FA-M3 series PLC was adopted. The CSNS MPS was implemented in two-tier architecture. The field measured response time for CSNS MPS is about 15 ms, which could fulfil the requirements [6][7][8][9].
Input signals classification
The input signals from technical systems, including power supply system, vacuum system, radio frequency system and other related system, were determined after comprehensive discussion and investigation. The signals need to be collected by MPS as shown in Table 1 [10][11][12][13][14].
Sub-area definition
The CSNS has five beam destinations: linac dump (L-DUMP), linac dump1 (LRDMP1), injection dump (I-DUMP), RCS dump (R-DUMP) and target as shown in Fig. 1. Each beam destination is defined as a machine mode, and another specific machine mode is defined as the ION SOURCE for ion source condition only.
The current operation area is the actual beam operation area according to the machine mode. For the availability of accelerator's operation, only the input signals included in the current operation area are involved in the interlock logic, and those outside the current operation area are not involved in. Therefore, it is necessary to manage the input signals properly to facilitate the implementation.
The sub-area definition is designed to achieve this goal. The MPS divides the input signals from the whole facility into 10 sub-areas; each sub-area contains many specific input signals, and one input signal can only belong to one sub-area; MPS will use the machine mode to determine the interlock sub-area combinations as shown in Table 2.
Critical equipment
For some specific power supplies for dipole magnets, which make the beam point to five beam destinations as shown in the figure above. The power supplies are LRSWBPS01, LRBPS, ISEP1, ISEP2, ESEP, RTBPS01, RTBPS02, RTB-VPS01 and RTBVPS02, both the fault signal and the current setting signal, are sent to the MPS. MPS defines these power supplies as the critical equipment. Besides, the primary strip foil is also defined as critical equipment.
Redundant design
In order to promote the reliability of the MPS, the redundant design principle was adopted for the CSNS MPS.
Interconnection with other protection systems
The protection system for CSNS consists of MPS, FPS and PPS (personnel protection system). Figure 2 shows the diagram of the interactions among RMS and MPS, FPS and PPS. The operation of MPS must be interacted with other systems. RMS plays the role of coordinating these three protection system and facilitates the accelerator's operation management. MPS and FPS has independent cable routes to stop the beam. PPS signals are treated as an input of MPS and RMS. Furthermore, the heartbeat signals of PPS, RMS and MPS can be monitored by each other to detect the system malfunction.
Beam stopping procedure
MPS must stop the beam immediately if the fault signal is detected. There are two actuators for MPS to stop the beam, which are ion source 50 kV accelerating power supply and FPS, respectively. When the fault signal is received by MPS, it will send the interlock signal to ion source control system and FPS simultaneously; the ion source control system will turn down voltage of the ion source accelerating power supply to 0 kV, and the FPS will carry out a series of actions. Figure 3 shows the diagram of beam stopping procedure of MPS.
Overall system architecture
The CSNS MPS was implemented as a two-tier architecture system and developed through utilizing PLC and experimental physics and industrial control system (EPICS) software toolkits. The hardware architecture is depicted in Fig. 4; the main station responds for interlock logic, and the four sub-stations respond for signals collection. The signals from various types of equipment are collected by 4 sets of slave PLCs located at different control stations and then transmitted to the MPS-A master PLC through multi-core cables only. The signals of critical equipment are directly sent to the master PLC of MPS-A and MPS-B via cables.
Application logic
The application logic is realized by ladder diagram, and the following four points were taken into careful consideration during the implementation stage.
1) Fail-safe for interface
For the consideration of reliability, the principle of fail-safe is applied to each interface. The interface of the input and output signal is normally closed under normal condition and will be switched to open state when the equipment failure happens or the cable route is broken. 2) Self-locking MPS utilizes the self-locking relay to implement the interlock. If the input signal changes to fault state, the self-locking relay will keep itself locked, which can only be reset by a manual reset trigger. This is illustrated in Fig. 5, the input relay X01004 of vacuum valve R2GV01 changing from closed to open, which results in the internal relay I00102 switching to self-locking state. The self-locking implementation is also helpful for the post-analysis.
3) Snapshot function
The snapshot function is designed to record the input channels state at the interlock moment, which can be used to identify which input signal triggered the interlock first. This is illustrated in Fig. 6; the input relays X00301-X00316 (No. 3 slot digital input module) are assigned to the internal relays I00001-I00016 using a block transfer instruction BMOV. When the falling edge of the interlock signal I00515 is detected, D00101 and D00001 are refreshed by D00001 and I00001-I00016, respectively. That is, D00001 records the snapshot of X00301-X00316 at the interlock moment, and D00101 records the snapshot of X00301-X00316 at the previous interlock moment.
4) Periodic heartbeat monitoring
The healthy state of MPS itself is essential to the operation CSNS. In order to monitor the health of the MPS, the heartbeat signal generating and monitoring method was implemented. Figure 7 shows the periodic heartbeat signals monitoring among protection systems, which are generated and checked by the PLC I/O. Figure 8 shows the 10-s-width periodic heartbeat signals for slave PLCs at local control stations, which are generated by the timer in the ladder.
Software applications
An embedded CPU module named F3RP61 can function as an IOC accessing PLC I/O modules through the sequence CPU, with an embedded Linux operation system. The adoption of the embedded IOC not only simplifies the architecture of the system, but also improves the data transmission speed [15].
Control system studio (CSS) BOY toolset was utilized to design the MPS operator interface (OPI), which is organized in two major layers, i.e., (1) the main window; (2) the detail windows. As shown in Fig. 9, the main window includes 13 detail windows.
The stand-alone application is bypass status save/restore tool, which is designed to provide the operator tools necessary to save a consistent condition of bypass status and to allow, when necessary, a quick restore to a previously saved status.
Comprehensive test and effectiveness
The interlock logic and heartbeat functions were tested first with all functions ok. In selected machine mode, any input channel in the current operation area can stop the beam immediately; those outside the current operation area do not affect the beam.
Time consumption has been measured thoroughly since the important requirement. Figure 10 shows the oscilloscope screenshot for response time test; the yellow signal represents the time when the input signal is received by slave PLC on RTBT station; the blue signal indicates the time when the output signal is generated from master PLC on MPS-A station. The response time is around 15 ms to stop the beam. Delay of cables and PLC I/O modules contribute mainly.
The accelerator recently runs stably with low equipment failure. For example, on June 10, 2019, an MPS interlock occurs when the corresponding vacuum degree deteriorates, in Fig. 11, the blue and green signals from cold gauge R4CCG05 and R4CCG06, respectively, indicate the time when threshold reached; the orange signal represents the time when the valve R4GV02 was triggered, and then, the red signal indicates the time when the MPS interlock was generated from master PLC on MPS-A station to protect valve R4GV02.
Summary
MPS was completed in September 2017 and then put into operation. It has been operating smoothly for more than 3 years. MPS has played an important role in every stage of CSNS's commissioning and operation and achieved high reliability during the user's experiment operation. To eliminate operators' misoperation, MPS is also under strict management. The comprehensive test was carried out, after | 2,430.4 | 2021-02-23T00:00:00.000 | [
"Physics"
] |
New Results for Oscillation of Solutions of Odd-Order Neutral Differential Equations
: Differential equations with delay arguments are one of the branches of functional differential equations which take into account the system’s past, allowing for more accurate and efficient future prediction. The symmetry of the equations in terms of positive and negative solutions plays a fundamental and important role in the study of oscillation. In this paper, we study the oscillatory behavior of a class of odd-order neutral delay differential equations. We establish new sufficient conditions for all solutions of such equations to be oscillatory. The obtained results improve, simplify and complement many existing results.
By a solution of (1), we mean a continuous real-valued function x(t) for t ≥ t x ≥ t 0 , which has the property: Υ is continuously differentiable n times for t ≥ t x , r Υ (n−1) α is continuously differentiable for t ≥ t x , and x satisfies (1) on [t x , ∞). We consider only the nontrivial solutions of (1) is present on some half-line [t x , ∞) and satisfying the condition sup{|x(t)| : t ≤ t < ∞} > 0 for any t ≥ t x . On many occasions, symmetries have appeared in mathematical formulations that have become essential for solving problems or delving further into research. High quality studies that use nontrivial mathematics and their symmetries applied to relevant problems from all areas were presented. In fact, in recent years, many monographs and a lot of research papers have been devoted to the behavior of solutions of delay differential equations. This is due to its relevance for different life science applications and its effectiveness in finding solutions of real world problems such as natural sciences, technology, population dynamics, medicine dynamics, social sciences and genetic engineering. For some of these applications, we refer to [1][2][3]. A study of the behavior of solutions to higher order differential equations yield much fewer results than for the least order equations although they are of the utmost importance in a lot of applications, especially neutral delay differential equations. In the literature, there are many papers and books which study the oscillatory and asymptotic behavior of solutions of neutral delay differential equations by using different technique in order to establish some sufficient conditions which ensure oscillatory behavior of the solutions of (1), see [4][5][6].
The authors in [1,3,7] have studied the oscillatory behavior of the higher-order differential equation And the author of [8] extended the results to the following equation Agarwal, Li and Rath [9][10][11][12] investigated the oscillatory behavior of quasi-linear neutral differential equation under the condition 0 ≤ p(t) < 1.
The latter differential equation was studied by Xing et al. in [13] under the condition The aim of this paper is to study the oscillatory behavior of the solutions of odd-order NDDE (1). By using Riccati transformation, we establish some sufficient conditions which ensure that every solution of (1) is either oscillatory or tends to zero.
Auxiliary Results
In order to prove our main results, we will employ the following lemmas.
Then G attains its maximum value on R at v * = (αC/(α + 1)D) α and Lemma 2 ([15]). Assume that c 1 , c 2 ∈ [0, ∞) and γ > 0. Then where . Assume that f (n) (t) is of fixed sign and not identically zero on [t 0 , ∞) and that there exists a t 1 ≥ t 0 such that f (n−1 and there occur two cases for the derivatives of the function Υ: The rest of the proof is similar to proof of ([3] Lemma 2). Thus, the proof completed.
Proof. Let x be a positive solution of (1). Using (I 4 ) in (1), we have thus, This implies that where = c−p 0 ( +c) +c > 0. Using (6) in (5), we obtain Integrating the above inequality from t to ∞, we obtain Integrating (7) twice from t to ∞, we have Repeating this procedure, we arrive at Now, integrating from t 1 to ∞, we see that which contradicts (4), and so we have verified that lim t→∞ Υ(t) = 0.
Main Results
In the following lemma, we will use the notation Lemma 6. Let x be a positive solution of the equation in (1). If (8) and the equality h • ζ = ζ • h hold, then the following inequality is valid Moreover, if (8) and (9) hold, then Proof. Let x be a positive solution of (1). Then, there exists t 1 ≥ t 0 such that x(t) > 0, x(h(t)) > 0 and x(ζ(t)) > 0 for t ≥ t 1 . By the equality Υ(t) = x(t) + p(t)x(ζ(t)) together with Lemma 2, we obtain the inequality From (5) and the properties h • ζ = ζ • h and ζ ≥ ζ 0 , we obtain Using the latter inequalities and taking those in (5) and (13) into account as well, we obtain which with (12) gives This proves the inequality in (10). In order to show inequality (11) we proceed as follows. From (8) and (9), we obtain Moreover, . (15) Combining (14) with (15) and taking into account (12), we have This proves (11) and completes the proof of Lemma 6. (8) hold. Morever, assume that (4) is satisfied and that there exists a function δ ∈ C 1 ([t 0 , ∞), (0, ∞)) with the property that for all sufficiently large t 1 ≥ t 0 ,there exists t 2 ≥ t 1 such that lim sup Then, a solution x(t) to (1) either oscillates or else tends to zero when t → ∞.
Theorem 2.
Suppose that the functions h and ζ satisfy (8), (9) and h(t) ≤ ζ(t) for t 0 . In addition, suppose that (4) is satisfied. If there exists a function δ ∈ C 1 ([t 0 , ∞), (0, ∞)) with the property that for all sufficiently large t 1 ≥ t 0 ,there exists t 2 ≥ t 1 such that lim sup is valid. Then a solution x(t) of Equation (1) oscillates or tends to zero when t → ∞.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: Not applicable. | 1,468.6 | 2021-06-21T00:00:00.000 | [
"Mathematics"
] |
Molecular Characteristics and Pathogenicity of Porcine Epidemic Diarrhea Virus Isolated in Some Areas of China in 2015–2018
Since 2010, Porcine epidemic diarrhea virus (PEDV) has caused severe diarrhea disease in piglets in China, resulting in large economic losses. To understand the genetic characteristics of the PEDV strains that circulated in some provinces of China between 2015 and 2018, 375 samples of feces and small intestine were collected from pigs and tested. One hundred seventy-seven samples tested positive and the PEDV-positive rate was 47.20%. A phylogenetic tree analysis based on the entire S gene showed that these strains clustered into four subgroups, GI-a, GI-b, GII-a, and GII-b, and that the GII-b strains have become dominant in recent years. Compared with previous strains, these strains have multiple variations in the SP and S1-NTD domains and in the neutralizing epitopes of the S protein. We also successfully isolated and identified a new virulent GII-b strain, GDgh16, which is well-adapted to Vero cells and caused a high mortality rate in piglets in challenge experiments. Our study clarifies the genetic characteristics of the prevalent PEDV strains in parts of China, and suggests that the development of effective novel vaccines is both necessary and urgent.
INTRODUCTION
Porcine epidemic diarrhea virus (PEDV) is the etiological agent of porcine epidemic diarrhea (PED), a severe diarrhea disease in piglets that is characterized by severe watery diarrhea, vomiting, dehydration, weight loss, and nearly 100% mortality (1). PED occurred sporadically around the world in 1990-2009, but in 2010, an acute and severe outbreak of PED in piglets occurred in China and spread to other Asian countries, causing large economic losses (2)(3)(4)(5)(6)(7)(8). In April 2013, PED suddenly erupted in the United States, causing many piglets to die, and the mortality rate in suckling piglets reached 100% (9,10). The disease was shown to be caused by a highly pathogenic PEDV variant. The S genes of the classical CV777 strain and the new strain OH851 have the same insertions and deletions (S-INDEL strains), unlike those of the variant PEDV strains (11,12).
The genome of PEDV is ∼28 kb in length and contains seven open reading frames (ORFs), which encode four structural proteins and three non-structural proteins (13). S is the largest structural protein, and contains neutralizing antibody epitopes and a specific receptor-binding site for viral entry (14). At present, four antigenic epitopes have been characterized in the S protein, including the CO equivalent (COE) domain (amino acids 499-638), the epitopes SS2 (amino acids 748-755) and SS6 (amino acids 764-771), and epitope 2C10 (1368-GPRLQPY-1374) (15,16). Because the S protein plays a vital role and the S gene is extensively mutated, it is often used as the target gene in the analysis of viral genetic variation. Based on whether the S gene contains the INDEL sequence or not, PEDV strains can be classified into genogroup II (GII) or genogroup I (GI), respectively. GI is further divided into two subgroups (GI-a, GIb) according to INDEL sequence differences. At present, most isolates recovered in China belong to GII (17). A new mutation in the S gene of PEDV has recently been reported (18). Different GIb strains have also been reported in different areas of China (19,20). Studies have shown that PEDV strains of different genotypes can coexist, in one province in particular. These findings indicate that PEDV has continued to spread widely to most areas of China and has caused serious economic losses in the pig industry, reflecting the complex evolution of the virus. Therefore, extensive research into the evolutionary pathogenic mechanism of these strains in China is essential.
To control the spread of PEDV, a classical-CV777-derived vaccine has been widely used in many areas of China. However, it does not provide adequate protection against PEDV invasion (6,21). In contrast, the wide-scale use of vaccines has increased the environmental stress upon the virus, causing PEDV to mutate to escape its host's immune defenses. To further and fully understand the prevalence and evolution of PEDV in southern China, diarrhea samples were collected from piglets in this study, and the variation of the S genes of the PEDV-positive samples were analyzed with sequence alignment and a phylogenetic tree.
Sample Collection
A total of 375 diarrheic samples from the small intestine tissues or feces were collected from suckling piglets on pig farms in eight provinces of China (Fujian, Guangdong, Guangxi, Guizhou, Jiangxi, Shandong, Hubei, Hu'nan, and Hainan) between June 2015 and October 2018. The piglets suffered severe watery diarrhea and dehydration. The diarrheic feces were resuspended in 1 mL of phosphate-buffered saline (PBS) in 1.5 mL Eppendorf tubes. After centrifugation at 10,000 × g for 5 min, 200 µL of each supernatant was transferred to a new tube for RNA extraction and virus isolation.
RNA Extraction and Sequencing
The total RNA from the collected supernatants was extracted with TRIzol Reagent (TaKaRa), according to the manufacturer's instructions. The extracted RNA was subjected to reverse transcription (RT-PCR) with three pairs of newly designed primers to amplify and detect the PEDV S gene ( Table 1). The three overlapping PCR products were identified with 1.5% agarose gel electrophoresis. The positive PCR products were sequenced by Sangon Biological Engineering Co. Ltd, and the entire sequence of the S gene was determined with the DNAStar software. The complete S gene sequences were submitted to GenBank, under the accession numbers shown in Table 2.
S Gene Sequence Analysis
The complete genome sequences of reference strains available in GenBank were downloaded and used in a phylogenetic analysis ( Table 3). A phylogenetic tree was constructed from all the S genes of the representative strains and isolates, using the neighbor joining method with 1,000 bootstrap replicates, with the Molecular Evolutionary Genetics Analysis (MEGA, version 6.0) software (22).
Virus Isolation
Vero cells grown in a 24-well cell culture plate were infected with the previously prepared supernatants and maintained in Dulbecco's modified Eagle's medium (Thermo Scientific) containing 7 µg/mL trypsin without EDTA (Thermo Scientific). The cells were monitored daily for a cytopathic effect (CPE). When the CPE appeared in 70% of the cells, the cells were fixed with anhydrous ethanol. An immunofluorescence assay (IFA) was then performed with an anti-N protein monoclonal antibody (mAb; cat. # PEDV12-F, Alpha Diagnostic International Inc., USA) diluted 1:1,000 and an Alexa-Fluor R -488-conjugated Affinipure goat anti-mouse IgG(H+L) secondary antibody (SA00013-1; Proteintech, USA) diluted 1:400.
Titer Determination for the Viral Proliferation Curve
Vero cells cultured in a 24-well cell culture plate were infected with PEDV at a multiplicity of infection (MOI) of 0.01. The cells and supernatants were collected at 12, 24, 36, 48, 60, 72, and 96 h post-infection (hpi). The cells were then frozen and thawed three times. After centrifugation at 10,000 × g for 5 min at 4 • C, the supernatants were collected and the median tissue culture infective dose (TCID 50 ) was determined with a microtitration infection assay.
Piglet Challenge Experiment
To determine the virulence of the third-generation isolated strain GDgh16, six healthy 4-day-old colostrum-deprived suckling piglets were artificially fed bovine milk from birth. The colostrum-deprived piglets were randomly divided into two groups, with three piglets in each group. One group was challenged orally with 0.5 mL of PEDV at 10 5.0 TCID 50 /mL. The other group received cell-culture medium. Duplicate samples of small intestine were collected in from all piglets, which had been euthanized at 48 h postchallenge. One of the duplicate samples was crushed in a grinder with 2 mL of PBS. The crushed intestine
Statistical Analysis
The numerical data are expressed as means ± standard deviations (SD), and all data were analyzed with the GraphPad Prism software (version 5.02 for Windows; GraphPad Software Inc.).
PEDV Detection and Phylogenetic Analysis Based on the S Gene
As shown in Sixty-two S genes from the test strains and representative strains downloaded from GenBank were analyzed with a phylogenetic tree. As shown in Figure 1, the phylogenetic analysis divided these strains into two groups, GI and GII, based on whether the S gene contained the S-INDEL (23). GI included the classical strains (CV777 and SM98) and some isolates from China, the USA, and Japan collected after 2010. Therefore, GI was further divided into three subgroups: GI-a, GI-b. GI-a contained classical S-INDEL strains. GI-b contained a new S-INDEL strain. GII contained non-S-INDEL strains and was also divided into two subgroups, GII-a and GII-b, which consisted of a number of extremely virulent strains from all over the world, isolated since 2010. The strains isolated in the present study belonged to GI-a, GI-b, GII-a, and GII-b. GDjm18-2, was categorized as subtype GI-a, which also included the classical vaccine strains CV777-attenuated and JS2008. GDjm17-1 was categorized in GIb cluster. The other strains identified in the present study formed eight clusters. Of these strains, 25 isolates from Guangdong, three isolates from Fujian, and one isolate from Jiangxi formed three clusters and belonged to GII-b, with strong similarity to GD-A and CH-GXNN-2012. The other 34 isolates formed five clusters and belonged to GII-a. Among these 34 strains, JXyc15 was closely related to the C4 cluster (North American strains), whereas the other strains showed closer identity to CH-ZMDZ-11, CH-HNAY-2015, and CH-HNCDE-2016L. As shown in Table 5, all the strains isolated in 2015 belonged to GII-a (100%). In 2016 and 2017, 46.15% and 43.75% of the isolated strains belonged to GII-a, respectively. Compared with GII-a, the rate slightly increased, and in 2016 and 2017, 53.84 and 50% of the isolated strains belonged to GII-b, respectively. However, in 2018, 72.22% of the isolated strains belonged to GII-b, which was much higher than the proportion that belonged to GII-a in 2017 (22.22%). The comparison result show: Variation of PEDV S gene is continuously occurring and GII-b strains may be the dominant strains in China in the future.
Amino Acid Sequence Analysis of Neutralizing Epitopes in the S Protein
Neutralizing antibodies play an important role in the prevention and control of viral infections. Therefore, it is important to identify and analyze the amino acid sequences of the neutralizing epitopes in viral proteins. To analyze the genetic characteristics of the South China PEDV strains, the deduced amino acid sequences of the S proteins detected in our study were aligned and compared with those of representative
Numbers of Mutated Amino Acid in Different Domains of the S Protein
To further analyze the amino acid mutations in the different domains of the S protein in these isolates, the different domains of the S protein were aligned with those of CV777, and the average number of amino acid mutations present in each year was calculated. The S protein can be divided into the S1 protein and the S2 protein. The S1 protein contains four domains: SP (amino acids 1-18), S1-NTD (amino acids 19-233), COE and RBD (amino acids 501-629), whereas the S2 protein contains five domains: SS6 (amino acids 764-771), HR1 (amino acids 978-1117), HR2 (amino acids 1274-1313), TM (amino acids 1324-1346), and 2C10 (amino acids 1368-1374). Previous data have indicated that 2C10 is conserved, so we did not analyze the 2C10 domain. As shown in Figure 3, in these strains, the S1 sequence had more amino acid mutations than the S2 sequence. From 2015 to 2018, the number of mutated amino acids in S1 remained at a high level, whereas that in S2 decreased. Furthermore, the numbers of mutated amino acids in SP (amino acids 1-18) and S1-NTD (amino acids 19-233) increased slightly, whereas the numbers of mutated amino acids in the COE and RBD domains decreased. SS6, HR1, HR2, and TM in the S2 protein did not change obviously from 2015 to 2018.
Pathogenicity of GDgh16
Because the samples of GDgh16 came from a scale pig farm that had experienced high mortality, and it displayed a high viral titer in Vero cells, we investigated its pathogenicity in vivo. As shown in Figure 4A, PEDV-infected cells showed a characteristic green color, indicating that PEDV was isolated successfully. The viral proliferation curve indicated that the titer of strain GDgh16 increased to 10 6.33 TCID 50 /mL at 36 hpi but decreased to 10 4.99 TCID 50 /mL at 96 hpi ( Figure 4B). Six piglets were divided into two groups; one group was challenged orally with GDgh16 and the other group was inoculated with cell culture medium. All the challenged piglets showed classical clinical signs, including vomiting, watery diarrhea, and dehydration, at 16 hpi. The challenged piglets began to die at 24 hpi, and all had died by 48 hpi (Figure 5A). The control pigs remained healthy, with no detectable PEDV shedding. The control piglets were euthanized and necropsy was performed on all the piglets. The viral copy numbers in different parts of the intestine were determined with RT-qPCR. The duodenum, jejunum, ileum, cecum, and colon had higher viral copy numbers than the rectum ( Figure 5B). The duodenums, jejunums, and ileums of the piglets were subjected to an IHC assay. As shown in Figure 5C, the tissues from the piglets in the challenged group showed remarkable levels of viral antigens compared to those in the control group. The results of GDgh16 challenge test indicate that the variant strains are a large threat to the pig industry and that the control of PEDV spread has become a critical issue.
DISCUSSION
PEDV has become an important diarrhea virus, causing extensive damage to pig farms worldwide. Because there is no effective vaccine against the emerging prevalent strains in China, the variant PEDV strains occur frequently on many farms in different areas (24). Because there are extensive viral variants and the protection afforded by commercial vaccines is limited, it is necessary to fully understand the genetic variations and epidemiology of PEDV to facilitate the development of nextgeneration vaccines.
The S gene encodes the largest structural protein of PEDV and stimulates the host body to produce neutralizing antibodies against the virus. Because its variants are extensive, the S gene is commonly used as the target gene in studies of the genomic characteristics of PEDV (25). A phylogenetic analysis showed that strains from four subgroups of PEDV were present from 2015 to 2018, and that GII-a and GII-b were the two most prevalent subgroups in China at that time. From 2015 to 2018, eight strains belonging to four subgroups (GI-a, GI-b, GII-a, and GII-b) were epidemic in Jiangmen (Guangdong), which suggests that PEDV had mutated widely and the PEDV epidemic was becoming more complex. These results are consistent with those of Wen et al. (26). In 2015, all the isolated strains belonged to GII-a, whereas in 2018, 72.22% of strains belonged to GII-b, and only 22.22% of strains belonged to GII-a. Interestingly, unlike GII-a, which includes strains from other countries, such as America, South Korea, and Japan, the GII-b subgroup only contains Chinese-isolated strains. Combined with previous studies, these results suggest that GII-b strains may be the dominant strains in China in the future (27,28).
The S protein is highly variable, and many studies have shown that amino acid changes in the S protein can affect the virulence and pathogenicity of PEDV. Our study has shown that the numbers of amino acid mutations in the SP1 and S1-NTD domains of PEDV increased in 2017 and 2018. It had been suggested that S1-NTD is a vital domain related to viral virulence (29,30) and that conformational changes in S1-NTD are related to the high pathogenicity of PEDV strain FJzz1 (18,27). Increasing numbers of more-virulent PEDV strains have recently emerged (18,27,31). Whether the mutations identified in this study alter the major conformation and thus the pathogenicity of these strains will be investigated further in the future. Our data show that the PEDV positivity rate in the provinces tested increased from 2015 to 2016, but decreased from 2016 to 2018, which might be attributable to improvements in disease prevention and control strategies. Many pig farms use the "feed-back" mode to ensure sow immunity to PEDV and to protect piglets against PEDV infection. This is an effective measure to prevent PED, but there is also a risk of virus dispersal, which is responsible for the many GI-b strains reported to date (19,20,(32)(33)(34).
Four neutralizing epitopes of the PEDV S protein have been determined: the COE domain (499-638), epitope SS2 (748-755), epitope SS6 (764-771), and epitope 2C10 (1368-1374) (15,16). In the present study, we detected amino acid changes at 35 positions in the COE domain. Moreover, one strain, GDhz16, had four continuous amino acid mutations in epitope SS6. Epitopes SS2 and 2C10 also contained amino acid substitutions. The antigenicity, pathogenicity, and neutralization properties of isolated strains are altered by such mutations, especially some insertions and deletions in the S protein (35,36). Therefore, the vaccine derived from prototype strain CV777 protects against the disease induced by classical strains but not the disease caused by variant strains (24,37). Whether these amino acid changes affect the antigenicity and neutralization properties of the four neutralizing epitopes warrants investigation in future studies.
Based on previous epidemiological and clinical observations of field strains since 2010, the emerging GII strains are highly pathogenic (38). To investigate the pathogenicity of the isolated variant strains, three piglets were infected orally with GDgh16. The piglets in the infected group began to show clinical signs of diarrhea at 12 h, and developed the typical symptoms of PED at 16 h. Morbidity reached 100%. The piglets began to die at 24 hpi, and all had died by 48 hpi. Moreover, their small intestines contained high viral copies and many viral antigens, indicating that GDgh16 was a highly pathogenic strain. Other researchers have demonstrated that different types of pigs infected with variant PEDV strains shared consistent outcomes (39)(40)(41)(42). These results indicate that the variant strains are a large threat to the pig industry, and that the control of PEDV spread has become a critical issue.
In conclusion, the PEDV strains circulating in parts of China between 2015 and 2018 clustered into four subgroups: GI-a, GI-b, GII-a, and GII-b. The GII-b strains became dominant in 2018. Compared with previous strains, these strains displayed multiple variations in the SP and S1-NTD domains and the neutralizing epitopes of the S protein. We successfully isolated and identified a new virulent GII-b strain, GDgh16, which is well-adapted to Vero cells and causes a high mortality rate in piglets. Our study provides insight into the genetic characteristics of the prevalent PEDV strains in parts of China, and suggests that the development of effective novel vaccines is both necessary and urgent.
DATA AVAILABILITY STATEMENT
The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found in the article/supplementary material.
ETHICS STATEMENT
The animal study was reviewed and approved by The National Engineering Center for Swine Breeding Industry (NECSBI 2015-16).
ACKNOWLEDGMENTS
This manuscript has been released as a pre-print at Research Square (43). | 4,365.2 | 2020-12-07T00:00:00.000 | [
"Agricultural And Food Sciences",
"Biology"
] |
Dynamics of the mobile platform with four wheel drive
The dynamics problem of motion of the mobile platform with four wheel drive under the unsteady conditions have been formulated and analysed. The mobile platform prototype have been equipped with four independently driven and steered electric drive units.The theoretical model have been formed for the proposed design concept of the platform. The relations between friction forces in longitudinal and transverse directions in reference to the active forces have been considered. The analysis of the motion parameters for different configurations of the wheel positions has been included. The formulated initial problem has been numerically solved by using the Runge-Kutta method of the fourth order. The sample simulation results for different configurations of the platform elements during its motion have been included and the conclusions have been formulated.
Introduction
The description of motion of the three wheeled mobile platform has been presented in [5]. The preliminary studies of the four wheeled mobile platform have been conducted, and the results have been gathered in [4]. The research about systems such as wheeled mobile robots have been performed by scientists representing different point of view. An adaptive robust trajectory tracking controller for a Mecanum-wheeled mobile robot with Newton-Euler approach has been described in [1]. The robust controller for tracking the trajectory of the four wheeled vehicles with description of the robust control design has been presented in [2]. The output-feedback control strategy for the path following of autonomous ground vehicles has been proposed in [3]. The mathematical model of a 4-wheel skid steering mobile robot with an extension of the kinematic control law at the dynamic and motor levels using the Lyapunov analysis has been developed in [6]. Formulation of the dynamic optimization problem with the kinematic principles has been described through differential equations in [7].
The non-linear dynamics model of the robot with four steered and driven wheels has been established and described in [8], in order to define an accurate observer of the sideslip angle in a high dynamics context (designing the controller).
In this work the dynamics of motion of the mobile platform with four wheel drive is considered and the formulated initial problem has been solved by using the Runge Kutta method of the fourth order. The simulation results have also been included in the further sections.
Model of the mobile platform motion
The model of dynamics of motion of the four wheeled mobile platform in Cartesian coordinates has been formulated. The characteristics of the design of the platform is schematically presented in Fig. 1. Geometric features have been identified as follows: a, b -lengths between wheels in the x direction, c, d -lengths between the wheels in the y direction, hs -distance between the ground and the center of mass of the platform, h -height of the platform, rk -radius of the platform's wheel. The model of the mobile platform constituting the base of all considerations under the dynamics of motion is presented in Fig. 2. OXYZ represents the reference frame. S point, according to Fig. 2, is the centre of mass of the platform, and it has been chosen as the origin of the local coordinate system connected with the platform. The e1, e2, e3 vectors are unit vectors in global coordinate system. The local coordinate OiXiYiZi (i = 1,2,3,4) have also been introduced in the model. The directions of the respective axes of all local coordinate systems have been adopted as parallel so the i, j, k are unit vectors in local coordinate systems, respectively. The forces taken into account in the analysis have been shown in Fig. 3. Model of the platform has been designed as the system with four steered wheels with electric drive units. The possibility to realize motion in any desired direction of motion have been provided. The design concept assumes that each wheel can be steered independently. The dynamics description have been determined with taking into account the acting forces on each wheel of the platform. The forces have been transformed in between the coordinate systems in order to obtain the motion parameters and to present the results in global coordinates. The description has been established by considering the mobile platform as the rigid structure.
The method of determining the motion parameters is gathered in this work and the set of the equations is presented. The Coulomb friction model has been adopted in this work.
The mathematical description of dynamics in a planar motion has been presented in previous work [4]. The model of the dynamics has been developed with taking into account the description of the coordinates transformation. The possibility to fall into the skid has been also included. The active force Fci can be calculated from the formula: (1) where: Mni -the drive torque, rk -radius of the wheel.
The friction forces have been set in the form of the Eq. 2 and 3.
where: μw, μp -the coefficients of friction in the longitudinal and transverse direction, vwi, vpi -the velocity components in the longitudinal and transverse direction, respectively. All of the forces applied to the wheels occurring during motion can be boiled down to the resultant forces according to the Eq. 4.
where: Foi-the other resistance force occurring during motion. The vector equation of progressive motion of the center of mass of the mobile platform can be written in the form: where: m -the total mass of the platform, a -the acceleration of the center of mass of the platform, Wi -the resultant force obtained by considering the active and passive forces. The vector equation of the rotational motion around the center of mass for the platform can be written in the form: (6) where: K -the angular momentum vector of the platform, si -the location vectors of each of the drive wheels, Mi -moment occurring during the rotational motion of the wheels.
Considering the planar motion of the platform on the plane OXY and neglecting the moments Mi, on the bases of Equations 5 and 6, the equations of motion can be written in the form: where: Ẍ -acceleration of center of mass on the X-axis in reference frame, Ÿ -acceleration of center of mass on the Y-axis in reference frame, -angular acceleration around center of mass of the platform. The equations of motion, written in the form of differential equations, together with the initial conditions have been used for determination of the trajectory, velocity and acceleration of the platform.
The solution of the formulated initial problem the Runge-Kutta method of the fourth order has been used. The sample simulation results have been presented in the next section.
The simulation results regarding the chosen cases of motion of the designed mobile platform have been determined on the basis of the potential possible situation. The total observation time of platform motion work is 20 s. During work time of similar systems, the different type of possible unexpected situations may have happened.
In this work every wheel of the mobile platform at first four seconds of motion drives on the same surface, represented in the same coefficients of friction. After four seconds of motion, the left side wheels invaded the slippery surface, e.g. because of frozen puddle, poured oil or ride on a different pavement.
In mathematical description the different value of the coefficients of friction between the left and right side wheels have been introduced. The initial values of the following parameters are gathered in Table 1. The course of the initial drive torque is presented in Fig. 4. The coefficients of friction have been changed from the initial values. The coefficient of friction in the longitudinal direction μw = 0.1, and in the transverse direction μp = 0.05 after four seconds of motion.
Fig. 4. Representation of the drive torque Mni
The dependence between the active and passive forces have been gathered and have been graphically presented in Fig. 5. been occurred. The course of the linear velocity of the progressive motion of the platform is presented in Fig. 10.
The velocity of the progressive motion of the platform, after switching off the drive torque, remains constant. This is a consequence of simplification, that all the resistant forces have been neglecting at this period of motion time. Without this assumption the platform would lose velocity until it stops.
Conclusions
The presented model of dynamics is useful in investigation of the motion of wheeled mobile robots. The consequences of the unexpected situations that may occur during working such machines can be analyzed. The representation of relations between the motion parameters have been included. The developed model is also useful to examine different configurations of the drive units.
The formulated initial problem has been solved numerically by using the Runge -Kutta method of the fourth order. The sample simulation results can be treated as the cases of the platform motion in possible circumstances of platform work. Directions of further studies concern the extension of the model with introducing to the mathematical description other elements of the real object in order to prevent the platform from falling into a skid. | 2,111.2 | 2019-01-01T00:00:00.000 | [
"Engineering"
] |
Energy transfer through third‐grade fluid flow across an inclined stretching sheet subject to thermal radiation and Lorentz force
The heat and mass transfer through the third grade fluid (TGF) flow over an inclined elongating sheet with the consequences of magnetic field and chemical reaction is reported. The impact of activation energy, heat source/sink, and thermal radiation is considered on the TGF flow. Fluid that demonstrate non-Newtonian (NN) properties such as shear thickening, shear thinning, and normal stresses despite the fact that the boundary is inflexible is known as TGF. It also has viscous elastic fluid properties. In the proposed model, the TGF model is designed in form of nonlinear coupled partial differential equations (PDEs). Before employing the numerical package bvp4c, the system of coupled equations are reduced into non-dimensional form. The finite-difference code bvp4c, in particular, executes the Lobatto three-stage IIIa formula. The impacts of flow constraints on velocity field, energy profile, Nusselt number and skin friction are displayed through Tables and Figures. For validity of the results, the numerical comparison with the published study is performed through Table. From graphical results, it can be perceived that the fluid velocity enriches with the variation of TGF factor and Richardson number. The heat source parameter operational as a heating mediator for the flow system, its influence enhances the fluid temperature.
List of symbols
The fluid flow through a stretching sheet holds substantial significance within the domain of fluid dynamics, owing to its wide array of uses in various engineering and industrial domains.Abolbashari et al. 1 analytically examined the behavior of fluid flow.Their study focused on a flow scenario where a stretching sheet was involved with velocity slip condition.Zeeshan et al. 2 deliberated the heat transmission on the motion of a ferromagnetic fluid over an extending surface.This ferromagnetic fluid consisted of a well-blended mixture of magnetic solid particles all of this occurs in the presence of an electromagnetic dipole state.Shit et al. 3 conducted a study that explored the dynamics of unsteady boundary layer magnetohydrodynamic flow and convective heat source.Sandeep A mathematical model was developed with the aim of improving the rates of energy transference, enhancing the efficiency and effectiveness of thermal energy propagation.Dogonchi et al. 12 described the entropy and thermal analyses of the nanoliquid flow within a porous cylinder.Some remarkable results recently presented by Ref. [13][14][15][16][17][18] .
The flow of a mixture of fluid and solid particles is inherently complex and can be influenced by numerous variables.To better understand and study these intricate flows, one common approach is to treat the mixture as a NN fluid.Considerable research has been dedicated to the analysis of various transport phenomena occurring within non-Newtonian fluids, including substances like coal slurries.Among these processes, heat transfer is of particular significance in the context of handling and processing these fluids.It plays a pivotal function in the efficient management and treatment of such complex mixtures 19 .Ariel 20 conducted a study on the laminar flow and steady of a TGF over a permeable flat conduit.Ellahi and Riaz 21 carried out an investigation to examine the TGF with changing viscosity in a conduit.This study also considered the heat diffusion features of the fluid in the context of the analysis.Bilal et al. 22 explored the MHD motion of Carreau Yasuda liquid initiated by an exponentially extending surface.Adesanya et al. 23 performed a study on the intrinsic irreversibility linked with the motion of third-grade fluid through a conduit exposed to convective heating.This study recognizes that the heat generated leads to continuous entropy generation within the channel.Reddy et al. 24 conducted an investigation to understand the effect of the Prandtl number on TGF around a vertically oriented cylinder that is uniformly heated.Mahanthesh and Joseph 25 examined the steady-state behavior of third-grade liquid flowing over a pressure-type die in the existence of nanoparticles.The fluid is dissipative and its properties are considered to be constant throughout the analysis.7][28][29][30] .
Magnetohydrodynamics (MHD) is the discipline that analyzes the behavior of electrically conductive substances including plasmas, ionized gases and liquid metals when subjected to magnetic fields.This area of research investigates the interaction between fluid motion and electromagnetic forces and it possesses extensive applications in geophysics, engineering, plasma physics and astrophysics.The impact of fluctuating viscous flow within a narrowing channel was scrutinized by Al-Habahbeh et al. 31 .Rashidi et al. 32 provided an extensive overview of the utilization of MHD and biological systems.The investigation of MHD fluid motion in diverse orientations linked to human anatomical structures is a significant scientific domain given its relevance and applications in the field of medical sciences.Ellahi et al. 33 examined the concurrent impacts of MHD, heat transfer and slip over a flat plate in motion.Furthermore, this study also assessed the influence of entropy generation within this context.Lv et al. 34 explored the effects of various physical phenomena, including diffusion-thermo, radiationabsorption in the context of MHD free convective spinning flow of nanoliquids.Kumam et al. 35 studied the MHD Radiative unsteady fluid flow with the upshot of heat source across a channel placed in absorbent medium.Tian et al. 36 studied the energy transfer though fluid flow surrounded by a rectangular enclosure having a heat sink filled with hybrid nanofluids and the exploration focused on the joint effects of forced and natural convection.Bhatti et al. 37 conducted research into the unsteady flow within the confines of parallel spinning spherical disks placed in a permeable medium.The influence of magnetization on lubrication had attracted consideration due to their important roles in various industrial applications.One notable example was their increased use in hightemperature bearings with liquid metal lubricants.Alharbi et al. 38 carried out a computational examination of the influence of different geometric factors on an extending cylinder.Hamid and Khan 39 investigated the upshot of magnetic flux on NN Williamson fluid flow.The flow was induced by an elongating cylinder in the existence of nanocomposites.Shamshuddin et al. 40 studied the upshot of chemical reactions Couette-Poiseuille nanoliquid flow through a gyrating disc.Kumam et al. 41 explored the MHD unsteady radiative flow.Khan and Alzahrani 42 focused on optimizing entropy and understanding heat transport in the flow of a magneto-nanomaterial.This investigation took into account the influence of MHD within the fluid.Adnan and Ashraf 43 and Li et al. 44 evaluated the nanoliquid flow across a permeable surface.
The originality of the proposed model is to examine the heat and mass transfer through the TGF flow over an inclined elongating sheet.The impact of magnetic field, activation energy and thermal radiation is considered on the TGF flow.Fluid that demonstrate NN properties such as shear thickening, shear thinning, and normal stresses despite the fact that the boundary is inflexible is known as TGF.In the proposed model, the TGF model is conveyed in form of nonlinear coupled PDEs.Before employing the numerical package bvp4c, the system of coupled equations are reduced into non-dimensional form.The significances of flow factors on velocity field, energy profile and Nusselt number are presented through Tables and Figures.For validity of the results, the numerical comparison with the published existing study is performed through Table .In the upcoming section, the problem is designed in form of PDEs and numerically solved.
Formulation of the problem
We have considered the mass and energy transfer through the steady and incompressible flow TGF over an inclined elongating sheet.The two-dimensional TGF flow is inspected under the impacts of chemical reaction, magnetic field, activation energy and thermal radiation.The surface of the sheet is assumed to be Darcy permeable.The x-axis and y-axis is the horizontal and normal axis to an inclined stretching sheet as shown in Fig. 1.Here, g, T w and C w is the gravitational acceleration, surface temperature and concentration respectively.By keeping in view, the above suppositions, the TGF flow equations are expressed as 45,46 : Boundary conditions (BCs) are 45 : (1) ( Here, ( α 1 ,β 3 ,α 2 ) are the moduli of material.U w and δ is the sheet stretching velocity and electrical conductivity, qr = − 4σ * 3k * ∂T 4 ∂y ,K 0 and Q 0 is the thermal radiation, surface penetrability and heat source.B(x) = B 0 e x L and Ea is the magnetic field and activation energy, k r and C p is the chemical reaction and specific heat competence, ,ν and µ is the inertia constant, kinematic and dynamic viscosity, α m and D m is the mass and thermal diffusivity.
The Nusselt number, drag force, Sherwood number are: where Table 1.The list of dimensionless parameters.
Richardson number
Buoyancy ratio factor Third-grade fluid factor Permeability factor The dimensionless form of Eq. ( 11) is:
Results and discussion
We have calculated the mutual effect of magnetic force and chemical reaction on the energy and mass conduction through the TGF across a stretching sheet.
Figures 2, 3, 4, 5 and 6 revealed the effect of Richardson number Ri , TGF factor β , magnetic factor M , per- meability factor K * and local inertial constant Fr versus f ′ (η) .Figures 2 and 3 reports that the velocity curves develop for the rising values of Richardson number Ri and third-grade fluid factor.Richardson number is the ration between Grashof and Reynold number.The Reynold number has an transposed relation with Ri, therefore the fluid velocity f ′ (η) improves with the variation of Ri.Similarly, the action of third-grade fluid factor β also enhances the velocity as presented in Fig. 3. Physically, the kinetic viscosity drops, while the stretching velocity of fluid develops with the effect of β , which results in such scenario.The influence of magnetic factor M , perme- ability factor K * and local inertial constant Fr, all diminish the fluid velocity as publicized in Figs. 4, 5 and 6.Physically, the resistive force opposes the fluid velocity f ′ (η) , which is produced due to magnetic effect (Fig. 4).On the other hand, the rising permeability of the sheet resists to the flow field, which causes in the reducing of velocity field (Fig. 5).The consequences of inertial forces also decline the velocity curve f ′ (η) as exposed in Fig. 6.
Figures 7, 8 and 9 highlight the significances of Prandtl number, Rd and Q e on the energy θ (η) field.Figure 7 exposes that the temperature curve drops with the effect of Prandtl number.Physically, the thermal diffusivity of higher Prandtl fluid is less, that's why, the effect of Pr drops the energy field (Fig. 7).The radiation effect transfers thermal energy form heat source to the system, which results in the elevation of temperature field θ(η) (Fig. 8).Similarly, the heat source working as heating agent for the flow system, there effect rises the fluid temperature θ(η), as displayed in Fig. 9. Figures 10 and 11 Table 2. Numerical evaluation of the present study with the existing works, while taking M = 0, Ri = 0, L = 0, Sc = 0, Fr = 0, K * = 0.
Pr
Magyari and Keller 50 Abbas et al. 45 chemical reaction effect augments the concentration field.Correspondingly, the significance of Sc controls the mass transfer, because the kinetic viscosity improves, which lessens the mass φ(η) outline as discovered in Fig. 12.
The amount of the chemical reaction has a direct impact on the intensity of mass transfer, because it makes fluid atoms move more quickly, which causes the mass gradient φ(η) to rise as publicized in Fig. 11.Table 3 disclosed the numerical outputs for skin friction f ′′ (0) , Sherwood number −φ ′ (0) and Nusselt number −θ ′ (0) .It has been noticed that the Nusselt number and Skin friction rises for the mounting values of Schmidth number.• The radiation effect transfers thermal energy form heat source to the system, which results in the elevation of energy field θ(η).
The heat source working as heating agent for the flow system, there effect rises the fluid temperature θ(η).• The consequence of chemical reaction boosts the concentration field, while declines with the Schmidth number.
8nd Sulochana4developed a new mathematical model to investigate energy and heat transmission in non-Newtonian fluids on a stretched surface.Results showed that the Jeffrey nanofluid outperformed Maxwell and Oldroyd-B nanofluids in terms of heat transfer.Bsthapu et al.5conducted an examination of velocity slip on a extending sheet with convectively non-uniform characteristics.Alqahtani et al.6conducted a 3D simulation on MHD behavior of hybrid fluid flow across double stretching surfaces.Chu et al.7investigated a 2D continuous laminar movement of a TGF past a flow over a shrinking surface containing gyrotactic microorganisms.The flow was electrically conductive due to an applied electric field and the Buongiorno nanoliquid model was used for mathematical modeling.Te study also incorporated chemical reactions with activation energy effects.Kumar et al.8and Li et al.
11nvestigated the energy transmission rate in a hydromagnetic Williamson nanoliquid flow thru a absorbent strained sheet.Khan et al.10conducted a discussion on the hybrid nanoliquid flow consisting of Cu and Al 2 O 3 nanoparticles in water. T flow occurred from a centrifugally porous surface that could either shrink or stretch.Elattar et al.11scrutinized the steady flow of hybrid nanoliquid over an impermeable stretchable sheet.
Table 2
displays the numerical estimation of the present study with the existing works.It has been observed that the present results are precise and consistent. | 3,052 | 2023-11-10T00:00:00.000 | [
"Engineering",
"Physics"
] |
Ln(iii)-complexes of a Dota Analogue with an Ethylenediamine Pendant Arm as Ph-responsive Paracest Contrast Agents †
A novel macrocyclic DO3A derivative containing a linear diamine pendant arm, H 3 do3aNN, was prepared and its protonation and complexation properties were studied by means of potentiometry. It determined ligand consecutive protonation constants log Kplexes could be protonated on the pendant amino group(s) with log K(HLM) ≈ 5.6 and log K(H 2 LM) ≈ 4.8. Solution structures of both complexes were studied by NMR spectroscopy. The study revealed that the complex species exist exclusively in the form of twisted-square-antiprismatic (TSA) isomers. The complexes show significant pH dependence of the Chemical Exchange Saturation Transfer (CEST) between their amino groups and the bulk water molecules in the pH range of 5–8. Thus, the pH dependence of the magnetization transfer ratio of CEST signals can be used for pH determination using magnetic resonance imaging techniques in a pH range relevant for in vivo conditions.
Introduction
Magnetic resonance imaging (MRI), due to its non-invasive character and spatial resolution (down to a mm 3 at clinical magnetic fields), is currently one of the most important diagnostic methods used in clinical medicine. 1Relevant diagnostic information from MRI images can be obtained even with the natural contrast between various tissues.However, for further improvement of image contrast and resolution, exogenous contrast agents (CAs) based on complexes of highly paramagnetic metal ions or superparamagnetic nanocrystalline materials altering the relaxation times of bulk water are widely used. 1,2n addition to common MRI T 1 -and T 2 -contrast agents, which shorten the longitudinal (T 1 ) and transversal (T 2 ) relaxation times, 2 a new class of CAs based on a Chemical Exchange Saturation Transfer (CEST) mechanism was introduced in the past decade. 3,4he principle of the CEST effect is based on saturation of the proton signal of the contrast agent molecule by a selective radiofrequency pulse.This saturation is transferred to the surrounding water molecules via chemical exchange of the labile protons between the contrast agent and bulk water resulting in a decrease in the water signal intensity and, therefore, darkening of the corresponding area in the MR image.][6] These agents contain a paramagnetic metal ion chelated by a multidentate ligand.Most often, Ln(III) complexes of ligands derived from DOTA (thus ensuring high stability and kinetic inertness of the complexes) have been used. 6,7As alternatives, complexes of transition metal ions having suitable magnetic properties, such as Ni(II), Fe(II) or Co(II), with ligands based on cyclam, cyclen, 1,4,7-triazacyclononane or 1,4,10-trioxa-7,13diazacyclopenta-decane, etc. have also been reported (Fig. 1). 8ne of the major advantages of CEST agents is the possibility to modulate water signal intensity by a selective presaturation pulse and, therefore, image contrast produced by these CAs can be switched "on" or "off" at will by selecting the appropriate irradiation frequency.This fact makes it possible to detect several agents in the same sample. 9Another advan-tage lies in the sensitivity of the proton exchange rate (k ex ) to a number of external factors and, thus, the CEST complexes are suitable for measuring various physiological parameters, such as temperature, pH, metabolite or metal ion concentration, etc. 4c,5,6 Recently, a lot of effort has been invested into developing MRI CAs capable of reporting on in vivo changes of pH in a tissue as they could serve as valuable biomarkers of disease progression or indicators for the choice of treatment. 10Several studies have demonstrated the unique ability of PARACEST CAs to act as pH sensors and nowadays ratiometric methods are being explored to make the assessments independent of the local concentration of the CAs. 11For example, a Yb(III) analogue of the clinically approved MRI CA [Gd(do3a-hp)] (Pro-Hance®, ligand shown in Fig. 1) shows two independent wellresolved PARACEST peaks at 71 and 99 ppm originating from the protons of the coordinated alcohol group of individual complex isomers.11a The ratio of these two PARACEST signals is pH-dependent, which can be used to develop a concentration-independent method of pH measurement, and the Yb(III) complex has been already tested for measuring extracellular pH in murine melanoma.11b Similarly, the PARACEST peaks of a Co(II) complex of tetam (Fig. 1) have distinct pH dependencies and the two most shifted signals (at 95 and 112 ppm) were shown to be suitable for pH mapping.8e It was shown that Ln(III) complexes of cyclen derivatives with pendant arms containing an amido-amine pendant arm, 11g,h or a (semi)coordinating amino group 12 produced a pH-sensitive PARACEST effect in the pH region relevant for living systems.Based on these findings, we decided to synthesize a new macrocyclic ligand H 3 do3aNN (Fig. 1) containing a semi-labile coordinating pendant arm with two ( primary and secondary) amino groups (as two potentially independent proton exchanging pools), and to investigate the PARACEST properties of its Ln(III) complexes.
Synthesis
The synthesis of H 3 do3aNN is shown in Scheme 1.The alkylation agent 3 was prepared by CBr 4 /PPh 3 bromination of ethylcarbamate-protected N-(2-aminoethyl)ethanolamine 2. The alkylation of tBu 3 do3a•HBr was performed using a slight excess of the alkylation agent as, under the reaction conditions, the alkylation agent undergoes elimination of HBr.The tBu-ester groups were removed by reflux in a CF 3 CO 2 H : CHCl 3 1 : 1 mixture and the ethyl-carbamate protection groups were removed by hydrolysis in 10% aq.NaOH.Surprisingly, in this reaction step, preferential formation of the urea-derivative 6 was observed, with only trace amounts of the required compound H 3 do3aNN.However, the intermediate 6 can be isolated by crystallization in a zwitterionic form with 42% overall yield (based on tBu 3 do3a).The identity of the intermediate 6 was confirmed by a single-crystal X-ray diffraction study (see ESI and Fig. S1 †).Hydrolysis of 6 with aq.HCl produced H 3 do3aNN with a high yield.The best way to obtain the product in the solid form was trituration of the evaporated reaction mixture in dry THF or EtOH overnight.However, the resulting off-white solid is very hygroscopic and has to be stored in a desiccator over P 2 O 5 .All other attempts (different organic solvents used for trituration or crystallization) led to the isolation of the title ligand as oil.To prevent possible esterification by EtOH, the use of THF was preferred for trituration.
Thermodynamic behaviour of H 3 do3aNN and its Ln(III) complexes
Potentiometric titrations of the ligand performed in the pH range of 1.6-12.2revealed six consecutive protonation processes in this region (Tables 1 and S1 †).Based on comparison with the literature data, 12,13 the first protonation step (log K P (HL) = 12.6) can be attributed to the protonation of one of (or to sharing of a proton over several of ) the macrocycle amino groups.The next three protonation steps proceed in part simultaneously due to the similarity of the constants (log K P (H 2 L) = 10.3, log K P (H 3 L) = 9.7 and log K P (H 4 L) = 8.3) and occur on one other macrocycle amino group and two amino groups of the pendant N-(2-aminoethyl)-2-aminoethyl moiety (the value reported for analogous protonation of a 2-aminoethyl pendant moiety for H 3 do3a-ae is log K P = 8.9). 12urther protonations of H 3 do3aNN proceed on the carboxylate groups and lie in the usual range.
Stability constants of [Ln(do3aNN)] (23.16 and 22.76 for the Eu(III) and Yb(III) complexes, respectively, Tables 1 and S2 †) were obtained by the out-of-cell titration technique.The values are slightly lower compared with those reported for H 4 dota itself, but lie in the expected range, as can be seen from a comparison with the values reported for the related ligand H 3 do3aaealthough, in that case, stability constants were determined for other lanthanides: La(III), log K LaL = 20.02, and Gd(III), log K GdL = 22.23. 12However, according to the distribution diagram of the Eu(III)-H 3 do3aNN system shown in Fig. 2, the metal complexation is not quantitative until pH ≈ 6 due to a combination of low affinity of the amino groups for Ln(III) ions and high donor-site basicity.
Equilibrated protonation steps of [Ln(do3aNN)] proceed with log K P (HLM) = 6.03/6.22 and those of the [Ln(Hdo3aNN)] + species occur with log K P (H 2 LM) = 5.09/5.07for the Eu(III)/ Yb(III) complexes, respectively (Tables 1 and S2 †).They are close to the values of analogous protonation constants reported for [Ln(do3a-ae)] complexes (log K P (HLM) = 6.06 and 5.83 for La(III)/Gd(III) systems). 12The observed values are slightly higher than the protonation constants found for the pre-formed complexes under "non-equilibrium" conditions: in such experiments, complexes were pre-formed at pH ≈ 7 and were titrated employing the standard ("fast") acid-base titration method.The corresponding observed protonation constants were log K P (HLM) = 5.57 and 5.67, and log K P (H 2 LM) = 4.84 and 4.85 for Eu(III)/Yb(III), respectively (Table S3 †).From these slight differences between the protonation constants, one can conclude that, during out-of-cell titrations, protons in the [Ln(Hdo3aNN)] + and [Ln(H 2 do3aNN)] 2+ species are probably located not only on the amine pendant arm but, at least partly, also on the macrocycle amino groups.The suggested structures of individual species with tentative protonation sites are shown in Schemes S1 and S2.† Unfortunately, the [Ln(do3aNN)] complexes are not fully kinetically inert, and slowly dissociate at pH < 6.This was confirmed by a xylenol orange test: after the addition of a solution of the pre-formed complex (at pH = 7.5) to a buffered solution of xylenol orange at pH = 5.5, the colour gradually changed on standing from orange to orange-violet as a result of free metal appearance in the solution.A quantitative measurement revealed the dissociation of about 9-10% of the complex after standing for one week at room temperature (compare Fig. S16 and S17 †).From a thermodynamic point of view, the extent of complex dissociation should be less than 20% at pH = 5 (see the distribution diagram shown in Fig. 2).It was confirmed by independent experiments that neither the free metal aqua ion nor the free ligand interferes with the 1 H NMR or CEST measurements.Therefore, conclusions drawn from PARACEST experiments (see below) are fully valid even at pH 5-6.
Solution structure of the [Ln(H n do3aNN)] ‡ complexes
It is well-known that in Ln(III) complexes of DOTA-like ligands the central Ln(III) is coordinated between two planesone formed by the macrocycle amino groups (N 4 -plane), and the other by the oxygen atoms of the carboxylate pendant moieties (O 4 -plane), and these species exhibit two types of isomerisms in solution. 15The first type is connected with the conformation of the macrocycle ethylene bridges, i.e. with the sign of the torsion angle around the C-C bond (δ/λ), and the second one is related to the direction of rotation of the pendant arms (Δ/Λ).A combination of these isomerisms leads to the formation of two diastereomeric pairs of enantiomers (i.e.four isomers): Δλλλλ/Λδδδδ (SA, square-antiprismatic) and Δδδδδ/ Λλλλλ (TSA, twisted-square-antiprismatic). 2 The isomer ratio in solution can be determined from the 1 H NMR spectra using the "axial" protons of the macrocyclic chelate ring, which are the ones closest to the Ln(III) ion and to the principal magnetic axis, and usually can be easily found in the 1 H NMR spectra. 16herefore, the solution structures of the [Eu(do3aNN)] and [Yb(do3aNN)] complexes were investigated by variable-temperature 1 H NMR spectroscopy (Fig. S2 and S3 †).The pD of the samples in D 2 O was adjusted to the alkaline region to ensure full deprotonation and coordination of the pendant amino group.In both complexes, only one set of signals was detected pointing to the presence of only one diastereomer.The signals of "axial" protons appear in the range typical for the TSA isomers (Eu(III): 9-13 ppm, Yb(III): 45-62 ppm; with respect to the signal of bulk water referenced to 0 ppm).No 1 H NMR signals of "axial" CH 2 protons attributable to an SA isomer were observed (such signals typically lie in the chemical shift regions of 25-40 ppm and 100-150 ppm for Eu(III) and Yb(III) complexes, respectively). 16,17Thus, based on the 1 H NMR data, exclusive formation of the TSA isomer is expected.With increasing temperature, the 1 H NMR signals become broader, pointing to the occurrence of a conformational change of the complex molecules (Fig. S2 and S3 †).
To identify the signals of exchangeable (N-H) protons in the 1 H NMR spectra, samples of the [Eu(H n do3aNN)] and [Yb(H n do3aNN)] complexes were investigated in H 2 O at 25 and 5 °C (Fig. 3, 4, S4 and S5 †).In the 1 H NMR spectra of the [Eu(H n do3aNN)] complex recorded in H 2 O at pH = 6.75 (Fig. 3A), three main signals of exchangeable protons (one narrow signal at 22.2 and two broad signals at 43.3 and 46.5 ppm; with respect to the bulk water signal) can be observed, which disappear upon bulk water presaturation (Fig. 3B).Of these, only the signals at 43.3 and 46.5 ppm are influenced by water presaturation at pH = 11.7,whilst the signal at 22.2 ppm remains unaffected (Fig. 4A and B).When recording 1 H NMR spectra in a D 2 O solution ( pD = 10.7),none of the three signals are observable (Fig. 4C and S2 †).Based on this behaviour, the narrow signal at 22.2 ppm is attributed to the coordinated secondary amino group.The assignment is supported by the similarity of the chemical shift of this signal to that of one of the -NH 2 protons of the [Eu(do3a-ae)] complex (19.5 ppm). 12The two broad signals are attributed to the coordinated primary amino group and this assignment is supported by their coalescence at higher temperatures (Fig. S6 †).
The primary amino group is expected to be coordinated in a position capping the O 3 N-plane formed by the pendant donor atoms and, thus, close to the magnetic axis of the complex.Therefore, the corresponding protons are markedly influenced by the paramagnetic ion and their chemical shifts lie in the range typical for a coordinated water molecule. 5,18owever, the presence of these signals in 1 H NMR spectra also in an alkaline solution (and the presence of a corresponding CEST effect in Z-spectra at alkaline pH, see below) clearly excludes the possibility that these signals belong to a coordinated water molecule, the signal of which disappears in the alkaline region. 19In slightly acidic solutions where protonation of the uncoordinated primary amino group (and thus, its decoordination) is expected, even a proton of the secondary amino group is exchanged with bulk water on an NMR time scale.In contrast, in an alkaline solution, only the exchange of the terminal primary amino group protons is observable.
Besides the three signals of exchangeable protons of the [Eu(do3aNN)] complex discussed above, a small signal at ≈35 ppm was found in the 1 H NMR spectra (Fig. 3 and 4), better seen at a lower temperature (37.5 ppm, Fig. S5A †).
At this chemical shift, a minor exchangeable pool of protons was found also in the CEST experiments (see discussion of Z-spectra below), accompanied by two other peaks in Z-spectra lying at 10 and 15 ppm, which are visible especially at low saturation power (Fig. 5 and S7A †).Due to the absence of any 1 H NMR signals of "axial" CH 2 protons attributable to the SA isomer, the presence of this isomer in the solution can be excluded.Therefore, these minor exchangeable proton pools were attributed to another TSA isomer that originates from the chirality of the nitrogen atom of the secondary amino group caused by coordination of this group.Judging by the similarity of the chemical shift of the exchangeable proton pool at 35 ppm to one of the signals attributed to the coordinated amino group in [Eu(do3a-ae)] (34 ppm), 12 one can suggest that this signal belongs to the secondary amino group of the TSA isomer with reverse orientation of H vs. the CH 2 CH 2 NH 2 substituents (i.e. with opposite chirality of the coordinated secondary amine).The results of simple molecular modelling shown in Fig. S9 † suggest that apical coordination of the primary amino group is possible only in the Δδδδδ-S/ Λλλλλ-R enantiomeric pair, and thus, this isomer is suggested to be the major one, leaving the Δδδδδ-R/Λλλλλ-S species as the minor isomer.In the case of this low-abundance isomer, the position of the primary amino group is not suitable for coordination close to the magnetic axis and, therefore, the signals of the primary amino group in Z-spectra are significantly closer (at 10 and 15 ppm, Fig. S7 †) to the free water signal.As both protons of the primary amino group have individual signals, their resolution triggered by coordination or by fixing in an intramolecular hydrogen bond system is expected.
A similar behaviour was observed also for the [Yb(do3aNN)] complex.In an alkaline solution, there are two signals disappearing in D 2 O, see Fig. S4 †a narrow signal of the proton of the secondary amino group at 35 ppm (this assignment is supported by the similarity of the chemical shift of the analogous 1 H NMR signal of [Yb(do3a-ae)], 42 ppm) 12 and a very broad signal of NH 2 protons at 82-104 ppm (the signals cannot be distinguished at 25 °C, but split at 5 °C, Fig. S5B †).As in the previous case, only the signal attributable to the primary amino group is affected by water presaturation in an alkaline solution, Fig. S4B †.Minor signals of another TSA isomer are also observable in 1 H NMR spectra, and minor exchangeable pools of protons were found in CEST experiments at low saturation power (three other peaks in Z-spectra at 17, 26 and 57 ppm, Fig. S7B †).
CEST experiments
Saturation transfer experiments in solutions ranging from slightly acidic to slightly alkaline ( pH 5.7-8.3)revealed two signals in the 1 H Z-spectra of each complex.These signals are centred at +22.2 and +44.4 ppm for the Eu(III) complex (Fig. 5A) and +35 and +95 ppm for the Yb(III) complex (Fig. 6A).
The broad signals at the higher chemical shifts (44.4 and 95 ppm for the Eu(III) and Yb(III) complex, respectively) correspond to the averaged signals of the primary amino group.This broad signal splits into two distinct signals of magnetically non-equivalent protons at a lower intensity of presaturation pulses and at low temperatures (Fig. S7A †), similar to the behaviour of this group found in 1 H NMR spectra (Fig. 3, 4, S5 and S6 †).The Z-spectra signals with lower chemical shifts (22.2 and 35 ppm for the Eu(III) and Yb(III) complex, respectively) were attributed to the signal of the proton of the secondary amino group.Thus, the Z-spectra of both complexes clearly confirm the presence of proton-exchanging pools that belong to the protons of the primary and secondary amino groups as they were identified in the 1 H NMR spectra (see above).
Besides the signals attributable to the major isomer, a set of minor signals (at 10, 15 and 35 ppm, distinguishable especially when low saturation power was applied) appears in the Z-spectra of the [Eu(H n do3aNN)] complex (Fig. S7A †).At slightly acidic to neutral pH, all three Z-signals are apparent.In contrast, in the alkaline region only the signals at 10 and 15 ppm remain in the Z-spectra, implying their assignment to the primary amino group, with the last one (at 35 ppm) belonging to the secondary amino group.These signals belong to the less abundant isomer with opposite chirality of the coordinated secondary amino group (see discussion of 1 H NMR spectra above).A similar set of minor signals (at 17, 26 and 57 ppm) appears also in the Z-spectra of the [Yb(H n do3aNN)] complex (Fig. S7B †).
The shape of the Z-spectra of the [Ln(H n do3aNN)] complexes (Ln = Eu, Yb) has significant pH dependence in slightly acidic to neutral regions (Fig. 5A and 6A).To see the differences in Z-spectra more clearly, the magnetization transfer ratio (MTR) spectra were constructed (Fig. 5B and 6B).At pH < 5.5, the CEST effect of the coordinated primary amino group gradually disappears as a consequence of protonation and decoordination of the group.Simultaneously, a new, very broad CEST signal appears centred at ≈25-30 ppm for both complexes.Although partial dissociation of the complexes occurs in this pH region (see above for discussion of thermodynamic properties), the free metal aqua ions as well as the free ligand are CEST-silent (as proved by an independent experiment) and, therefore, these new signals can be attributed to the chemical exchange of the protonated primary amino group of the complex.Such a groupwhilst uncoordinatedis still paramagnetically shifted, but not as much as when the group is coordinated.On the other hand, an effective CEST of the secondary amino group was detected for the Eu(III) and Yb(III) complexes in the pH region of ≈5.5-8.5.At higher pH values, the chemical exchange of the NH proton becomes too slow to transfer saturation to bulk water and, thus, the CEST effect of the secondary amino group is not observable (Fig. 5A, 6A and S8 †).It is consistent with the 1 H NMR spectra of the studied complexes (Fig. 4 and S4 †), where the signals of secondary amino groups are observable even in alkaline solutions ( pH > 11) (i.e.their chemical exchange with bulk water is slow) and remain unaffected after water presaturation.A graphical representation of the suggested processes giving rise to the peaks in Z-spectra is shown in Scheme 2.
It is evident that the two pools of exchanging amine protons show different dependences of their CEST effects in the pH range relevant to the physiological conditions.The applicability of the [Eu(H n do3aNN)] and [Yb(H n do3aNN)] complexes as pH-sensitive MRI probes was tested for solutions with different pH values (HEPES/MES buffers) and concentrations of the complex (Fig. 7).
To define the pH-dependent but concentration-independent function, the ratio of MTR intensities was calculated.However, this ratio can be defined reasonably only for the Yb(III) complex (35/95 ppm), as in the case of the Eu(III) complex there is a significant overlap of the low-shift signal of the coordinated secondary amino group (22.2 ppm) with the new signal appearing in the acidic region (attributable to the protonated and uncoordinated primary amine, Fig. 5B).
To prove the suggested concept of ratiometric pH determination, samples of [Yb(H n do3aNN)] with different complex concentrations and different pH values were measured by both NMR and MRI techniques.The concentration range used covers about one order of magnitude (7.7-8.7 mM).All calibration curves were very similar (see Fig. 8 and S10 †), although standard deviations of the data points from MRI experiments acquired for the low concentration were relatively high due to a high background noise and, thus, a low signal-to-noise ratio was obtained under these conditions.The final curves are compiled in Fig. 8.Although the method has high ESDs with respect to the determination of an exact pH value, the shape of calibration curves enables distinguishing between samples with pH > 7 and those with pH < 6.Such a finding is relevant for the design of contrast agents useful e.g. for distinguishing between normal and hypoxic tissues.
Materials and methods
All reagents and solvents were commercially available, had synthetic purity and were used as received.Water used for potentiometric titrations was deionized by using a Milli-Q (Millipore).
NMR characterization data (1D: 1 H, 13 C{ 1 H}; 2D: HSQC, HMBC, 1 H-1 H COSY) were recorded on a VNMRS300 or Bruker Avance III 600, using 5-mm sample tubes.Chemical shifts are reported as δ values and are given in ppm.Coupling constants J are reported in Hz.Unless stated otherwise, NMR experiments were performed at 25 °C.For samples dissolved in D 2 O, the pD value was calculated by correcting the pH-electrode reading by +0.4,i.e. pD = pH reading +0.4.For the 1 H and 13 NMR measurements of diamagnetic compounds in D 2 O, tBuOH was used as an internal standard (δ H = 1.25, δ C = 30.29).For the measurements in CDCl 3 , TMS was used as an internal standard (δ H = 0.00, δ C = 0.00).In the case of paramagnetic complexes, chemical shifts were referenced to the water signal of the sample (δ H = 0.00) to keep the chemical shift values in 1 H NMR spectra consistent with the scale of Z-spectra.The abbreviations s (singlet), t (triplet), q (quartet), m (multiplet) and br (broad) are used in order to express the signal multiplicities.Lanthanide(III) concentrations in solutions were determined by measurement of the bulk magnetic susceptibility (BMS) shifts. 22The ESI-MS spectra were recorded on a Bruker ESQUIRE 3000 spectrometer equipped with an electrospray ion source and ion-trap detection.Measurements were carried out in both the positive and negative modes.UV-Vis solution spectra were recorded using a SPECORD® 50 PLUS (ANALYTIC JENA AG) spectrophotometer at 25 °C in the range of 300-1000 nm with data intervals of 0.2 nm and integration time of 0.04 s.Elemental analysis was performed at the Institute of Macromolecular Chemistry of the Czech Academy of Sciences (Prague, Czech Republic).
13 C{ 1 H} NMR (150.9MHz, CDCl Synthesis of 4. A solution of the alkylating reagent 3 (1.79g, 5.75 mmol, 1.35 eq.) in dry MeCN (10 ml) was added dropwise to a well-stirred suspension of K 2 CO 3 (2.94g, 21.3 mmol, 5 eq.) and tBu 3 do3a•HBr (2.54 g, 4.26 mmol) in dry MeCN (40 ml) at room temperature.The reaction mixture was stirred at 60 °C for 24 h, filtered, and the filtrate was evaporated in a rotary evaporator.The oily residue was dissolved in CHCl 3 (25 ml) and extracted with distilled water (4 × 10 ml).The organic layer was dried over anhydrous Na 2 SO 4 and concentrated in vacuo to yield a yellow oil (3.76 g) containing a crude compound 4 contaminated with an excess of the alkylating reagent 3. The excess alkylating reagent was not removed, and the crude product 4 was used in the next step without purification.
Synthesis of 5. A portion (3.70 g) of the crude compound 4 obtained above was dissolved in a mixture of CF 3 CO 2 H and CHCl 3 (1 : 1, 30 ml).The resulting solution was refluxed for 18 h and evaporated in a rotary evaporator.The oily residue was dissolved in a small amount of distilled water and evaporated (this procedure was then repeated three more times) to produce a yellow oil (3.10 g) containing compound 5, which was used in the next step without purification.
Synthesis of 6.The crude product 5 (3.00 g) was dissolved in 10% aq.NaOH (50 ml) and stirred for 24 h at 90 °C.Then, the solution was loaded onto a strong anion exchange column (Dowex 1, OH − -form, 1.5 × 20 cm).Impurities were removed by elution with water and the product 6 was eluted with 5% aq.AcOH.Fractions containing the product (as checked by 1 H NMR) were combined, filtered and evaporated to give compound 6 (2.21 g) as a brownish oil.The crude product was dissolved in a water : MeOH mixture (1 : 5, v : v, ≈5 ml) and overlaid with EtOH (≈5 ml) and the mixture was left to stand for 2 d.After this period, the solid product was filtered off and dried under vacuum to yield 6•2.5H 2 O (900 mg, 42% based on tBu 3 do3a) as a white powder.16.12 (16.69).
Synthesis of H 3 do3aNN.Compound 6•2.5H 2 O (415 mg, 0.824 mmol) was dissolved in aq.HCl (10 ml, 1 : 1) and the resulting solution was stirred for 7 d at 95 °C and evaporated in a rotary evaporator.The oily residue was dissolved in a small amount of distilled water and evaporated to dryness, leaving a glassy solid, which was triturated in dry THF overnight.Next, the product was collected by filtration, and stored in a desiccator (P 2 O 5 ) to give H 3 do3aNN in the form of a hydrochloride hydrate (500 mg, 95%) as a white powder.
Mother liquors after crystallization of the intermediate 6 can be also used for the preparation of the title ligand.After acid hydrolysis of impure 6, the crude H 3 do3aNN was converted to its ammonium salt by chromatography on a strong cation exchanger (Dowex 50, 50-100 mesh, H + -form).Acids were eluted by water and the crude product was collected by 5% aq.ammonia.After evaporation of volatiles, the oily residue was dissolved in water and poured onto a column of a weak cation exchanger (Amberlite CG50, 200-400 mesh, H +form). Impurities were eluted with water and the H 3 do3aNN compound was collected by 3% aq.HCl.Fractions containing the product were combined and evaporated to dryness leaving a glassy solid, which was triturated as described above.Synthesis of [Ln(H n do3aNN)] complexes.The Ln(III) complexes of H 3 do3aNN for NMR, NMR CEST and MRI CEST experiments were prepared by mixing the lanthanide(III) chloride hydrate (Eu 3+ , Yb 3+ ) with 1.1 equiv. of the ligand in a small amount of distilled water, adjusting the pH to ≈7 with 1 M aq.
LiOH, and stirring overnight at 60 °C.Then, the pH was readjusted to ≈7 with 1 M aq.LiOH and the solution was again stirred overnight at 60 °C.
All the prepared samples were checked using a xylenol orange test (acetate buffer, pH = 5.7) to exclude the presence of free Ln(III) ions.The exact concentration of the Ln(III) complexes in the solution was determined using Evans's method. 22
PARACEST experiments
All Z-spectra were recorded using a VNMRS300 spectrometer operating at 299.9 MHz (B 0 = 7.05 T); 5 mm sample tubes and a coaxial capillary with D 2 O and tBuOH as an external standard were used.Solutions of the complexes for PARACEST NMR experiments were prepared in pure water with the pH adjusted using aq.HCl/LiOH solutions and had concentrations in the range of 14-87 mM.Standard pulse sequences for presaturation experiments were used.Saturation offsets were set using the array function (increment 200-250 Hz).Data from the PARACEST experiments were plotted as the dependence of normalized water signal intensity (M z /M 0 %) on saturation offset.Here, M 0 represents the magnetization (i.e.intensity) of the water signal without RF saturation and M z corresponds to the water signal when a presaturation pulse is applied.Other experimental parameters are specified in the figure captions.
The magnetization transfer ratio (MTR) was calculated using MTR = M Δω /M 0 -M -Δω /M 0 in which M ±Δω is the magnetization (i.e.intensity) of the water signal with the use of a presaturation frequency ±Δω away from the bulk water signal.
MRI PARACEST images were measured with a phantom consisting of one vial containing an aqueous solution of buffers [a mixture of 0.025 M 2-(N-morpholino)ethanesulfonic acid (MES) and 0.025 M 4-(2-hydroxyethyl)-1-piperazineethanesulfonic acid (HEPES)] as a standard and nine vials containing solutions of the Eu 3+ or Yb 3+ -complexes dissolved in buffers (0.025 M MES and 0.025 M HEPES) with different pH values and concentrations.The innocence of the chosen buffers was confirmed by silence of pure buffer solutions in the CEST experiment (i.e., no signal in the Z-spectra of the pure buffers was found).All MRI PARACEST images were acquired on a 4.7 T scanner (Bruker BioSpec, Germany) using a modified spinecho sequence (Rapid Acquisition with Refocused Echoes -RARE).Experimental conditions: repetition time (TR) = 5000 ms, echo time (TE) = 8.9 ms, resolution 0.35 × 0.35 × 2 mm 3 , turbo factor = 4.Other experimental parameters are specified in the figure captions.For all MR experiments, a resonator coil with an inner diameter of 70 mm was used.MTR maps were normalized to the signal acquired at a frequency offset -Δω and reconstructed from a manually outlined region of interest on a pixelwise basis using a custom script written in Matlab (Mathworks, Natick, MA, USA).MTR maps were visualized on a false-colour scale in percentage units.
Potentiometry
Potentiometric titrations 23 were carried out in a thermostatted vessel at 25.0 ± 0.1 °C at a constant ionic strength I(NMe 4 Cl) = 0.1 M. The measurements were taken with an HCl excess added to the initial mixture and the mixtures were titrated with a stock NMe 4 OH solution.An inert atmosphere was maintained by constant passage of argon saturated with water vapour.The ligand concentration in the titration vessel was ≈0.004 M.
Ligand protonation constants were determined by standard potentiometric titrations performed in the pH range of 1.6-12.2(80 points per titration, titrations were carried out four times).
In the cases of the Ln(III)-H 3 do3aNN systems, the equilibria were established slowly and, therefore, the out-of-cell technique was used in the pH range of 1.6-7.2(two titrations per system, 25 points per titration).The metal : ligand ratio was 1 : 1 in all cases.The waiting time was 7 weeks.Then, the potential at each titration point (tube) was determined with a freshly calibrated electrode.
Pre-formed complexes for the determination of their protonation constants were prepared in the following way: in an ampoule, equimolar molar amounts of the ligand and metal stock solutions were mixed and a calculated amount (based on the out-of-cell titration data) of a stock solution of NMe 4 OH was gradually added to reach pH ≈ 7, which corresponds to full complexation according to the out-of-cell titration.Ampoules were flame-sealed and left at 55 °C for 3 d.Aliquots were taken from the final solution, a defined amount of an HCl stock solution was added into these samples and the mixtures were immediately titrated by an NMe 4 OH stock solution in a way analogous to the procedure described above for the determination of ligand protonation constants in the pH range of 2.3-12.1.The initial volumes were ≈5 cm 3 for the conventional titrations and ≈1 cm 3 for the out-of-cell ones, respectively.
The constants with their standard deviations were calculated by using the OPIUM program package. 24Overall protonation constants are defined as β h = [H h L]/{[H] h •[L]}, and they can be transferred to the consecutive protonation constants log K P by log K P (H h L) = logβ h − logβ (h−1) ; it should be noted that log K P = pK A of the corresponding protonated species H h L. The overall stability constants β hlm are concentration constants defined as The water ion product used in the calculations was pK w = 13.81.Stability constants of metal hydroxido complexes were taken from the literature. 14In the text, pH means −log[H + ].The best fits of experimental data are shown in Fig. S13-S15 † and the results are compiled in Tables S1-S3.†
Conclusions
The present study revealed significant pH dependence of the Chemical Exchange Saturation Transfer (CEST) effect of selected Ln(III) complexes with the novel macrocyclic ligand H 3 do3aNN containing a linear diamine pendant arm.The pH dependence is substantial in the pH range relevant for biological systems ( pH ≈ 5. 5-8.5).Based on these findings, we have shown that the magnetization transfer ratio of CEST signals of the complexes can be used for pH determination by MRI, and it is independent of the concentration of the probes.
Unfortunately, the studied complexes are not fully kinetically inert in acidic solutions and slowly release the free metal ions, which excludes their direct use in medical applications.However, the study brings proof-of-principle of possibility to use a linear diaminic fragment for pH determination using MRI ratiometry.
Fig. 1
Fig. 1 Structural formulas of the ligands discussed in the text.
Fig. 3
Fig. 3 (A) 1 H NMR spectrum of the [Eu(H n do3aNN)] complex (0.09 M solution in H 2 O, B 0 = 7.05 T, 25 °C, pH = 6.75).(B) The same sample, the signal of water was saturated.Arrows show the positions of exchangeable (N-H) protons.The chemical shift of H 2 O in the sample solution was referenced to 0 ppm.
Fig. 6
Fig. 6 (A) Z-Spectra of an 87 mM aqueous solution of the [Yb(H n do3aNN)] complex (B 0 = 7.05 T, B 1 = 21.7 μT (920 Hz), RF presaturation pulse applied for 2 s, 25 °C).(B) Corresponding MTR spectra.Scheme 2 A suggested mechanism of origin of pH-dependent CEST effects on [Ln(H n do3aNN)] complexes.In hepta/octa-coordinated species, binding of a water molecule(s) to the central ion giving the coordination number to 8-9 is expected, but it is not shown for the sake of clarity. C | 8,402.8 | 2016-02-16T00:00:00.000 | [
"Chemistry"
] |
Cyclohexanecarboxylic acid degradation with simultaneous nitrate removal by Marinobacter sp. SJ18
Naphthenic acid (NA) is a toxic pollutant with potential threat to human health. However, NA transformations in marine environments are still unclear. In this study, the characteristics and pathways of cyclohexanecarboxylic acid (CHCA) biodegradation were explored in the presence of nitrate. The results showed that CHCA was completely degraded with pseudo-first-order kinetic reaction under aerobic and anaerobic conditions, accompanied by nitrate removal rates exceeding 70%, which was positively correlated with CHCA degradation (P < 0.05). In the proposed CHCA degradation pathways, cyclohexane is dehydrogenated to form cyclohexene, followed by ring-opening by dioxygenase to generate fatty acid under aerobic conditions or cleavage of cyclohexene through β-oxidation under anaerobic conditions. Whole genome analysis indicated that nitrate was removed via assimilation and dissimilation pathways under aerobic conditions and via denitrification pathway under anaerobic conditions. These results provide a basis for alleviating combined pollution of NA and nitrate in marine environments with frequent anthropogenic activities.
Introduction
Naphthenic acid (NA) is a ubiquitous functional group in the environment, mainly derived from crude oil, and can also be synthesized via the incomplete degradation of petroleum hydrocarbons by microorganisms or in situ degradation of petroleum (Brient et al. 2000;Whitby 2010). NAs and their metal salts are widely used in paint desiccants, corrosion inhibitors, emulsifiers, surfactants, lubricants, and wood preservatives, as well as to catalyze the production of alkyl and polyester resins (Clemente andFedorak 2005, Kannel andGan 2012). In recent years, NAs have been considered as toxic pollutants (Jie et al. 2015;Scarlett et al. 2013). NAs can penetrate the cell wall, destroy membrane lipid bilayers, or alter membrane properties due to their surfactant characteristics. It has been reported that NAs are acutely and chronically toxic to a variety of organisms, such as fish, amphibians, phytoplankton, and mammals (Frank et al. 2008;Kannel andGan 2012, Melvin andTrudeau 2012).
In general, NAs can migrate to terrestrial, aquatic, and marine environments via sewage discharge, crude oil leakage, precipitation runoff, and riverbank oil layer erosion (Kannel and Gan 2012;Scarlett et al. 2012). NAs have been Responsible Editor: Robert Duran Highlights • CHCA degradation in the presence of nitrate in marine environment was firstly studied • Strain SJ18 could rapidly degraded the combined pollutants of CHCA and nitrate • CHCA degradation pathways were proposed under aerobic and anaerobic conditions • Strain SJ18 contained functional genes for CHCA degradation and nitrate removal detected in aquatic systems, for example, the natural concentrations of NAs in the upper reaches of Athabasca River are generally < 1 mg L −1 (Schramm et al. 2000), while the NAs level can be reach as high as between 4 and 110 mg L −1 in the unconfined groundwater aquifers and tailings waters where oil sands are man-mined (Ahad et al. 2013;Hewitt et al. 2020;Kannel and Gan 2012). Furthermore, by employing improved method, NAs concentrations ranging from 2.29 to 132.91 mg kg −1 have been found in contaminated soil from oilfields in China (Wang et al. 2013(Wang et al. , 2015, and different types of NAs have been detected in seawater and sediments as a result of an oil tanker leakage in 2007 (Wan et al. 2014). In our previous study, 1-4 mg L −1 NAs were detected in the sediments of Dalian Bay, which was more than tenfold higher than the levels of polycyclic aromatic hydrocarbons (PAHs) (Zan et al. 2019).
To date, NAs attenuation in the environment includes physical adsorption, natural mineralization, plant degradation, and microbial degradation, of which microbial degradation is considered to be predominant (de Oliveira Livera et al. 2018, Huang 2011Lu et al. 2011;Quesnel et al. 2011). Numerous studies have explored the biodegradation of NAs by a variety of microorganisms, and the degradation pathways have long been ascertained (Blakley 1978;Del Rio et al. 2006a;Johnson et al. 2012;Presentato et al. 2018;Wang et al. 2015). For instance, Alcaligenes, Arthrobacter, and Corynebacterium have been found to degrade cyclohexanoic acid under aerobic conditions (Ougham and Trudgill 1982). The pathways proposed for the aerobic degradation of NAs mainly include β-oxidation, combined α-and β-oxidation, and aromatization pathways, with β-oxidation being the predominant biodegradation pathway (Quagraine et al. 2005;Whitby 2010). The α-and β-oxidation pathways have been detected during the aerobic transformation of cycloacetic acid by Alcaligenes sp. (Quagraine et al. 2005). Furthermore, analysis of hydronaphthoic acid degradation by sedimentary microorganisms revealed that the degradation process of decahydronaphthoic acid was completed by β-oxidation and Baeyer-Vil-liger oxidation (Del Rio et al. 2006b). The aromatization pathway was first described in Arthrobacter (PRL-W15), which could degrade cyclohexanoic acid by attacking alicyclic rings and form p-hydroxybenzoic acid through para-hydroxylation (Whitby 2010).
With regard to anaerobic biodegradation of NAs, the effects of different conditions (nitrate reduction, sulfate reduction, iron reduction, and methanogenesis) on anaerobic biodegradation of NAs have also been investigated (Clothier and Gieg 2016;Ghattas et al. 2017;Misiti et al. 2013). Clothier and Gieg (2016) found that some surrogate NAs (including cyclohexanecarboxylic acid (CHCA) and cyclohexaneacetic acid (CHAA)) were either fully or partially metabolized under nitrate-and sulfate-reducing conditions, while only CHAA was metabolized under methanogenic conditions and NAs were partially biodegraded in initial enrichments under ironreducing conditions. Some anaerobic microorganisms that could produce methane through β-oxidation were detected during the degradation of medium-and long-chain carboxylic acids; for example, microorganisms in the tailing sands environment could metabolize the cyclohexyl valerate side chain of 3-cyclohexyl propionic acid and 4-cyclohexyl butyric acid to methane (Holowenko et al. 2002). Gunawan et al. (2014) showed that a model surrogate NA could be readily metabolized under nitrate-reducing conditions in bioreactors.
However, it must be noted that these studies had predominantly focused on aerobic biodegradation of NAs in terrestrial environments, and knowledge about anaerobic degradation pathway and the potential for anaerobic NA biodegradation is still limited. Besides, biotransformation of NAs is even less understood in marine environment (Zan et al. 2022). It has long been recognized that nitrate pollution in coastal environment is mainly caused by intensive anthropogenic activities (Guo et al. 2020), and nitrate has been confirmed to be widely distributed in the offshore environment, coexisting with pollutants such as petroleum hydrocarbons and aromatic compounds (Capone and Bautista 1985, Hee-Sung Bae et al. 2002, Laufer et al. 2016. Studies have shown that the nitrate concentrations in the sediments of the Pearl River Estuary ranged from 6.6 to 92.1 mg kg −1 (Hong et al. 2019), while those in the groundwater of India were as high as 630.7 mg L −1 (Rina et al. 2013), indirectly posing a potential risk to the marine environment. In the Pearl River Delta, nitrate has been found to affect the biotransformation properties of microorganisms by interacting with a range of enzymes involved in metabolism or biodegradation (Su et al. 2018;Xu et al. 2014). Therefore, it is necessary to further study the biotransformation of NAs in marine environment, especially in the coastal areas that are highly affected by human activities.
In the present study, a facultative NA-degrading bacterium, widely distributed in the marine environment, was isolated from marine sediment, and its ability to transform surrogate NAs in the presence of nitrate under aerobic and anaerobic conditions was studied. Investigations of nitrate conversion, degradation intermediates, and draft genome sequencing were performed to explore NAs biodegradation characteristics in the marine environment as well as NAs biodegradation and nitrate removal pathways. The results obtained could help to improve our understanding of the characteristics of NAs biodegradation by marine bacteria.
Materials and methods
Surrogate NAs and microbial strain CHCA (C 7 H 12 O 2 , CAS: 98-89-5, purity: 99%, Meilunbio, China) was selected as a common model NA (Clothier and Gieg 2016;Demeter et al. 2014), and was dissolved in 0.1 mol L −1 NaOH solution to prepare a 10.0 g L −1 stock solution for later use. The microbial strain was obtained from the sediments of Dalian Bay (39°05′ N, 121°66′ E), China. The specific methods of strain enrichment and screening are provided in Supplementary Text S1.
Degradation of CHCA by the microbial strain
The CHCA degradation experiments were performed as follows: the microbial strain was pre-cultured in sterile 2216E medium for 24 h at 25 °C, and then subjected to centrifugation (10,000 × g, 10 min) in the logarithmic phase of growth (OD 600 = 1.5). The cell pellets were washed twice with sterile artificial sea water (ASW) and resuspended in 100-mL Erlenmeyer flasks containing 50 mL of ASW with 0.1 mL of trace medium (see Text S2 for ASW and trace medium composition) for aerobic CHCA degradation (the flasks were sealed with sterilized cotton stopper for oxygen exchange), and in 100-mL serum bottle for anaerobic degradation (the reaction system was exposed to highpurity He for 10-15 min to ensure an oxygen-free environment). The initial OD 600 of both the degradation systems was 0.05, initial CHCA concentration was 20 mg L −1 , and nitrate concentration was 50 mg L −1 . The experiments were performed in dark, and the pH was adjusted to 8.0 ± 0.2. The control groups were inactivated by high temperature (121 °C for 20 min), and all the experiments were performed in triplicate.
Extraction and detection of NAs
Samples were collected at 0, 12, 36, 60, 84, and 96 h of aerobic and anaerobic degradation of CHCA, respectively. Then, 1 mL of the collected samples was added to a 2-mL centrifuge tube containing 50 μL of NaOH (1 mol L −1 ), thoroughly mixed for 1 min, and centrifuged (8000 × g for 10 min). The supernatants obtained were collected and acidified with H 2 SO 4 (1 mol L −1 ) to pH < 2, and extracted thrice with dichloromethane (v/v = 1:1). The extracts were concentrated, dried with anhydrous Na 2 SO 4 , and quickly evaporated at 35 °C. Finally, the extracts were transferred to a 2-mL vial, and the solvent was evaporated in a gentle stream of nitrogen. Subsequently, 100 μL of derivatization reagent tert-butyldimethylsilyl (Sigma-Aldrich, St. Louis, MO, USA) were added to 100 μL of the extractive concentrates and the mixture was placed in a water bath at 60 °C for 20 min. Lastly, the derivatized samples were quantified by gas chromatography-mass spectrometry (GC-MS, Shimadzu Corporation GCMS-QP2020, Japan) (see Text S3 for detailed method).
Determination of chemical indices
Nitrate, nitrite, and ammonium contents were evaluated by sulfamic acid spectrophotometry, N-(1-naphthyl)-2-ethylenediamine spectrophotometry, and Nessler reagent spectrophotometry, respectively. The total organic carbon (TOC) was measured by an organic carbon meter (multifunctional N/C 3100, Analytik Jena GmbH, Germany), and the optical density of the microbial strain was determined by an ultraviolet spectrophotometer (SP-756P, Spectrum of Shanghai, China) at a wavelength of 600 nm (OD 600 ). The gaseous nitrogen was analyzed by gas chromatograph-thermal conductivity detector (GC7900-TCD, Techcomp of Shanghai, China). A total of 1 mL of headspace gas was injected, the temperature of the injection port and detector was 120 °C, and the carrier gas was high-purity He with a flow rate of 1.0 mL min −1 .
Strain identification and genome sequencing
The 16S rRNA of the microbial strain was amplified by the general PCR method of prokaryotic bacteria, and the gene primers 27F (5'-AGA GTT TGA TCC TGG CTC AG-3') and 1492R (5'-GGT TAC CTT GTT ACG ACT T-3') were used. The PCR conditions were as follows: pre-denaturation at 94 °C for 5 min, followed by 35 cycles of denaturation at 94 °C for 1 min, annealing at 55 °C for 1 min, and extension at 72 °C for 1 min, and a final repair and extension at 72 °C for 10 min. The 16S rRNA gene sequence was compared in NCBI (http:// www. ncbi. nlm. nih.), and the phylogeny was established using MEGA5 software program (https:// mega51. softw are. infor mer. com/).
The Illumina HiSeq 2500 sequencing platform and PE150-100X library-building sequencing strategy were employed for draft genome sequencing. The clean data were obtained by removing the adapters and processing lowquality sequences from the raw data. SOAP denovo version 2.04 software was employed to assemble the sequence, select the best assembly result, and fill the hole and optimize the assembly result (Lim et al. 2016), and all these processes were accomplished by Novogene Technology Co., Ltd. (see Text S4 for specific methods).
Characteristics of CHCA degradation and nitrate removal
The isolated microbial strain, labeled as strain SJ18, was grown in 2216E medium and then inoculated onto 2216E solid agar plate containing 50 mg L −1 NAs to examine its colony characteristics. The colony appeared round, translucent, and white on the 2216E solid agar plate (Fig. S1). The 16S rRNA sequencing results showed that the similarity between strain SJ18 and Marinobacter profundi was > 99%. The gene size of strain SJ18 was 4,364,528 bp, a total of 3951 genes were annotated, and the average GC content was 59.84%. Based on these characteristics, strain SJ18 was identified as Marinobacter sp. strain SJ18 (GenBank Accession No. MH458950.1). Figure S2 of Supplementary Material illustrates the phylogenetic tree of strain SJ18.
The aerobic and anaerobic degradation of CHCA are shown in Fig. 1a, b. Under aerobic conditions, CHCA was rapidly degraded after the start of the experiment, and was completely degraded within 60 h. The maximum degradation rate appeared at around 24 h (degradation rate of about 0.43 mg h −1 ). In contrast, no decrease in CHCA was noted in the control (heat-killed group), indicating that abiotic factors have negligible influence on aerobic CHCA degradation. Moreover, the TOC content in the experimental group decrease from 18.9 ± 0.5 to 7.0 ± 0.1 mg L −1 , and the OD 600 gradually increased with the progress of degradation, reaching a maximum of 0.74 at the end of CHCA degradation (about 60 h). Subsequently, the OD 600 gradually decreased, suggesting that strain SJ18 could utilize CHCA as the only carbon source and that the cell density slowly decreased after complete consumption of CHCA. In addition, the nitrate concentration gradually decreased, ammonium concentration increased, and nitrite concentration showed no significant changes in the experimental group. The nitrate removal rate was > 70% throughout the experiment, with the rate of nitrate consumption being consistent with that of aerobic degradation of CHCA, and no nitrogen was captured during the degradation process, indicating that nitrate was reduced to ammonium, which was utilized by the bacterial cells. In contrast, no significant changes in the corresponding indicators were noted in the blank control group.
Under anaerobic conditions, CHCA was completely degraded within 84 h, with maximum degradation rate of about 0.37 mg h −1 (around 36 h), implying that the anaerobic degradation efficiency was lower than aerobic degradation efficiency. The TOC concentration in the experimental group decreased from 18.7 ± 0.3 to 8.5 ± 0.1 mg L −1 , and the change in OD 600 value was consistent with that noted under aerobic conditions. However, nitrite concentration continued to increase with the consumption of nitrate, and no significant changes were detected in the ammonium content. At the end of the CHCA degradation process, the nitrite concentration gradually decreased and nitrogen production was detected (data not shown). The nitrate removal rate reached 70%, and the nitrate consumption rate was consistent with the CHCA anaerobic degradation rate. These findings suggested the occurrence of denitrification during anaerobic degradation of CHCA. The fitting results of degradation curves showed that aerobic and anaerobic degradation of CHCA conformed to the pseudo-first-order kinetic reaction (Fig. S3). The reaction rate constant k was 0.0342 (R 2 = 0.936) and 0.0292 (R 2 = 0.946) for aerobic and anaerobic degradation, respectively. Pearson correlation analysis showed that both aerobic and anaerobic degradation of CHCA were significantly positively correlated with nitrate removal, with correlation coefficients of 0.871 (P < 0.05) and 0.843 (P < 0.05), respectively.
Intermediates and pathways of CHCA degradation
To explore the aerobic and anaerobic degradation pathways of CHCA, samples collected at 0, 36, 60, and 84 h of the experiment were subjected to GC-MS to detect the intermediate products. The results showed several intermediate products (Fig. 2), and the mass spectral data revealed the presence of cyclohexenecarboxylic acid (m/z = 198, 183) in both aerobic and anaerobic groups. The appearance of cyclohexenecarboxylic acid indicated that CHCA might have lost two electrons through dehydrogenation and formed a carbon-carbon double bond on the cyclohexane ring. This degradation pathway is generally considered to be typical β-oxidation and most of the aromatic or cycloalkane pollutants are known to be degraded via this pathway Whitby 2010). Glycerol (m/z = 205,147,73) was detected in the aerobic group, while lactic acid (m/z = 191, 147, 117) was found in the anaerobic group. Glycerol and lactic acid are presumed to be the oxidative hydrolysis products of fatty acids under aerobic and anaerobic conditions, respectively, and their presence implied that hydrolysis could open the ring of cyclohexenecarboxylic acid and eventually form short-chain fatty acids. However, no specific fatty acid products were detected during the degradation process, which might be owing to the fact that fatty acids are readily utilized by microorganisms, resulting in their short life during degradation, and this speculation has also been confirmed in some reports. Quesnel et al. (2011) found that an intermediate product of CHCA existed for a short period of time during the cyclohexylacetic acid biodegradation process and could not be captured. Nevertheless, although the product after ring opening could not be captured, it could be essentially confirmed that CHCA was mainly biodegraded through β-oxidation.
The whole genome sequencing was also employed to explore the CHCA degradation process. The COG, GO, and KEGG databases of strain SJ18 were analyzed to predict the biological process, functional genes, and degradation pathways of CHCA under aerobic and anaerobic conditions. The COG analysis results (Fig. S4) showed that the functional genes, including those involved in energy production and conversion, amino acid transport and metabolism, carbohydrate transport and metabolism, coenzyme transport and metabolism, lipid transport and metabolism, and cell wall/membrane/envelope biogenesis, were significantly enriched. The results of GO analysis (Fig. S5) revealed that the enriched genes in the genome of strain SJ18 were mainly involved in cellular process, metabolic process, binding, catalytic activity, nitrogen utilization, antioxidant activity, enzyme regulator activity, transporter activity, and other functions. The KEGG annotation results (Fig. 3) demonstrated that the pathways of membrane transport, coenzyme metabolism, carbohydrate metabolism, and amino acid metabolism were also enriched. In addition, pathways such as ABC transporters, fatty acid degradation and biosynthesis, degradation of ketone bodies, pyruvate metabolism and styrene degradation, chlorocyclohexane and chlorobenzene degradation, benzoate degradation, toluene degradation, and naphthalene degradation were also obviously enriched in strain SJ18. Some genes associated with CHCA degradation in these pathways, such as genes encoding long-chain acyl-CoA synthase (fadD), benzoate 1,2-dioxygenase (benA), catechol 1,2-dioxygenase (catA), 3-hydroxyacyl-CoA dehydrogenase (HADH), hydroxycyclohexene carboxylic acid dehydrogenase (benD), alcohol dehydrogenase (adh), and 3-hydroxy-3-methylglutaryl coenzyme A lyase (hmgL), have been confirmed to participate in β-oxidation and metabolism of alkanes and PAHs under aerobic and anaerobic conditions Cameron et al. 2019;Kung et al. 2013;McKew et al. 2021). These results indicated that the abovementioned functional genes may play an important role in the biodegradation of CHCA. Subsequently, the aerobic and anaerobic degradation pathways of CHCA were inferred based on the abovementioned findings as follows (Fig. 4): First, two hydrogen atoms from CHCA were removed by dehydrogenase, forming a carbon-carbon double bond (cyclohexenecarboxylic acid) on the cyclohexane ring. In the aerobic degradation of CHCA, the cyclohexenecarboxylic acid ring was opened by specific dioxygenase (such as benzoate 1,2-dioxygenase or catechol 1,2-dioxygenase) in the presence of oxygen to form fatty acids. In the anaerobic degradation of CHCA, cyclohexenecarboxylic acid was hydrolyzed and dehydrogenated to sequentially generate 1-hydroxy-CHCA and cyclohexanonecarboxylic acid, which were subsequently hydrolyzed and opened to form short-chain fatty acids. Finally, glycerol and lactic acid were formed through multiple β-oxidation under aerobic and anaerobic conditions, respectively.
Nitrate metabolism pathway during CHCA degradation
Nitrate reduction to ammonia and denitrification were considered to accompany aerobic and anaerobic degradation of CHCA, respectively. Hence, the transformation of nitrate was further elucidated using genome sequencing. The genome sequence of strain SJ18 was compared with the KEGG database, and the nitrogen metabolism pathway of strain SJ18 was determined (Fig. 5). A total of seven genes related to nitrogen metabolism were found, including nitrate reductase (NarGHI), nitrate assimilation reduction catalytic subunit (NasAB), nitrite reductase (NirS), nitrite reductase subunits (NirBD), nitric oxide reductase (NorBC), nitrous oxide reductase (NosZ), and nitrite oxidoreductase (NxrAB). Among them, NarGHI, NirS, NorBC, and NosZ are considered as common denitrification genes that could perform complete denitrification process, whereas NarGHI and NirBD could convert nitrate to ammonium by dissimilatory reduction (Lu Marchant et al. 2017). However, NarGHI is oxygen-sensitive and its expression is inhibited under aerobic conditions . As strain SJ18 lacks gene encoding NapAB (periplasmic nitrate reductase, which can reduce nitrate to nitrite under aerobic conditions), nitrate is converted to nitrite by NasAB under aerobic conditions.
Based on the inorganic nitrogen indicators and genomic results, the nitrate transformation process during aerobic and anaerobic biodegradation of CHCA was proposed as follows: under aerobic conditions, nitrate as an electron donor is transported from extracellular to intracellular region through nitrate transporter (Nrt), and NasAB reduces nitrate to nitrite. Then, nitrite is reduced to ammonium by NirBD. Finally, ammonium is transformed to glutamate by glutamine synthase (glnA) and glutamate dehydrogenase (gudB/rocG), and glutamate is subsequently metabolized. Under anaerobic conditions, nitrate is reduced to nitrite by NarGHI, nitrite is further reduced to nitric oxide (NO) by NirS, NO is reduced to nitrous oxide by NorBC, and nitrous oxide is reduced to nitrogen by NosZ.
In general, strain SJ18 not only exhibited the ability to reduce nitrate to ammonium via dissimilated nitrate reduction to ammonium (DNRA) pathway and assimilated nitrate reduction to ammonium (ANRA) pathway, but also showed denitrification ability. During the nitrate removal process, the first step of the DNRA pathway is denitrification by NarGHI to reduce nitrate. In the ANRA pathway, NasAB assimilates and reduces nitrate to nitrite. However, owing to the lack of NirA-or NIT-6-coding genes, strain SJ18 was unable to reduce nitrate to ammonium through the ANRA pathway. Therefore, the nitrite produced in the first step of the ANRA pathway had to be further metabolized through other nitrogen metabolism pathways such as denitrification or the DNRA pathway. In addition, strain SJ18 also encoded nitrite oxidoreductase, which could nitrify nitrite to form nitrate.
Overall, the nitrate metabolic pathways under aerobic and anaerobic biodegradation of CHCA by strain SJ18 were obviously different. Under aerobic conditions, nitrate was gradually consumed with the degradation of CHCA, along with cell growth (increase in OD value), indicating that nitrate was converted to ammonium via the ANRA and DNRA pathways, and that the ammonium produced was used for cell growth. Under anaerobic conditions, the nitrite concentration rapidly increased with nitrate consumption, and nitrogen was detected at the end of the process. Denitrification is considered to be an important pathway in nitrate metabolism under anaerobic conditions. It is believed that nitrite accumulation in the degradation system might be owing to the faster rate of nitrate reduction (by NarGHI) than nitrite reduction (by NirS), NO reduction (by NorBC), and nitrous oxide reduction (by NosZ).
Conclusion
This study demonstrated that CHCA was completely degraded by Marinobacter sp. SJ18 within 60 and 84 h under aerobic and anaerobic conditions, respectively, and a significant positive correlation was found between nitrate removal and CHCA degradation. The CHCA degradation pathways were speculated as follows: cyclohexane ring is first dehydrogenated to form cyclohexene, and the cyclohexene carboxylic acid ring is opened via the action of dioxygenase to generate fatty acid under aerobic conditions and via β-oxidation under anaerobic conditions. In addition, nitrate is utilized through ANRA and DNRA pathways under aerobic conditions and removed by denitrification under anaerobic conditions. Thus, this study provides a basis for alleviating combined pollution of NAs and nitrate in marine environment with frequent anthropogenic activities, although the behavior and biotransformation of NAs in marine environment require further exploration.
Author contribution Shuaijun Zan: investigation, experimental design, methodology, writing-original draft, writing-review/editing. Jing Wang: planned research, involved in supervised all analyses, data interpretation and discussion as a Project Leader. Jingfeng Fan: participated in project cooperation. Yuan Jin: assisted in conducting some experiments. Zelong Li: assisted in conducting some sample analyses and writing editing. Miaomiao Du: assisted in conducting some experiments. | 5,623.8 | 2022-12-13T00:00:00.000 | [
"Engineering"
] |
Free-Radical Propagation Rate Coefficients of Diethyl Itaconate and Di-n-Propyl Itaconate Obtained via PLP–SEC
The propagation step is one of the key reactions in radical polymerization and knowledge about its kinetics is often vital for understanding and designing polymerization processes leading to new materials or optimizing technical processes. Arrhenius expressions for the propagation step in free-radical polymerization of diethyl itaconate (DEI) as well as di-n-propyl itaconate (DnPI) in bulk, for which propagation kinetics was yet unexplored, were thus determined via pulsed-laser polymerization in conjunction with size-exclusion chromatography (PLP-SEC) experiments in the temperature range of 20 to 70 °C. For DEI, the experimental data was complemented by quantum chemical calculation. The obtained Arrhenius parameters are A = 1.1 L·mol–1·s–1 and Ea = 17.5 kJ·mol−1 for DEI and A = 1.0 L·mol–1·s–1 and Ea = 17.5 kJ·mol−1 for DnPI.
Introduction
Esters of itaconic acid (2-methylenesuccinic acid) show the potential to replace styrene as a comonomer in the synthesis of polymer resins [1] or as composite material with cotton [2]. This is especially interesting given the fact that itaconic acid can be produced by many microbiological organisms such as Aspergillus terreus and thus can be obtained from biomass while styrene is petrol based [3][4][5].
In order to fully utilize itaconic acid esters in free-radical polymerizations, the exact knowledge of the reaction rate coefficients is highly advantageous. Here, we are focusing on the propagation rate coefficient, k p . While in a number of publications [6][7][8][9][10][11][12][13], data for k p for a few members of the series of itaconic acid homo-diesters are already available, this series is lacking some data, most notably for the diethyl and the di-n-propyl ester. This work intends to fill these gaps by applying pulsed-laser polymerization in conjunction with the size-exclusion chromatography (PLP-SEC) method as well as quantum chemical prediction in order to obtain Arrhenius expressions for k p of these monomers.
PLP is a highly specialized technique for studying the kinetics of radical polymerization reactions. It involves using a pulsed laser to initiate polymerization in short bursts, allowing for precise control over reaction conditions and reaction kinetics. One of the key advantages of PLP is that it allows for the measurement of propagation rate coefficients of radical polymerization with high precision and accuracy, and is thus IUPAC-recommended. The propagation rate coefficient, k p , is one of the kinetic key parameters that describe how fast the chain-growth reaction proceeds and how sensitive it is to changes in reaction conditions such as temperature and concentration of reactants. Measuring kinetic coefficients in radical polymerization is important for understanding and optimizing radical polymerization processes, as it allows to predict and control the behavior of the reaction under different conditions and in addition enables to predict the properties of the resulting polymer. The kinetic information can thus be used to design new polymers with specific properties, optimize reaction conditions for industrial-scale production, and troubleshoot problems that arise during polymerization.
PLP experiments: All samples were degassed by a gentle flow of argon for at least 10 min. Pulsed-laser-polymerizations were performed in silica glass cells which were tempered by a heating bath for at least 10 min prior to the start of irradiation. The laser used for irradiation is an ATLEX 1000 I XeF exciplex laser operating at 351 nm with a pulse energy of 1-7 mJ. Repetition rates varied from 0.25 to 2 Hz. The samples were prepared by dissolving 20 mM to 75 mM of DMPA in the respective itaconate ester. No further solvent was added.
After irradiation, the mixture was used directly for SEC analysis without isolation of the polymer.
Density measurements: A gas pycnometry system (AccuPyc II from Micromeritics) was used to measure room temperature densities of the monomers. Since the instrument did not have a temperature control system, room temperature densities were used for the whole temperature range. The following densities were obtained: 1.044 g•mL −1 (DEI) and 1.025 g•mL 1 (DnPI).
Size-exclusion chromatography: Size-exclusion chromatography was performed using an Agilent 1260 G1310B iso pump with a PSS degasser. Injection was performed with an Agilent 1260 ALS G1329B autosampler onto three PSS SDV 5 μm bead columns (10 6 ,10 4 ,10 3 Angstrom) which was embedded in a PSS TCC6000 column oven at 35 °C. THF at a flow rate of 1 mL•min −1 was used as the solvent. The detector was an Agilent 1260 RID G1362A. Access to absolute molecular weight distributions was obtained by a PS-PMMA calibration in conjunction with the Mark-Houwink-constants of DEI and DnPI, which have been determined previously [15]. These values refer to the solvent toluene, but may also be used for THF, as suggested by Szablan et al. [7]. Scheme 1. Carbon atom numbering for DnPI used in the NMR assignment.
PLP experiments: All samples were degassed by a gentle flow of argon for at least 10 min. Pulsed-laser-polymerizations were performed in silica glass cells which were tempered by a heating bath for at least 10 min prior to the start of irradiation. The laser used for irradiation is an ATLEX 1000 I XeF exciplex laser operating at 351 nm with a pulse energy of 1-7 mJ. Repetition rates varied from 0.25 to 2 Hz. The samples were prepared by dissolving 20 mM to 75 mM of DMPA in the respective itaconate ester. No further solvent was added.
After irradiation, the mixture was used directly for SEC analysis without isolation of the polymer.
Density measurements: A gas pycnometry system (AccuPyc II from Micromeritics) was used to measure room temperature densities of the monomers. Since the instrument did not have a temperature control system, room temperature densities were used for the whole temperature range. The following densities were obtained: 1.044 g·mL −1 (DEI) and 1.025 g·mL −1 (DnPI).
Size-exclusion chromatography: Size-exclusion chromatography was performed using an Agilent 1260 G1310B iso pump with a PSS degasser. Injection was performed with an Agilent 1260 ALS G1329B autosampler onto three PSS SDV 5 µm bead columns (10 6 ,10 4 ,10 3 Angstrom) which was embedded in a PSS TCC6000 column oven at 35 • C. THF at a flow rate of 1 mL·min −1 was used as the solvent. The detector was an Agilent 1260 RID G1362A. Access to absolute molecular weight distributions was obtained by a PS-PMMA calibration in conjunction with the Mark-Houwink-constants of DEI and DnPI, which have been determined previously [15]. These values refer to the solvent toluene, but may also be used for THF, as suggested by Szablan et al. [7].
Quantum chemical calculations: A propagating DEI radical was mimicked by a DEI molecule with a methyl group bound to the former olefinic double bond. Minimum structures of the reactants (consisting of the radical and a monomer) and the product were found using the crest program by Grimme [16]. Repeated runs on different conformers of reactant and product were performed and the global minima were identified. These global minima were optimized with ORCA 4.2.1 and subsequently a NEB-TS-calculation was performed to find the TS structure [17,18].
Results
The Arrhenius parameters for the propagation step of the sterically hindered monomer DEI were calculated using quantum chemistry and transition state theory (TST). In order to obtain experimental values for k p , PLP-SEC experiments were performed for DEI in bulk in the temperature range between 20 and 70 • C and for DnPI in the temperature range of 30 to 70 • C. For the basic principles of the IUPAC-recommended PLP-SEC method and for experimental peculiarities, please also refer to [19,20].
Prediction of Arrhenius Parameters for DEI
k p as function of temperature, k p (T), can be calculated from the partition functions Q n of the involved molecules n and the electronic barrier height E 0 (see Equation (1)). κ denotes a correction factor in order to account for tunneling (assumed to be equal to 1 here and will be left out in the following), c is the inverse of the volume used in the translational partition function, k B is the Boltzmann constant, R the universal gas constant, and m is the molecularity of the reaction [21].
This expression can also be reformulated using the thermodynamical properties entropy of activation, ∆S ‡ , and enthalpy of activation, ∆H ‡ : From Equation (2), the Arrhenius parameters for k p can be calculated directly by Equations (3) and (4). The formulae for calculating the individual entropy components can be found in the Supporting Information.
ZPVE is the vibrational zero-point energy and ∆∆H ‡ a temperature correction, which is calculated for a single species by Equation (5) where ν j denotes the frequency of the j-th vibration and h is the Planck constant.
Itaconates are 1,1-disubstituted ethylene derivatives and thus four different propagation variants are possible. The attacking radical as well as the resulting macroradical can each be centered on the tertiary or the primary carbon. Radical stability favors the radical to be centered on the tertiary carbon, while steric hindrance favors an attack on the primary carbon, resulting in the formed radical also to be tertiary. Consequently, only this reaction has been considered in this work. One might improve the results presented here by considering all variants and Boltzmann-weighing them.
Geometries and frequencies are rather insensitive to the level of theory, so using a cheap method is recommended. The electronic barrier, on the other side, is typically very sensitive to the level of theory used, so a high-quality method should be used. However, for reasons of computational power, in this work only UHF/6-31G(d) and B3LYP/def2-TZVP were used. The calculations were carried out in the gas phase. More accurate results can be obtained by including a solvent model to include interactions between the solute and solvent. These can influence for example electronic energy levels and vibrational frequencies [22]. Since the monomer in question is rather exotic, no implicit solvent model parameters exist for it, to our best knowledge. Consequently, an explicit solvent model would have to be used. However, this comes at a largely increased computational cost and is far beyond the available resources. Consequently, no solvent model could be used.
The vibrational frequencies were used unscaled as well as scaled. For UHF/6-31G(d), the average of the scaling factors reported by Scott et al. [23] was used, which amounts to 0.90064. For B3LYP/def2-TZVP, the average of the scaling factors reported by Kesharwani et al. [24] was used, which amounts to 0.9870.
The obtained minimum structures (on UHF/6-31G(d) level of theory) are depicted in the Appendix B and their .xyz-coordinates are also given. It is clearly visible that both molecules are approximately orthogonally aligned to each other in the reactant structure, in order to minimize steric repulsion. In the transition state (TS), however, both molecules are aligned in parallel, in order to facilitate overlap of the involved orbitals. The product structure is very similar to the TS structure, indicating a late TS. This is unexpected since exothermic reactions as these typically exhibit an early TS.
Obtained electronic barrier heights and Arrhenius parameters are shown in Table 1. The Arrhenius activation energy is very similar for both levels of theory. Likely, a high level of theory such as CCSD(T) would increase the accuracy significantly. The Arrhenius prefactors differ by approximately one order of magnitude for both levels of theory. Both, however, are significantly lower compared to other typical classes of monomers such as acrylates, where they are in the order of 10 6 L · mol −1 · s −1 [25]. This can be explained by the bulky substituents at the olefinic bond, which hinder the approach of a new monomer for reaction. Table 1. Calculated electronic barriers, E 0 , and Arrhenius parameters (prefactor, A, and activation energy, E A ) for the propagation step of DEI at 20 • C. The frequencies used for the calculation were used unscaled as well as scaled by their respective scaling factors.
PLP Experiments
The PLP-SEC-method introduced by Olaj and coworkers [26] offers a very easy way to determine propagation rate coefficients in radical polymerization. A sample consisting of monomer, photoinitiator, and solvent (if applicable) is subjected to repeated, short laser pulses. The laser pulse initiates polymerization by instantaneous formation of radicals. In between the pulses, the radicals propagate (and also terminate to some extent) but upon the following laser pulse, the radical concentration rises sharply again, and the termination probability is thus greatly enhanced. The length L of the chains which were initiated by a laser pulse and were terminated in the moment of the following laser pulse corresponds to L = k p · c M · t 0 with t 0 being the time between two consecutive laser pulses and c M the monomer concentration. However, not all radicals terminate at the first following laser pulse, but may be stopped at the second or third (i-th) subsequent pulse. The corresponding chain length is then L i = i · k p · c M · t 0 , provided the monomer conversion is kept relatively low. In any case, a polymer with that specific chain length of L, 2L, 3L, . . . is more frequently occurring than that having other chain lengths, and thus, structured chain-length distributions with multiple maxima are typically observed. The structure can be evaluated to obtain estimates for L i and the inflection point at the low molecular weight side of the peak typically yields the most precise results [26]. The corresponding chain length is then ⋅ ⋅ ⋅ , provided the monomer conversion is kept relatively low. In any case, a polymer with that specific chain length of L, 2L, 3L,… is more frequently occurring than that having other chain lengths, and thus, structured chain-length distributions with multiple maxima are typically observed. The structure can be evaluated to obtain estimates for Li and the inflection point at the low molecular weight side of the peak typically yields the most precise results [26]. The inflection points were determined by evaluating the maxima of the derivative of the molar mass distribution, as shown in Figure 1. Itaconates propagate slowly and also terminate slowly [8], which presents a challenge for the PLP method. In this work, we addressed this challenge by choosing an extremely slow laser repetition rate, which allowed us to obtain successful results.
Diethyl Itaconate
The PLP-SEC data obtained for DEI are presented in Table 2. For each temperature, two separate solutions of the photoinitiator 2,2-dimethoxy-2-phenylacetophenone (DMPA) in DEI were prepared and from each solution two experiments were performed. The molar masses at the inflection points, MInf, were extracted from complete molar mass distributions obtained via size-exclusion chromatography. Itaconates propagate slowly and also terminate slowly [8], which presents a challenge for the PLP method. In this work, we addressed this challenge by choosing an extremely slow laser repetition rate, which allowed us to obtain successful results.
Diethyl Itaconate
The PLP-SEC data obtained for DEI are presented in Table 2. For each temperature, two separate solutions of the photoinitiator 2,2-dimethoxy-2-phenylacetophenone (DMPA) in DEI were prepared and from each solution two experiments were performed. The molar masses at the inflection points, M Inf , were extracted from complete molar mass distributions obtained via size-exclusion chromatography.
To verify the integrity of the data, the obtained values of k p shall be independent of the initiator concentration and the repetition rate. Furthermore, the position of the second additional PLP peak shall be at double the molar mass of the first peak and analogously the third peak at triple the molar mass of the first peak (or 1.5 times of the second peak). This requirement rests on the assumption that k p is chain-length independent and is called the "consistency criterion" of PLP-SEC. It was introduced for distinguishing the additional PLP peaks from any other occurring peaks that are not kinetically relevant.
As can be seen in Figure 2, the ratio of the second peak position divided by the first is systematically lower than 2, indicating a significant chain length dependence of propagation for chain lengths up to ca. 150 to 200. For the ratio between the third and second peak, however, the agreement with the expected ratio of 1.5 is good. This observation has been made for many other systems studied via PLP-SEC [27][28][29][30] and there still is discussion, whether this effect-which is clearly going beyond the generally accepted chain length dependency of k p for short macroradicals up to chain lengths of ca. 5 [31,32]-is reflecting a real chain-length dependency up to longer chain-lengths or is an SEC artifact [33,34]. PLP-SEC alone is apparently not capable of deciding this open question and it is now generally accepted that the k p data obtained are estimates subject to uncertainty, due to such as yet unexplained effects. To verify the integrity of the data, the obtained values of shall be independent of the initiator concentration and the repetition rate. Furthermore, the position of the second additional PLP peak shall be at double the molar mass of the first peak and analogously the third peak at triple the molar mass of the first peak (or 1.5 times of the second peak). This requirement rests on the assumption that kp is chain-length independent and is called the "consistency criterion" of PLP-SEC. It was introduced for distinguishing the additional PLP peaks from any other occurring peaks that are not kinetically relevant.
As can be seen in Figure 2, the ratio of the second peak position divided by the first is systematically lower than 2, indicating a significant chain length dependence of propagation for chain lengths up to ca. 150 to 200. For the ratio between the third and second peak, however, the agreement with the expected ratio of 1.5 is good. This observation has been made for many other systems studied via PLP-SEC [27][28][29][30] and there still is discussion, whether this effect-which is clearly going beyond the generally accepted chain length dependency of kp for short macroradicals up to chain lengths of ca. 5 [31,32]-is reflecting a real chain-length dependency up to longer chain-lengths or is an SEC artifact [33,34]. PLP-SEC alone is apparently not capable of deciding this open question and it is now generally accepted that the kp data obtained are estimates subject to uncertainty, due to such as yet unexplained effects. Figure 3 shows the obtained k p values for different temperatures and different laser pulse frequencies as a function of initiator concentration. It is clearly visible that k p is independent of initiator concentration-a strict criterion for a successful PLP-SEC experiment -, but, however, is not independent of the repetition rate, which alters the chain length of the produced polymer, as discussed above. With increasing repetition rate, k p increases. Figure 3 shows the obtained values for different temperatures and different laser pulse frequencies as a function of initiator concentration. It is clearly visible that is independent of initiator concentration-a strict criterion for a successful PLP-SEC experiment -, but, however, is not independent of the repetition rate, which alters the chain length of the produced polymer, as discussed above. With increasing repetition rate, increases. In order to extract the Arrhenius parameters from the obtained kp data, a regular Arrhenius fit is employed (see Figure 4). The observed chain-length dependency of kp is not taken into account, but is accepted as uncertainty; consequently, the Arrhenius parameters, given in Table 3, will be average values over the whole chain length regime. It can already be noted here that these Arrhenius parameters fit very well to the ones obtained for other itaconates, but this will be discussed in greater detail below. In order to extract the Arrhenius parameters from the obtained k p data, a regular Arrhenius fit is employed (see Figure 4). The observed chain-length dependency of k p is not taken into account, but is accepted as uncertainty; consequently, the Arrhenius parameters, given in Table 3, will be average values over the whole chain length regime. It can already be noted here that these Arrhenius parameters fit very well to the ones obtained for other itaconates, but this will be discussed in greater detail below. Table 3. Table 3.
Di-n-Propyl Itaconate
Other than for DEI, only two stock solutions of DnPI with different concentrations of DMPA were prepared. The pulse energy was not recorded explicitly during the irradiation but was in a similar range as for the DEI experiments. For all samples, 1000 pulses were irradiated onto the sample, guaranteeing that the overall monomer conversion was low, as required for a successful PLP-SEC-experiment. The resulting PLP-data are collated in Table 4. Table 4. PLP data for DnPI: initiator concentration, c Ini , temperature T, total number of pulses, n pulses , laser pulse frequency, ν, molecular weight at the various inflection points, M inf , and the average k p of all obtained inflection points. The same consistency criteria as for DEI were checked (see Figure 5). When inspecting the ratio of molar masses of the inflection points, clearly a higher scattering is observed compared to DEI. The data for the ratio of peaks 2 and 1 also shows that most samples are well below the anticipated value of 2, again pointing towards a chain-length dependency of k p .
As can be seen in Figure 6, just as for DEI, the obtained k p values are independent of initiator concentration and increase with increasing laser repetition rate. The same reasoning as for DEI can be applied here.
Since for 30 • C only one data point exists, the uncertainty for the Arrhenius fit, which is shown in Figure 7, was assumed to be similar to that of the other data points. (1.0 ± 0.5) · 10 4 17.5 ± 1.2 The same consistency criteria as for DEI were checked (see Figure 5). When inspe ing the ratio of molar masses of the inflection points, clearly a higher scattering is observ compared to DEI. The data for the ratio of peaks 2 and 1 also shows that most samples well below the anticipated value of 2, again pointing towards a chain-length dependen of . As can be seen in Figure 6, just as for DEI, the obtained values are independ of initiator concentration and increase with increasing laser repetition rate. The same r soning as for DEI can be applied here. Since for 30 °C only one data point exists, the uncertainty for the Arrhenius fit, whi is shown in Figure 7, was assumed to be similar to that of the other data points. Since for 30 °C only one data point exists, the uncertainty for the Arrhenius fit, which is shown in Figure 7, was assumed to be similar to that of the other data points. Table 5. Table 5.
Discussion
Inspecting the experimental results of the Arrhenius fits, one notices that the results for DEI and DnPI are extremely close to each other. A correlation analysis for the Arrhenius parameters has not been performed, however. Within the series of itaconate esters (see Table 6), DEI and DnPI are on the lower end of prefactor and activation energy values. The quantum chemical calculations show good agreement with the experimental results. Compared to the experimental values, the prefactors match nearly perfectly on B3LYP/def2-TZVP level. The predicted activation energy is about 5 kJ/mol too high which is still a very good agreement, considering the rather low level of theory. This good agreement is likely due to error compensation.
Haehnel et al. [35] investigated the family behavior of linear alkyl methacrylate monomers. They stated that high frequency factors are accompanied by high values of the activation energy, making it difficult to identify any trends with Arrhenius parameters. Therefore, the actual values of k p at a given temperature were used for revealing family behavior of monomers. In the case of linear alkyl methacrylates, k p (at 50 • C) increases linearly with the length of the side chain [34]. For itaconates, we find an opposite trend: as indicated in Figure 8, k p clearly decreases with increasing size of the ester groups. This can possibly be reasoned by a pre-structuring effect of the monomer, as has also been put forward by Haehnel et al. [35]. For alkyl methacrylates, longer side chains lead to a more structured monomer bulk. Due to stronger dispersive interactions, the longer side chains align more pronouncedly with each other. This effect also aligns the olefinic bonds such that propagation is easily possible and promoted with ester groups size. Itaconates, however, have two side chains. This allows for two different patterns of alignment. The first is identical to the one discussed for methacrylates. The second pattern is not stacked, but shifted, in a zig-zag motif. This alignment increases the distance of the olefinic bonds, leading to decreased propagation rates. Apparently for itaconates, the second pattern is preferred. This interesting effect might be confirmed by MD simulations in the future. Table 6.
Conclusions
Arrhenius expressions for the propagation step in free-radical polymerization of diethyl itaconate (DEI) as well as di-n-propyl itaconate (DnPI) in bulk, for which propagation kinetics was yet unexplored, were determined via pulsed-laser polymerization in conjunction with size-exclusion chromatography (PLP-SEC) experiments in the temperature range of 20 to 70 °C. The values were found to be very similar for both monomers. Together with literature data for other itaconate monomers, a family behavior was found, where kp clearly decreases with increasing size of the ester groups. This behavior is different to other monomers and can possibly be explained by a pre-structuring effect of these rather sterically hindered monomers.
Data Availability Statement:
The data presented in this study are available in this work.
Conflicts of Interest:
The authors declare no conflict of interest.
Appendix A
Formulae for Calculating the Individual Entropy Components 2 ℎ 1 3 2 Figure 8. Values of k p at 50 • C for the family of itaconates using Arrhenius parameters from Table 6.
Conclusions
Arrhenius expressions for the propagation step in free-radical polymerization of diethyl itaconate (DEI) as well as di-n-propyl itaconate (DnPI) in bulk, for which propagation kinetics was yet unexplored, were determined via pulsed-laser polymerization in conjunction with size-exclusion chromatography (PLP-SEC) experiments in the temperature range of 20 to 70 • C. The values were found to be very similar for both monomers. Together with literature data for other itaconate monomers, a family behavior was found, where k p clearly decreases with increasing size of the ester groups. This behavior is different to other monomers and can possibly be explained by a pre-structuring effect of these rather sterically hindered monomers. | 6,175.4 | 2023-03-01T00:00:00.000 | [
"Materials Science"
] |
Prenatal Progestin Exposure-Mediated Oxytocin Suppression Contributes to Social Deficits in Mouse Offspring
Epidemiological studies have shown that maternal hormone exposure is associated with autism spectrum disorders (ASD). The hormone oxytocin (OXT) is a central nervous neuropeptide that plays an important role in social behaviors as well as ASD etiology, although the detailed mechanism remains largely unknown. In this study, we aim to investigate the potential role and contribution of OXT to prenatal progestin exposure-mediated mouse offspring. Our in vitro study in the hypothalamic neurons that isolated from paraventricular nuclei area of mice showed that transient progestin exposure causes persistent epigenetic changes on the OXT promoter, resulting in dissociation of estrogen receptor β (ERβ) and retinoic acid-related orphan receptor α (RORA) from the OXT promoter with subsequent persistent OXT suppression. Our in vivo study showed that prenatal exposure of medroxyprogesterone acetate (MPA) triggers social deficits in mouse offspring; prenatal OXT deficiency in OXT knockdown mouse partly mimics, while postnatal ERβ expression or postnatal OXT peptide injection partly ameliorates, prenatal MPA exposure-mediated social deficits, which include impaired social interaction and social abilities. On the other hand, OXT had no effect on prenatal MPA exposure-mediated anxiety-like behaviors. We conclude that prenatal MPA exposure-mediated oxytocin suppression contributes to social deficits in mouse offspring.
INTRODUCTION
Autism spectrum disorders (ASD) are a series of neurodevelopmental disorders characterized by symptoms including social deficits and restricted or repetitive behaviors (1,2). While the potential mechanism for ASD remains unclear, many factors, including environmental exposure, sex, and epigenetic modifications, are reported to be associated with ASD development (1,3,4). It has been reported that ASD patients have increased steroidogenic activity and that abnormal steroid levels may be involved in ASD development (5,6). We have previously reported that maternal exposure to either progestin (7,8) or androgens (9) contribute to autism-like behaviors in offspring; and the epidemiological study shows that maternal hormonal exposure may be associated with autism development (10).
Oral contraceptive hormones, primarily including estrogens and progestins, were originally used starting around 60 years ago for birth control by preventing ovulation; this time period has been reported to coincide with the dramatic increase in ASD prevalence (8,10). Our epidemiological study has shown that the following 3 risk factors are highly associated with ASD: 1) Use of progestin to prevent threatened abortion, 2) Use of progestin contraceptives at the time of conception, and 3) prenatal consumption of progestin-contaminated food (10). We then hypothesize that maternal exposure to oral contraceptive hormones, especially progestin, may be associated with autism development.
Oxytocin (OXT) is a neuropeptide primarily secreted by h y p o t h a l a m i c n e u r o n s t h a t l o c a t e d i n e i t h e r t h e paraventricular nuclei (PVN) or supraoptic nuclei (SON) (11). OXT, in conjunction with oxytocin receptor (OXTR) (12), has been reported to play an important role in regulation of social recognition and anxiety-like behaviors (13)(14)(15)(16) as well as many other kinds of pathophysiological processes (17). OXT/OXTR signaling abnormalities have been associated with ASD (18,19). We have previously reported that maternal diabetes-mediated OXTR suppression contributes to social deficits in mouse offspring (20), while the detailed mechanism for the role of OXT in ASD development remains largely unknown (21).
Estrogen receptor b (ERb) is widely expressed in a variety of brain regions and has been reported to be associated with anxiety-like behaviors and ASD development (8,(22)(23)(24). We have previously reported that ERb expression is reduced in the amygdala, contributing to prenatal progestin exposure-mediated autism-like behaviors in rat offspring (7,8). Additionally, ERb regulates the expression of superoxide dismutase 2 (SOD2), modulating cellular oxidative stress (25). Interestingly, both ERb and SOD2 are suppressed in maternal diabetes-mediated autism-like mouse offspring (26). ERb is highly expressed and co-localized in OXT neurons in the hypothalamic region, and OXT may be regulated directly or indirectly by ERb, while the possible mechanism remains largely unknown (12,27,28).
In this study, we aim to investigate the role and mechanisms for maternal progestin exposure-mediated OXT suppression and its contribution to social behaviors in offspring. Our in vitro study in mouse hypothalamic neurons showed that transient treatment by 10µM of medroxyprogesterone acetate (MPA) for 3 days triggers persistent OXT suppression through epigenetic modifications and subsequent dissociation of ERb and retinoic acid-related orphan receptor a (RORA) (29) from the OXT promoter, indicating that ERb and RORA may play a role in progestin-mediated OXT suppression. We then conducted the in vivo mouse study, and we found that prenatal exposure to MPA triggers OXT suppression as well as autism-and anxiety-like behaviors in offspring. Prenatal OXT deficiency had no effect on prenatal MPA exposure-induced anxiety-like behavior, but it partly mimicked prenatal MPA exposure-mediated social deficits in offspring. We next conducted postnatal gene manipulation of ERb and RORA targeting to hypothalamic OXT neuron-located PVN area, and we found that postnatal ERb expression partly ameliorated prenatal MPA exposure-induced social deficits, while postnatal RORA expression had no effect. Furthermore, postnatal OXT peptide injection to the third ventricle partly ameliorated prenatal MPA exposure-induced social deficits in offspring as well. We conclude that prenatal MPA exposuremediated oxytocin suppression contributes to social deficits in mouse offspring. Supplementary Information (see Data S1), and the details for used primers are available in Table S1.
Generation of Expression Lentivirus
The mouse ERb expression lentivirus was prepared previously in our lab (20). The cDNA for mouse RORA was purchased from Open Biosystems and then amplified using the following primers with underlined restriction sites: RORA forward primer: 5'-gtac -gggccc-atg gag tca gct ccg gca gcc -3' (ApaI) and RORA reverse primer: 5'-gtac -tctaga-tta ccc atc gat ttg cat ggc -3' (Xba1), and then subcloned into the pLVX-Puro vector (from Clontech). The lentivirus for ERb, RORA, and empty control were expressed using Lenti-X ™ Lentiviral Expression Systems (from Clontech) and concentrated according to manufacturers' instructions (26).
DNA Methylation Analysis
The DNA methylation on the OXT promoter was evaluated using a methylation-specific PCR-based method as described previously with minor modifications (31)(32)(33). The mouse genomic DNA was extracted and purified from primary hypothalamic neurons, and then treated by bisulfite modification through EpiJET Bisulfite Conversion Kit (#K1461, from Fisher). The treated DNA was then amplified using the following primers: Methylated primer: forward 5'-tga aaa ata gtt ttt ggt tag ggc -3' and reverse 5'-ctc tta aat caa att att cca cgc t -3'; Unmethylated primer: forward 5'-gaa aaa tag ttt ttg gtt agg gtg t -3' and reverse 5'-ctc tta aat caa att att cca cac t -3'. Product size: 198bp (methylated) & 197bp (unmethylated); CpG island size: 227bp; Tm: 68.4°C. The final DNA methylation results were normalized by DNA unmethylated results as input.
In Vivo Mouse Experiments
Generation of neuron-specific OXT knockout mice. The OXT fl/fl mouse, which has loxP flanking sites targeting exon 3 of the OXT gene, was generated by in vitro fertilization and was obtained for this study as a generous gift from Dr. Haimou Zhang (Hubei University). The Oxytocin-Ires Cre mice (Oxt Cre , #024234), which expresses Cre recombinase under the control of the oxytocin promoter, was obtained from Jackson Laboratories. To generate neuron-specific OXT -/null mice (Oxt Cre -OXT fl/fl ), OXT fl/fl mice were cross-bred with Oxt Cre mice for over 4 generations on the C57BL/6J background. Positive offspring were confirmed by genotyping through PCR using specific primers (see Table S1) for the presence of both loxP sites within OXT alleles and Cre recombinase (34,35). The experimental animals were either OXT wild type (WT) or OXT null (OXT -/-) mice with C57BL/6J genetic background as described above.
Mouse Protocol 1: Prenatal treatment by progestin MPA or OXT deficiency. Female mice (3-month old) were mated with males, and the pregnant dams were verified, then received either MPA treatment (20 mg/kg body weight, which is similar or equal to high-dose of women exposure) or control (CTL) group that received vehicle only, which containing 1% ethanol in organic sesame oil, and 0.1 ml of drugs were given every 2 days by peritoneal injection from day 1 until offspring delivery for~21 days in total. The above treated dams were then randomly assigned to the below 4 groups: Group 1: OXT WT background dams receiving CTL injection (CTL/WT); Group 2: OXT WT background dams receiving MPA injection (MPA/ WT); Group 3: OXT null background dams receiving CTL injection (CTL/OXT -/-); Group 4: OXT null background dams receiving MPA injection (MPA/OXT -/-). 10 dams were assigned for each group, and one representative offspring was selected randomly from each dam for experiments and analysis. Nine representative offspring were selected from the 10 in total in order to account for potential death of an experimental animal during the process. Hypothalamic neurons from PVN area were isolated on embryonic day 18 (E18), and the offspring were then fed by normal chow until 7-8 weeks old, after which they were given behavior tests. The offspring were then sacrificed; the serum and CSF were collected for OXT analysis and various brain tissues, including the amygdala, hypothalamus (PVN area) and hippocampus, were isolated for further biological assays, including gene expression and oxidative stress.
Mouse Protocol 2: Postnatal manipulation of ERb/RORA lentivirus-carried expression. At 6-week of age, offspring of OXT wild type background that received either the CTL or MPA treatment as described in Mouse Protocol 1 were anesthetized by a mixture of ketamine (90 mg/kg) and xylazine (2.7 mg/kg) and implanted with a guide cannula targeting the PVN area by the direction of an ultra-precise stereotax (Kopf Instruments) using the coordinates of 0.85 mm posterior to the bregma, 0.15 mm lateral to the midline, and 4.8 mm below the skull surface (36). The lentivirus for expression of ERb (↑ERb), RORA (↑RORA), or empty (EMP) was infused immediately by a flow rate of 0.5 µl/h after placement of the cannula and minipump, and in total, 0.5ml of (2×10 3 cfu) lentivirus was infused in 1 hour, and the lentivirus was dissolved in artificial cerebrospinal fluid (aCSF), which containing 140 mM NaCl, 3 mM KCl, 1.2 mM Na2HPO4, 1 mM MgCl2, 0.27 mM NaH2PO4, 1.2 mMCaCl2, and 7.2 mM dextrose in pH 7.4. The experimental animals were randomly separated into the following 4 groups (10 mice each group). Group 1: CTL treated offspring received vehicle lentivirus infusion (CTL/P-EMP); Group 2: MPA treated offspring received vehicle lentivirus infusion (MPA/P-EMP); Group 3: MPA treated offspring received ERb lentivirus infusion (MPA/P-↑ERb); Group 4: MPA treated offspring received RORA lentivirus infusion (MPA/P-↑RORA). To confirm a successful lentivirus injection into PVN area, the cannula placement was checked histologically postmortem by injection of 0.5ml India ink. Animals whose dye injections were not located in the PVN area were excluded from the final analysis, and the offspring were used for behavior tests after two-week of lentivirus infusion followed with biological assays as indicated in Mouse Protocol 1 (37).
Mouse Protocol 3: Postnatal administration of OXT peptides. The offspring (6-week old) from Mouse Protocol 1 were anesthetized and implanted with a guide cannula targeting the third ventricle at the midline coordinates of 1.8 mm posterior to the bregma and 5.0 mm below the skull surface (36). Two weeks were allowed for mice to recover from surgery, and each mouse then received injection with either aCSF as vehicle (VEH) control or oxytocin peptide (OXT, dissolved in aCSF) via pre-implanted cannula (36,38). The experimental animals were then randomly separated into the following 4 groups (10 mice each group). Group 1: CTL treated offspring received vehicle injection (CTL/ P-VEH); Group 2: MPA treated offspring received vehicle injection (MPA/P-VEH); Group 3: CTL treated offspring received OXT peptide injection (MPA/P-OXT); Group 4: MPA treated offspring received OXT peptide injection (MPA/P-OXT). The oxytocin (0.1 mM, diluted in aCSF, 1 mg/20ml aCSF) or vehicle was locally administered via the installed catheter (39). 20 min (including a period for 5 min-adaptation in the test cage) after the injection, the offspring were used for behavior tests followed by biological assays, as indicated in Mouse Protocol 1 (37).
Animal Behavior Tests
The animal behavior tests were evaluated at ages of 7-8 weeks old from offspring unless otherwise mentioned. Anxiety-like behavior was determined by the marble-burying test (MBT) and the elevated plus maze (EPM) tests (7). Autism-like behavior was determined by ultrasonic vocalization (USV), social interaction (SI) test and a three-chambered social test (40)(41)(42), and the details for these tests are described in Supplementary Information.
Isolation of Brain Tissues
The brain tissues were isolated from experimental offspring for further biological assays. The experimental mouse was deeply anesthetized through free breathing of isoflurane vapor (> 5%). The whole blood was then withdrawn by heart puncture for PBMC isolation and the mouse was perfused transcardially by 20 ml cold perfusion solution for 5 min. The skull was cut using a pair of small surgical scissors and the brain was carefully freed from the skull before being transferred to a petri dish (60 mm×15 mm) that was filled with ice-cold DPBS solution. The targeted brain regions, including the amygdala, hypothalamus (PVN area) and hippocampus, were dissected under the surgical microscope under the referred location from the atlas outlined in The Mouse Brain in Stereotaxic Coordinates (3rd Edition). A separate petri dish was prepared for each of the target regions. The whole dissection process was carried out in the span of no more than one hour. The dissected tissues were then frozen at -80°C for either immediate use or later biological assays (43,44).
Collection of Cerebrospinal Fluid
The procedure for CSF collection is based on a previously established protocol with minor modifications. In brief, the mouse was anesthetized and the shaved head was clamped in place for dissection under a dissecting microscope. The layers of muscles were carefully dissected away using forceps and the dura over the cisterna magna was exposed. This area has large blood vessels running through, which is optimal for capillary insertion and CSF collection. The angle of the glass capillary was carefully adjusted and the sharpened tip of glass capillary was aligned and eventually tapped through the dura to collect CSF using a micromanipulator control. Approximately 20 µl of CSF was automatically drawn into the capillary tube once the opening was punctured. The glass capillary was gently removed from the mouse by micromanipulator control and the CSF was then mixed with 1 µl of 20x protease inhibitor in a 1.5 ml centrifuge tube for a quick centrifugation (pulse spin for 5 seconds at maximal speed), and the CSF samples were aliquoted for either immediate analysis or stored at -80°C (45).
In Vitro Primary Culture of Hypothalamic Neurons
The isolation of hypothalamic neurons was carried out following a previously described procedure with minor modifications. Three to five hypothalami from PVN area of mice on embryonic day 18 (E18 rats) were isolated, pooled, and then dissociated into single cell suspension by trituration. They were then transferred to a culture dish, which containing primary DMEM culture medium, 10% FBS, 10% heat-inactivated horse serum, 20mM D-glucose and combined antibiotics (from Invitrogen). The osmolarity of medium was then adjusted to 320-325 mOsm using glucose. The subsequent cell suspension was then split into tissue culture flasks that coated with 100mg/ml of poly-L-lysine (Sigma). 24 hours of incubation were allowed for cells to attach to the flask at 37°C with 5% CO2, the medium was then refreshed for cells to growth until confluent for further biological assays (46). The isolated primary hypothalamic neurons were used for in vitro cell culture study until passage 3. For mapping of progestin-responsive element on the OXT promoter, the cells were immortalized by an hTERT lentivirus vector for a longer life span (up to passage 12) to achieve better transfection efficiency and higher experimental stability as described previously (47,48).
Transient Progestin Treatment Causes Persistent OXT Suppression and Oxidative Stress; ERb Expression Completely, While RORA Expression Partly, Reverses This Effect
We first determined the possible effect of MPA treatment on OXT expression. Mouse hypothalamic neurons were treated by MPA for 3 days and then cultured for another 3 days in the absence of MPA, but with the infection of either ERb (↑ERb) or RORA lentivirus (↑RORA) for biological assays. Our results showed that 3-day MPA treatment significantly suppressed OXT mRNA levels and that OXT mRNA remained low after removal of MPA. Infection of ERb lentivirus completely, while RORA expression partly, reversed this effect (see Figures 1A, B). We also measured mRNA expression of these genes at the end of the treatment on day 6, and the results showed that lentivirus infection of either ERb or RORA was successful. Transient MPA treatment significantly suppressed expression of ERb, SOD2 and RORA, and the expression remained low during subsequent MPA absence (see Figure 1B). We then evaluated protein levels of these genes by either western blotting (see Figures 1C, D, S1A) or ELISA for OXT (see Figure 1E), and the expression pattern was similar to that of mRNA levels. In addition, we conduct immunostaining of OXT for the hypothalamic neurons that isolated from PVN area of mice, and the results showed that almost all the neurons had OXT expression (see Figure S2), indicating a successful OXT neuron preparation. We also evaluated the potential effect of MPA on OXTR expression and the results showed that MPA had no effect, while ERb expression significantly increased OXTR mRNA levels (see Figure S3). We then measured the effect of MPA on oxidative stress, and the results showed that MPA treatment significantly decreased SOD2 activity (see Figure 1F) and increased ROS formation (see Figure 1G) and 3-nitrotyrosine formation (see Figure 1H). Again, ERb expression completely, while RORA expression partly, reversed this effect. Furthermore, we determined the potential effect of other progestins on OXT expression and epigenetic changes. The results showed that estrogen (E2), progesterone (P2) and NGM had no significant effect, while almost all transient treatments of progestin, including LNG, NES, NET, NETA, NEN and OHPC, induced persistent OXT suppression and increased H3K27me2 modification on the OXT promoter (see Table 1). We conclude that transient progestin treatment causes persistent OXT suppression and oxidative stress in hypothalamic neurons.
MPA Induces OXT Suppression by Epigenetic Modifications and Subsequent Dissociation of ERb and RORA From the OXT Promoter
We evaluated the potential molecular mechanism for MPAinduced OXT suppression. The conditionally immortalized hypothalamic neurons from PVN area were transfected by either OXT full length (pOXT-2000) or deletion reporter constructs and then treated by MPA for luciferase reporter assay. Our results showed that MPA-induced OXT suppression had no significant changes in the constructs of -2000, -1600, -1200, -800, -600, -400 and -200, while the suppression was significantly diminished in deletion constructs of -100 and -0, indicating that the MPA-responsive element is located in the range of -200~-100 on the OXT promoter (see Figure 2A). We then searched all the potential binding motifs in the range of -200~-100 on the OXT promoter and found that there were two RXRa motifs at -188 and -105, two estrogen response element (ERE) motifs at -182 (marked in red) and -169, one motif for RORA at -163 (marked in red) and one for p53 at -135, respectively (see Figure 2B). We then mutated these potential binding motifs respectively in the OXT full length reporter constructs and transfected them for reporter assay. The results showed that single mutants (marked in green, see Figure 2B) of ERE at -162 (M-182/ERE) and RORA at -163 (M-163/RORA) significantly diminished MPA-induced OXT suppression, while other single mutants had no effect (see Figure 2C) completely, reversed MPA-induced suppression. ERb expression completely, but RORA expression partly, reversed MPA-induced suppression (see Figure 2D). We also evaluated the binding ability of these motifs by ChIP techniques, and the results showed that MPA treatment significantly decreased the binding abilities of ERb and RORA on the OXT promoter. Again, ERb expression completely, but RORA expression partly, reversed MPA-induced suppression (see Figure 2E). We finally evaluated MPA-mediated epigenetic changes on the OXT promoter by ChIP techniques. The results showed that MPA treatment significantly increased H3K27me2 modifications on the OXT promoter, but had no effect on H3K9me2, H3K9me3 or H3K27me3. ERb expression completely, while RORA expression partly, reversed this effect (see Figure 2F).
In addition, we found that MPA treatment had no effect on the OXT promoter for DNA methylation (see Figure S4), histone 4 methylation (see Figure S5A) and histone 3 acetylation (see Figure S5B). We conclude that MPA induces OXT suppression by epigenetic modifications and the subsequent dissociation of ERb and RORA from the OXT promoter.
Prenatal OXT Deficiency Mimics Prenatal MPA Exposure-Mediated OXT Suppression and Oxidative Stress
We determined the effect of prenatal OXT deficiency on prenatal MPA exposure-mediated OXT suppression and oxidative stress. The OXT wild type (WT) or OXT null (OXT -/-) background dams were exposed to either control (CTL) or MPA and the hypothalamic neurons or tissues from PVN area of offspring were isolated for analysis. We first evaluated gene expression in hypothalamic tissues, and found that MPA exposure significantly decreased mRNA levels of ERb, SOD2, RORA and OXT in hypothalamic tissues. Prenatal OXT deficiency showed no further effect, although it decreased OXT mRNA levels in the control (CTL) group (CTL/OXT-/-), indicating that OXT knockdown in these animals was successful (see Figure 3A). We also measured protein levels for the genes through either western blotting (see Figures 3B, C, S1B) or ELISA for OXT (see Figure 3D), and the expression pattern was similar to that of mRNA levels. In addition, we measured gene expression in tissues of both the amygdala (see Figure S6A) and hippocampus (see Figure S6B), and the results showed that MPA exposure decreased mRNA levels of ERb, SOD2 and RORA in the amygdala but had no effect in the hippocampus. OXT knockdown showed no further effect. We also evaluated the effect of MPA and OXT deficiency on oxidative stress in hypothalamic tissues, and the results showed that prenatal MPA exposure significantly increased superoxide anion release (see Figure 3E) and 8-oxo-dG formation (see Figures 3F, G), while prenatal OXT deficiency showed no effect. We then evaluated OXT peptide levels in both the CSF (see Figure 3H) and serum (see Figure 3I), and found that prenatal MPA exposure significantly decreased OXT levels, and prenatal OXT deficiency achieved a further decrease. We conclude that prenatal OXT deficiency mimics prenatal MPA exposure-mediated OXT suppression and oxidative stress.
Prenatal OXT Deficiency Partly Mimics Prenatal MPA Exposure-Mediated Social Deficits in Mouse Offspring
We determined the potential effect of prenatal MPA exposure and OXT deficiency on animal behaviors. We first evaluated anxiety-like behaviors, and our results showed that offspring in the prenatal MPA exposure (MPA/WT) group buried less marbles in the marble-burying test (MBT) test (see Figure 4A) and spent less time in the Open Arm and more time in Closed Arm during the elevated plus maze (EPM) test (see Figure 4B) compared to the control (CTL/WT) group. We then evaluated autism-like behaviors, and the results showed that mice in the MPA/WT group had fewer ultrasonic vocalizations in the USV tests (see Figure 4C) and spent significantly less time sniffing, mounting and interacting in total during the social interaction (SI) tests (see Figure 4D). They spent less time sniffing in the Stranger 1 side and more time in the Empty side for sociability (see Figure 4E); additionally, they spent more time in the Stranger 1 side and less time in the Stranger 2 side for social novelty (see Figure 4F) during the three-chambered social test compared to the CTL/WT group. OXT deficiency had no effect on the MBT, EPM or USV tests, while it slightly decreased sniffing and total interaction time in the SI test and slightly decreased social ability and social novelty in the threechambered social tests. We conclude that prenatal OXT deficiency partly mimics prenatal MPA exposure-mediated social deficits in mouse offspring.
Postnatal ERb Expression Completely, While Postnatal RORA Expression Partly, Reverses Prenatal MPA Exposure-Mediated OXT Suppression and Oxidative Stress in Offspring
Pregnant dams were given either control (CTL) or MPA treatment, and the subsequent offspring received either empty (EMP), ERb (↑ERb) or RORA (↑RORA) lentivirus in the PVN area before then being sacrificed for analysis. We first determined gene expression in the hypothalamic tissues that isolated from PVN area of offspring, and found that infection of either ERb or RORA lentivirus significantly increased mRNA levels, respectively, indicating a successful gene manipulation. Additionally, ERb expression (MPA/P-↑ERb) completely reversed MPA exposure-mediated gene suppression of ERb, SOD2, RORA and OXT. RORA expression (MPA/P-↑RORA) showed no effect on ERb and SOD2, while it partly reversed MPA exposure-mediated OXT suppression (see Figure 5A). We also measured protein levels for the genes using either western blotting (see Figures 5B, C, S1C) or ELISA for OXT (see Figure 5D), and the expression pattern was similar to that of mRNA levels. Moreover, we measured gene expression in the other brain regions, and the results showed that ERb expression completely reversed MPA exposure-mediated gene suppression of ERb, SOD2 and RORA in the amygdala, while RORA expression showed no effect (see Figure S7A). Neither prenatal MPA exposure nor postnatal gene manipulation showed any effect on gene expression in the hippocampus (see Figure S7B). We also evaluated the effect of MPA exposure and postnatal gene manipulation on oxidative stress in hypothalamic tissues, and the results showed that postnatal ERb expression completely, while RORA expression partly, reversed prenatal MPA exposuremediated increased superoxide anion release (see Figure 5E) and 8-OHdG formation (see Figure 5F). We then evaluated OXT peptide levels in both the CSF (see Figure 5G) and serum (see Figure 5H), and the results showed that postnatal ERb expression completely, while RORA expression partly, reversed prenatal MPA exposure-mediated OXT suppression. We conclude that postnatal ERb expression completely, while postnatal RORA expression partly, reverses prenatal MPA exposure-mediated OXT suppression and oxidative stress in offspring.
Postnatal ERb Expression Partly
Ameliorates Prenatal MPA Exposure-Mediated Social Deficits in Mouse Offspring, While Postnatal RORA Expression Has no Effect We evaluated animal behaviors of offspring with prenatal MPA exposure and postnatal gene manipulation. Our results showed that postnatal expression of either ERb or RORA showed no effect on MPA exposure-mediated anxiety-like behaviors, as measured using the marble-burying test (MBT) test (see Figure 6A) and elevated plus maze (EPM) test (see Figure 6B). We also evaluated autism-like behaviors, and the results showed that postnatal expression of either ERb or RORA showed no effect on MPA exposure-mediated decreased ultrasonic vocalization in USV tests (see Figure 5C). On the other hand, postnatal ERb expression partly ameliorated MPA exposure-mediated impaired social interaction, including sniffing and total interaction time, as measured in the social interaction (SI) tests (see Figure 6D). Additionally, it partly ameliorated MPA exposure-mediated impaired sociability (see Figure 6E) but not social novelty (see Figure 6F) during the three-chambered social test. Postnatal RORA expression showed no effect on MPA exposure-mediated behaviors in offspring (see Figures 6C-F). We conclude that postnatal ERb expression partly ameliorates prenatal MPA exposure-mediated social deficits in mouse offspring.
Postnatal Injection of OXT Peptide Partly Reverses Prenatal MPA Exposure-Mediated Social Deficits in Mouse Offspring
Pregnant dams were treated with either control (CTL) or MPA, and the subsequent offspring received either vehicle (VEH) or OXT peptide injection through the third ventricle for biological assays. We first determined gene expression in hypothalamic tissues that isolated from PVN area, and found that OXT peptide showed no effect on MPA exposure-mediated gene suppression of ERb, SOD2, RORA or OXT (see Figure 7A). We also measured OXT peptide levels after OXT injection, and found that postnatal OXT injection significantly increased OXT levels in the CSF compared to the control (CTL/P-VEH) group (see Figure 7B) and also partly reversed MPA exposure-mediated decreased OXT serum levels (see Figure 7C). We evaluated animal behaviors in the offspring, and our results showed that postnatal OXT injection showed no effect on MPA exposuremediated anxiety-like behaviors, as measured through the marble-burying test (MBT) test (see Figure 7D) and elevated plus maze (EPM) test (see Figure 7E). We also evaluated autismlike behaviors, and the results showed that postnatal OXT injection showed no effect on MPA exposure-mediated decreased ultrasonic vocalization in USV tests (see Figure 7F). On the other hand, postnatal OXT injection partly ameliorated MPA exposure-mediated impaired social interaction, as indicated through sniffing and total interaction time during the social interaction (SI) tests (see Figure 7G). Additionally, it partly ameliorated MPA exposure-mediated impaired sociability (see Figure 7H) but not social novelty (see Figure 7I) during the three-chambered social test. We conclude that postnatal OXT injection partly ameliorates prenatal MPA exposure-mediated social deficits in mouse offspring.
DISCUSSION
In this study, we found that transient progestin treatment triggers persistent epigenetic changes and OXT suppression in hypothalamic neurons. Prenatal MPA exposure induces OXT suppression, oxidative stress and social deficits in offspring. OXT knockdown mice partly mimics, while postnatal ERb expression or postnatal OXT peptide injection partly ameliorates, prenatal MPA exposure-mediated social deficits in mouse offspring.
Effect of Prenatal Progestin Exposure
Our in vitro study in hypothalamic neurons showed that transient progestin treatment induces persistent epigenetic modifications even after removal of progestin and subsequently dissociates both ERb and RORA from the OXT promoter, triggering OXT suppression. The in vivo study in mouse models showed that prenatal MPA exposure induces OXT suppression, partly contributing to social deficits in mouse models. In addition, the regular MPA dose for treatment of women contraception is reported as 150mg (49), and the high MPA dose for tumor suppression is in the range of 400-2000mg daily (50), and those doses can be calculated as 2.5-33.3mg/kg body weight if the average weight of women is considered as 60kg. Given the fact that the practical human exposure time can be 3 months (first trimester is most sensitive for ASD development) or more during the pregnancy (10, 51), while the exposure time of pregnant dams is much less, can only reach to 21 days in maximum, we finally chose MPA dose of 20mg/kg body weight for prenatal treatment of pregnant dams to mimic the possible high dose of MPA for human exposure. Furthermore, our in vitro and in vivo study showed that progestin exposure induces suppression of ERb and RORA in addition to OXT suppression, which is consistent with our previous finding in rat models (7,8), indicating that ERb may play an important role in prenatal progestin exposure-mediated social deficits in mouse offspring. In addition, our results showed that prenatal progestin exposure triggers social deficits in rodents, which is consistent with previous reports that maternal hormone exposure is a potential risk factor for ASD (52,53), modulating a neurogenic response and social recognition during development (54,55).
Role of ERb and RORA in OXT Expression
Our in vitro study showed that the OXT promoter has the potential binding sites of ERb and RORA, which are responsible for progestin treatment-mediated OXT suppression. This indicates that RORA may also play a role in OXT expression that adds to the significant effect of ERb, which is consistent with previous findings that RORA plays a critical role in embryo development (35) and is associated with autism development (56 dissociation from ERE binding motif on the OXT promoter, triggering OXT suppression. Furthermore, the progestinresponsive ERE binding motif identified in this work is different with previous study (57), and the progestin exposure-mediated OXT suppression is epigenetic modification-based persistent suppression. In this study, a novel mechanism for progestinmediated OXT suppression through ERb and RORA is reported.
Role of OXT and Social Deficits
OXTR is expressed in a variety of human tissues and is highly expressed in limbic regions such as the amygdala (12,20). It has been reported that the OXT/OXTR signaling pathway plays a role in regulation of a variety of social behaviors (11,16) as well as ASD etiology (18,19,59) and is involved with anxiety-like behaviors (13,14). Our results showed that OXTR expression does not change in response to progestin treatment, while OXT expression is reduced persistently. Furthermore, prenatal OXT deficiency in OXT knockdown mice partly mimics prenatal MPA exposure-mediated social deficits, including impaired social interaction and social ability, but showed no effect on anxietylike behaviors, as measured in MBT and EPM tests. Furthermore, postnatal expression of ERb in the PVN area or through postnatal OXT peptide injection in the third ventricle partly ameliorates prenatal MPA-exposure-mediated social deficits; again, there is no effect on anxiety-like behaviors. This can be partly explained through the hypothesis that postnatal OXT manipulation is only effective in certain OXT-responsive areas, but cannot mimic the whole endogenous OXT-responsive area (60). However, it is clear that OXT peptides do have some effect on modulating social behaviors in mouse offspring. On the other hand, recent placebo-controlled trial using intranasal OXT therapy showed no significant effect on ASD children and adolescents, which can be explained because intranasal OXT administration may not reach sufficient OXT concentrations in OXT-responsive areas of the central nervous system (61).
CONCLUSIONS
Transient progestin treatment induces epigenetic changes, triggering persistent OXT suppression. Postnatal ERb expression in hypothalamic regions or postnatal OXT peptide injection partly ameliorates postnatal MPA exposure-mediated impaired social interaction and social abilities in mouse offspring. We conclude that maternal progestin exposuremediated oxytocin suppression contributes to social deficits in mouse offspring.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/Supplementary Material. Further inquiries can be directed to the corresponding authors.
ETHICS STATEMENT
The animal study was reviewed and approved by The Institutional Animal Care and Use Committee from Foshan Maternity & Child Healthcare Hospital at Southern Medical University. | 7,758.8 | 2022-03-15T00:00:00.000 | [
"Medicine",
"Biology"
] |
The Effect of Omega-3 Rich Fish Oil on the Kidney Changes in Mice Induced by Azoxymethane and Dextran Sodium Sulfate
The Effect of Omega-3 Rich Fish Oil on the Kidney Changes in Mice Induced by Azoxymethane and Dextran Sodium Sulfate. 259-266. Background: The study aimed to investigate the effect of omega-3 rich fish oil to kidney of mice induced by Azoxymethane (AOM) and DSS using histopathology parameters. Method: The experimental mice were induced using 10 mg/kg AOM and 2% DSS for 2 weeks randomly allocated randomly into four groups as follows; Control Group: mice that not received fish oil, Low Dose Group: mice that received 1.5 mg/day fish oil, Medium Dose Group: mice that received 3 mg/day fish oil, and High Dose Group: mice that received 6 mg/day fish oil. The omega-3 rich fish oil was given for 12 weeks. Result: The administration of high dose omega-3 rich fish oil was able to reduced necrosis and inflammation foci compared to the control group (p<0.05). Furthermore, the administration of low, medium, and high dose omega-3 rich fish oil was able to significantly reduced vascular edema and cell degeneration foci (p<0.05). The administration of medium and high dose of omega-3 rich fish oil were able to reduce the amount of fibrosis foci compared to the control group (p<0.05) compared to the control group. Conclusion: The result suggested anti-nephrotoxic effect of omega-3 rich fish oil in mice induced by azoxymethane and DSS. words
INTRODUCTION
Colorectal cancer is a type of malignancy with a high incidence. According to cancer statistics in 2012, colorectal cancer has the third highest incidence with a high mortality rate. In fact, in developed countries the number of new cases of colorectal cancer has exceeded the number of new cervical cancers in women. 1 Colorectal cancer occurs due to the interaction of external and internal factors. External factors include environment and food, while internal factors are genetic differences of each individual. Modeling of colorectal cancer in animals can be reached by inductions of Azoxymethane (AOM) and dextran sodium sulfate. In addition to its effects on colon, azoxymethane is also known to have an effect on the liver, lungs and kidneys. 2 Azoxymethane is known to be nephrotoxic and carcinogenic in the kidneys. 3 The histopathological features of the kidneys of mice given azoxymethane are very diverse, ranging from inflammatory cell infiltration, tubular and glomerular degeneration, to the occurrence of renal carcinoma. 3 Two types of renal carcinoma most commonly found in mice given azoksimetan are renal cell carcinoma and renal medullary carcinoma. 2,3 One factor that can influence the occurrence of colorectal cancer is food intake. Low-fiber and highfat diets are known to be risk factors for colorectal cancer. In addition, read meat and burnt food are also risk factor for colorectal cancer. In contrast, a diet with sufficient fiber and high omega-3 fatty acids is a protective factor for colorectal cancer. 4 The Ministry of Trade lists 10 potential Indonesian commodities, with fish and its processed goods being one of them. 5 One of the processed fish products that have potential in the health world is fish oil rich in omega-3. Omega-3 is a substance that has been shown to have immunomodulatory effects in the human body by reducing oxidative stress and potentiating the antioxidant system. 6 Although the amount is abundant in Indonesia and its effects are generally tested, the use of omega-3 fish oil by the Indonesian people is still very limited.
One study has proven the positive benefits of omega-3 administration in colorectal carcinogenesis by AOM and DSS. 6 Although it is well known that AOM and DSS can also cause damage to other organs such as the kidneys, there are still no studies examining the effects of omega-3 in preventing kidney damage. by AOM and DSS. 2,3 However, several studies have shown positive effects of omega-3 administration on kidney damage induced by other substances, such as cyclosporine-A and takrolimus. 7,8 Therefore, researchers want to know the effects of omega -3 in preventing kidney damage by AOM and DSS.
according to the Guide for the Care and Use of Laboratory Animals of Animal Care and Use Committee. Mice are maintained under controlled temperature conditions of 25 ° C, humidity of 55% with a light / dark cycle of 12 hours. All mice were feeded using standard food and drink ad libitum mineral water.
Treatment
Samples are kidneys from experimental animals that have been given intraperitoneal azoxymethane 10 mg / kg. A week later the mice were given standard food and drinks containing 2% dextran sodium sulfate every day for a week. After administering azoxymethane and dextran sodium sulfate, mice were given omega-3-rich fish oil every day for 10 weeks. Mice will be sacrificed 12 weeks after induction.
Male Balb / c mice are divided into 4 groups, namely: A group of positive control mice induced with AOM and DSS, then not given omega-3-rich fish oil.
Groups of mice induced with AOM and DSS, then given low-dose omega-3 fish oil (1.5 mg per day).
A group of mice induced with AOM and DSS, then given a medium dose of omega-3 fish oil (3 mg per day).
Mice group induced with AOM and DSS, then given high-dose omega-3 fish oil (6 mg per day).
Azoxymetane and dextran sodium sulphate induction and omega-3 rich fish oil treatment
Azoxymethane is given in a single dose in mice through intraperioneal injection at a dose of 10 mg / kg body weight dissolved in 0.9% NaCl solution. Then, for one-week mice were given standard feed and drink mineral water. The following week the drink was replaced with mineral water containing 2% sodium sulfate dextran. 9 Administration of fish oil rich in omega-3 is done by force-feeding group mice treated with low doses (1.5 mg), medium (3 mg), and high (6 mg) every day. Administration of fish oil rich in omega-3 starts at the end of the administration of dextran sodium sulfate (end of week 2) until mice are sacrificed at 12 weeks after induction with azoxymethane ( Figure 1). In addition, every 4 weeks the weight of mice is weighed to find out the body weight index. Body weight index is calculated by dividing the weight of the mice in the treatment group with the control group mice.
Tissue preparation
Mice were sacrificed after 12 weeks after administration of azoxymethane. Kidney mice are taken and cleaned with water, then then fixed by using 10% formalin phosphate buffer and ready to be made histopathological preparations.
Hematosiklin-Eosin (HE) staining
Kidney tissue pieces were made from Fixed Parafin Formalin Embedded (FFPE) and cut 4 μm thick slices and glued to the object glass for HE coloring. HE staining is done by the following steps. Preparation is initialized with three sequential dips in xylol for 5 minutes each followed by rehydration process using grading concentrations from absolute alcohol, 96% and 70% for 5 minutes each. The preparation was soaked in running water for 5 minutes. The preparation was dipped in a hematocycline solution (Meyer's solution) for 7 minutes and rinsed with running water for 10 minutes. The preparation is dipped in 2-3 saturated lithium carbonate, then rinsed with running water for 5 minutes. If the blue color was not sufficient, the preparation is dipped back into the hematocycline solution for 2 minutes, then rinsed in running water. The preparation was soaked in eosin for 1-2 minutes, then dehydrated using gradual concentration of 70%, 80%, 96%, and absolute alcohol for 3 minutes each. Clearing the preparation with xylol I, II, and II, then dripping the paste.
Interpretation of histopathological observations
Observation of kidney tissue in mice was carried out in ten visual fields randomly with 100x magnification of kidney tissue. Nephrotoxicity signs were assessed, namely the degree of degeneration of cell degeneration and the degree of fibrosis in ten visual fields with 100x magnification in the renal cortex. Degrees of cell degeneration and vascular congestion follow the method in the study by Tan et al. For cell degeneration: 0 = no degeneration or at least one field of view; 1 = mild degeneration in ≤20% of cells in one field of view; 2 = moderate degeneration in 21-50% cells; 3 = severe degeneration in> 50% of cells in one field of view. For vascular congestion: 0 = no vascular congestion focusses or very minimal; 1 = mild congestion (congestion focus includes one location); 2 = moderate congestion (congestion focus is found in several locations); 3 = severe congestion (congestion in almost all fields of view). The preparations to be assessed are blinding by random numbering by a third party (unknown to researchers and supervisors). So that both researchers and mentors do not know which treatment group is being assessed. The assessment was carried out by the researcher which was then validated by the research supervisor.
Data analysis
Data analysis was carried out using SPSS version 20 statistical test software on the Windows 7 operating system. The results of histopathological observation in the control group and test group were compared by calculating the significance value or p value. Meaningful results were obtained if the significance value was p <0.05.
The effect of giving three doses of fish oil rich in omega-3 (1.5; 3; and 6 mg / day) to the occurrence of nephrotoxicity can be seen by looking at the degree of cell degeneration and vascular congestion. The data obtained is tested as an ordinal variable. Non-parametric Kruskal-Walli's analysis was performed to determine the difference in the degree of cell degeneration and significant vascular congestion between treatment groups. If the data obtained was statistically significant (p <0.05), the Mann-Whitney test was performed in each combination of treatment groups. Mann-Whitney test is used to find out which groups have significant differences with other groups.
Effects of omega-3 rich fish oil on kidney mouse cell degeneration
Histopathological observations on the degree of cell degeneration were performed at 100x magnification of kidney preparations with H & E staining. The degree of cell degeneration in each field of view from the 10 visual fields selected, is stated in low degree (1+), medium (2+), and heavy (3+). Cell degeneration in the renal cortex is characterized by enlarged cells and cloudy cytoplasm, and can be accompanied by vacuolization. (Figure 2) The data of cell degeneration degree in each field of view obtained were then statistically analyzed using the Kruskal-Walli's test to determine the significant differences between groups. Independent variables are the four groups of fish oil rich in omega-3, namely: 1) control; 2) low dose (1.5 mg / day); 3) moderate dose (3.0 mg / day), 4) high dose (6.0 mg / day). The dependent variable is the degree of cell degeneration per field of view, which is assessed in 10 fields of view (x100) for each mouse on histopathological examination.
The Kruskal-Walli's test showed significant differences in the degree of cell degeneration between treatment groups (p = 0.003), with the mean rank of cell degeneration degrees 98.65 for the control group, 83.85 for The degree of cell degeneration between treatment groups was then compared using the Mann-Whitney test. Mann-Whitney test results showed a significant difference in the degree of cell degeneration between the control and moderate dose groups (p = 0.005) and between the control and high dose groups (p = 0.001). There was no significant difference in the degree of cell degeneration between the Control and Low Dosage groups (p = 0.066). There is no significant difference between giving a dose with another dose to the degree of cell degeneration.
Effects of omega-3 rich fish oil on vascular congestion of kidney mice
Histopathological observations on the degree of vascular congestion were performed at 100x magnification of kidney preparations with H & E staining. The degree of vascular congestion in each field of view from the 10 visual fields selected is expressed in low degrees (1+), moderate (2+), and weight (3+). Vascular congestion in the renal cortex is characterized by a buildup of red blood cells in the blood vessels. (Figure 3) Data on the degree of vascular congestion in each field of view obtained were then analyzed statistically using the Kruskal-Walli's test to determine the significant differences between groups. Independent variables are the doses of fish oil rich in omega-3, namely: 1) control; 2) low dose (1.5 mg / day); 3) moderate dose (3.0 mg / day), 4) high dose (6.0 mg / day). The dependent variable is the degree of vascular congestion per field of view, which is assessed in 10 fields of view (x100) for each mouse on histopathological examination.
The Kruskal-Walli's test showed a significant difference in the degree of degeneration of vascular congestion between treatment groups (p = 0.017), with the mean rank was 99.26 for the control group, 74.76 for
Description of other histopathological changes in the kidney
In addition to cell degeneration and vascular congestion which is used as a variable in determining the effects of fish oil rich in omega-3, there are also several other histopathological features, namely necrosis and inflammation in kidney tissue of mice. Both of these histopathological features occurred in some mice in the control group, low doses, and moderate doses. Necrosis occurs in some kidney tissues from 3 mice in the control group, 1 low-dose group mice, and 1 high-dose group mice. Inflammation occurred in part of the kidneys of mice from 1 control group mice and 1 low-dose mice. (Figure 4)
Effects of omega-3 on the degrees of kidney degeneration postinduction AOM / DSS
The results showed that medium and high doses of omega-3 fish oil were able to reduce the degree of cell degeneration in the kidneys of mice induced by AOM and DSS. Administration of low doses of omega-3 fish oil cannot significantly reduce the number of fibrotic foci in the kidneys of mice. The effect of fish oil rich in omega-3 doses in reducing the degree of cell degeneration is not better than high doses, and vice versa.
Cell degeneration can occur due to damage in mitochondria, cessation of ATP production, and sodium pump failure causes an increase in intracellular osmotic pressure. 10 This change leads to increased permeability of the cellular membrane. 10 Mitochondrial damage can be caused by azoxymethane and its metabolites. 11 In addition, the formation of free radicals during the inflammatory process can also cause cell degeneration. 10 Several studies have studied the effects of omega-3 antioxidants on the kidneys. Garrel et al showed that omega-3 increased the activity of the enzyme dismutase superoxide in mitochondria. In a study that studied cadmium-induced nephrotoxicity, administration of omega-3-rich fish oil improved mitochondrial function. 12 Free radicals can cause fat peroxidation in cell membranes. Cell membranes which are mostly composed by fat become impaired in function. Increased permeability of cell membranes, so that sodium and water can enter more easily and cause cell swelling. 13 Omega-3 oils are known to not increase the occurrence of fat peroxidation, but are thought to reduce damage to cells due to fat peroxidation. 14 In addition to increasing the number of free radicals, tissue hypoxia also causes cell degeneration. Azoximetan is known to cause vascular changes in kidney tissue in the form of dilation, edema, and congestion. 3 These three changes, especially congestion, can reduce the supply of oxygen to kidney tissue and cause hypoxia. Hypoxia causes reduced ATP production, which is needed to maintain cell osmolarity through the Na + / K + . 15 pump function Omega-3 is known to improve hypoxia in kidney tissue that has been chronically damaged through immunomedulatory pathways. 16 TGF-β is also associated with the occurrence of cell degeneration through apoptotic pathways and transdifferentiation in chronic kidney disease. 17 Azoxymethane and its metabolites can trigger an increase in Omega-3 TGF-β. 18 receptors known to reduce the expression of TGF-β receptor gene in kidney cells. 19 The results of this study were consistent with the results of previous studies that showed impact of omega-3 administration to reduce the degree of cell degeneration in the kidneys through various mechanisms.
Omega-3 effect on the degree of vascular congestion after AOM / DSS induction
The results showed that the administration of fish oil rich in omega-3 doses of low, medium, and high can reduce the degree of vascular congestion in the kidneys of mice induced by azoxymethane and DSS. There is no difference between administering a dose with another dose in reducing the degree of vascular congestion.
Vascular congestion can be caused by active or passive processes. Active processes that can cause vascular congestion include inflammatory reactions. While the passive process of venous congestion is caused by obstruction of venous return. Renal tubular cell swelling due to hydropic degeneration can contribute to venous return flow obstruction. 15 Enlarged tubular cells can suppress surrounding blood vessels and cause obstruction. During inflammatory reactions, vasoactive amines such as histamine and serotonin, and also eicosanoids such as prostaglandin E2 and leukotriene B4 were relased by macrophages to modify local vascularization. Macrophages and endothelial cells can also release NO in response to pro-inflammatory cytokines. Mediators this can cause vasodilation and increase the permeability of capillary blood vessels that transfer of blood plasma from vessels to tissues. 20 Azoxymethane can increase NO secretion by endothelial cells by increasing the enzyme expression of inducible-NO synthase (iNOS). 21 In addition, azoxymethane also increases the expression of the cyclooxygenase-2 (COX-2) gene that plays a role in prostaglandin production. 22 Another study has shown a dose dependent between DHA administration (one type of omega-3) to iNOS reduction. 23 In this study no dose-dependent relationship between vascular congestion and omega-3-rich fish oil was found. This can be caused by the extraction of fish oil which is not fully composed of DHA, and also the mechanism of iNOS which is not directly related to the occurrence of vascular congestion.
The results of this study are consistent with several other studies that show that administration of omega-3 can reduce vascular congestion in kidney tissue through various pathways described above.
Description of other histopathological changes in kidney mice
Inflammation and necrosis found in the kidneys of mice in accordance with the results of previous studies regarding the effects of azoxymethane on renal histopathology of mice. 2,24,25 Inflammation was found in the kidney tissue of the control group mice more consistently (3 mice) than in the treatment group (1 lowdose mice group and 1 high-dose mice group). This indicates the effect of omega-3 in inhibiting inflammation. EPA and DHA can inhibit leukocyte chemotaxis, adhesin expression, adhesin-leukocyteendothelial interactions, production of arachidonic acid derivative mediators such as prostaglandins and leukotrienes, as well as the production of pro-inflammatory cytokines (TNF-α, IL-1, and IL-6). 6 The mechanisms of EPA and DHA in inhibiting inflammation have been widely studied, and it is concluded that there are a variety of pathways that can contribute to these effects. The first needle is through the GPR-120 receptor, where EPA and extracellular DHA can inhibit the activation of the NF-κB gene which functions to increase the synthesis of pro-inflammatory proteins. 26 The second needle is through intracellular EPA and DHA bonds to PPAR-γ which also inhibits NF-κB gene activation. 27 The third pathway is eicosanoids formation by EPA and DHA indirectly reduced the formation of inflammatory mediators of arachidonic acid derivatives, such as prostaglandins and leukotrienes. 28 The results of this study were consistent. These findings were consistent with the results of previous studies on immunoomega-3 modulator.
Necrosis was found in the kidney tissue of the control group mice (1 mouse) and the low dose group (1 mouse). No necrosis in kidney tissue was found in mice in the medium and high dose groups. These findings indicate the effect of fish oil rich in omega-3 in reducing the occurrence of necrosis in the kidneys of mice induced by AOM and DSS. Research by Singer et al demonstrated that administration of fish oil reduced the degree of necrosis in rat kidneys due to severe inflammatory processes and reperfusion after ischemia. 29 This finding is consistent with the results of previous studies on the effects of omega-3 in reducing necrosis in kidney tissue.
CONCLUSIONS
The conclusion of this study is: Administration of fish oil rich in omega-3 at moderate and high doses can significantly reduce the degree of cell degeneration on the kidneys of mice induced by azoxymethane and DSS.
Administration of fish oil rich in omega-3 at low, medium and high doses can significantly reduce the degree of vascular congestion in the kidneys of mice induced with azoxymethane and DSS. | 4,771.2 | 2022-09-05T00:00:00.000 | [
"Chemistry"
] |
In vivo measurement of skin heat capacity: advantages of the scanning calorimetric sensor
Measurement of the heat capacity of human tissues is mainly performed by differential scanning calorimetry. In vivo measurement of this property is an underexplored field. There are few instruments capable of measuring skin heat capacity in vivo. In this work, we present a sensor developed to determine the heat capacity of a 4 cm2 skin area. The sensor consists of a thermopile equipped with a programmable thermostat. The principle of operation consists of a linear variation of the temperature of the sensor thermostat, while the device is applied to the skin. To relate the heat capacity of the skin with the signals provided by the sensor, a two-body RC model is considered. The heat capacity of skin varies between 4.1 and 6.6 JK−1 for a 2 × 2 cm2 area. This magnitude is different in each zone and depends on several factors. The most determining factor is the water content of the tissue. This sensor can be a versatile and useful tool in the field of physiology.
Introduction
Temperature is the magnitude of major interest in the study of the thermal behavior of the human body. This magnitude relates the physiology of the human body to environmental conditions [1]. In addition, febrile states [2] and pathologies such as allergies [3], inflammations [4], skin cancer [5] or infections [6] produce detectable changes in temperature, which allow non-invasive monitoring. The human body can regulate its own temperature through energy exchange mechanisms [7].
The energy dissipated by the human body is also a magnitude of interest. Heat flux sensors can be used to directly measure the heat dissipation of the human body in localized areas. However, the most common technology is indirect calorimetry [8], which consists of measuring the volume of O 2 absorbed and CO 2 released by a subject to assess the metabolic rate at rest [9][10][11] or during an activity [12,13]. The energy response of the human body is variable. Heat flux can vary between 4 and 77 mWcm −2 depending on the subject and the activity performed [14,15].
Calorimetry also includes the study of various thermal properties, such as heat capacity, thermal diffusivity or thermal conductivity. This work deals with the measurement of heat capacity, which is usually determined by in vitro procedures. Until the 1980s [16], several studies measured in vivo the thermal inertia of the skin (a quantity that depends on the heat capacity), with the aim of finding its thermal conductivity. Later, differential scanning calorimetry (DSC) techniques were used to measure heat capacity in vitro [17]. DSC measurements have been efficient, so in vivo measurement of heat capacity has not been developed much.
In the mid-1990s, the use of the 3ω method for measuring heat capacities of solids and liquids became popular [18,19]. In the following decade, the use of this method to measure the thermal conductivity of skin in vivo was discussed [20]. In the last decade, sensors based on 3ω technology have been developed to measure the thermal conductivity of skin [21]. This technology is also able to determine the heat capacity of the skin in vivo with an appropriate signal processing [22]. In 2013, Webb R.C. began to investigate the 1 3 application of ultra-thin conformal arrays for skin thermal characterization [23]. In 2014, photonic devices were used to perform these measurements [24]. Later, devices consisting of arrays of metallic filamentary structures, constructed with gold and chromium conductors, which function as both sensor and actuator (technology very similar to the 3ω method), were used. Initially, the work of Webb et al. focused on the determination of thermal conductivity and thermal diffusivity, but in 2015 they presented results of heat capacities measured in vivo. From thermal diffusivity, they determined the heat capacity in different areas of the human body [22], with a thermal penetration depth of approximately 0.5 mm. In one of the latest works of this research group, the 3ω method was used [25]. These methods resulted in a patent in 2017 [26].
The calorimetric sensor used in this work has been built in our laboratory [27]. This sensor consists of a thermopile equipped with a programmable thermostat. The principle of operation consists of a linear variation of the temperature of the sensor thermostat, while the device is applied to the skin. By means of an appropriate treatment of the signals given by the sensor, we can determine the skin heat dissipation (W), the heat capacity (JK −1 ) and the thermal resistance (in KW −1 ) of a skin area of 4 cm 2 . The thermal penetration depth of these measurements varies from 3 to 4 mm. This paper begins with a brief description of the sensor and the measurement method. Heat capacity obtained in different areas of the skin in some subjects is presented in specific and absolute form, and our results are compared with those obtained by other authors. This sensor has the advantage of allowing non-invasive in vivo measurements. In addition, the possibility of programming the thermostat at the researcher criteria makes it possible to determine several thermal magnitudes of the skin simultaneously, and to study the thermal response of the skin to different excitations.
Calorimetric sensor
The calorimetric sensor consists of a measuring thermopile (part 2 in Fig. 1), located between an aluminum plate (part 1 in Fig. 1) and a thermostat (part 3 in Fig. 1). The measuring thermopile provides the calorimetric signal by Seebeck effect. The aluminum plate rests on the area where the heat capacity is measured: the skin of the human body, or a calibration surface. The thermostat consists of an aluminum block that contains a heating resistor and an RTD sensor (resistance temperature detector). The cooling system is based on a Peltier element, a heatsink and a fan (parts 4, 5 and 6 in Fig. 1). The perimeter of the sensor is surrounded by a thermal insulation of expanded polystyrene (part 7 in Fig. 1) which reduces external disturbances. For sensor calibration, a calibration base is used. This base consists of a block of insulating material (part 8 in Fig. 1) which has a copper plate with a resistor for Joule calibrations (part 9 in Fig. 1). In addition, the calibration base has a magnetic holding system (part 10 in Fig. 1) for easy handling of the sensor. A thermistor on the outside of the sensor (part 11 in Fig. 1) allows measuring the skin temperature in the vicinity of the sensor. The operation of the calorimetric sensor and its calibration is described in previous works [27].
Heat capacity measurement
To measure the heat capacity, the calorimetric sensor is operated as a scanning calorimeter. While the measurement is performed, the device is applied on the skin as shown in Fig. 2. The measurement procedure is as follows. Initially, the sensor is on the calibration base, and the thermostat is set at 26 °C (Fig. 1). When the sensor reaches the steady state, it is applied on the skin (Fig. 2) and the thermostat temperature is maintained at 26 ºC for 300 s. Next, a linear variation of the thermostat temperature from 26 to 36 ºC in 150 s (rate of 4 Kmin −1 ) is programmed. Then, the thermostat temperature is maintained at 36 ºC for 300 s. Finally, the sensor is returned to the calibration base, keeping the thermostat temperature constant at 36 ºC. Figure 3 shows an experimental measurement on the temple of a healthy 26-year-old male subject, seated and at rest. The thermostat temperature programming is shown in Using an error minimization method described in a previous work [27], the heat power ( Fig. 3c) and the heat capacity of the skin, C 1 , are determined. This method reconstructs the calorimetric signal and the thermostat temperature (Fig. 3a,b). For this purpose, a twobody RC model is used. This model is based on two heat capacities, connected to each other and to the outside, by thermal couplings of given thermal conductance. The first element represents the place where the heat dissipation occurs and the second one represents the sensor thermostat. The Joule calibrations performed on the calibration base allow the determination of all the parameters of this model.
To test the validity of the method for the heat capacity determination, five aluminum blocks of 4-mm thickness and known masses (0.67, 1.11, 1.56, 2.00 and 2.44 g) have been used. These blocks were placed between the calibration base and the sensor, programming a measurement similar to the one shown in Fig. 3, for each aluminum block. Figure 4 shows the relationship between the heat capacity (C 1 ) obtained with the calculation method and the real capacity of the aluminum blocks (C). As we can see, the heat capacity obtained follows the expression C 1 = 2.9 + C (JK −1 ), where C is the real capacity of the aluminum block and 2.9 JK −1 is the heat capacity of the calibration base. In calibrations performed on the calibration base in the ordinary way (without placing any aluminum block) this value, C 1 = 2.9 JK −1 , is obtained. The error in the adjustment is ± 0.03 JK −1 . Aluminum blocks have been used because of their high thermal conductivity. Thus, the identified model only changes in the heat capacity C 1 .
Heat capacity
Using the procedure described in the previous section, measurements were made on 6 subjects, whose anthropometric characteristics are shown in Table 1. The measurements were made on 6 different areas of the skin. The heat capacity measured by the calorimetric sensor varies between 4.1 and 6.6 JK −1 . Figure 5 shows the results obtained in each zone, as mean ± standard deviation. In all the zones the standard deviation is similar (≈ 5%). The thermal properties measured by the sensor are a combination of the properties of the tissues affected by the measurement. Therefore, each zone has a different heat capacity. The lowest heat capacity values were obtained in the heel. The volar area of the wrist and the abdomen present a higher heat capacity. This is possibly related with a higher perfusion in these zones. The temperature variation of the sensor thermostat was linear from 26 to 36 °C.
All measurements were performed with the subjects seated, at rest and under the same conditions of humidity and ambient temperature (55% RH, 23 °C). Before and after each measurement, the subjects' heart rate and blood pressure were measured to check their resting state. The skin area studied was dry, untreated, and no creams were applied. The sensor was attached to the skin in two ways: manually or with an adapted attachment (see Fig. 2). The pressure applied on the skin is the minimum necessary to ensure good contact between the sensor and the skin, but without disturbing the blood circulation. The results were similar with the two types of holding, although with the adapted attachment the experimental curves show less oscillations.
Thermal penetration depth
Heat capacity obtained with our calorimetric sensor is an absolute magnitude. In order to compare our results with those of other authors, a dimensional modelling of the heat-affected zone must be carried out. This volume is usually modelled by considering a thermal penetration depth. Each author considers this magnitude in a different way. Limei [25] used the expression of David [28], obtaining a maximum depth of 0.1 mm. Webb [22]. characterized the Heat capacity, C 1 /JK -1
Fig. 5
Mean heat capacity of different zones, obtained with the calorimetric sensor in the subjects of Table 1 depth with the expression of Silas [29], obtaining a value of 0.5 mm. The expression used by him is as follows: where Δp is the thermal penetration depth, β is a constant (in this case β = 1), α is the thermal diffusivity, t max is the transient time (of temperature change), ρ is the density, c p is the specific heat capacity and λ is the thermal conductivity.
In our case, we define the thermal penetration depth by considering a prismatic heat-affected zone. Considering the average values of density, thermal conductivity and heat capacity of the skin, and the results of our measurements, we obtain a thermal penetration depth of 3-4 mm. If we use Eq. 1, we obtain a value of approximately 3 mm. In the calculation we associate the time t max with the time constant of the final part of the calorimetric signal (see Fig. 6). This time constant corresponds to the stabilization of the calorimetric signal at the end of the linear variation of the sensor thermostat temperature. Figure 6 shows the adjustment on this section of the calorimetric curve (red), obtained considering a time constant of 90 s. Note that the method for determining the heat capacity performs a better reconstruction as it considers a two-body model, i.e., two time constants.
Thermal penetration depth is not invariant: it depends on the properties of the tissue on which the measurement is made, and also on the characteristics of the thermal excitation produced by the sensor. We have studied how the thermostat temperature change affects the thermal penetration depth. Figure 7 shows several experimental measurements on the left wrist of subject 1; as a function of the thermostat temperature change (2.5, 5.0, 7.0 and 9.0 ºC). Note that there is proportionality between the magnitude of the thermostat temperature change and the heat capacity measured by the sensor. As the magnitude of the temperature change increases, the excitation produced by the calorimetric sensor on the human tissue also increases, leading to an increase in the volume of the heat-affected zone. Correlations shown in the figure intend to illustrate this increasing trend. On the other hand, we have detected no relationship between the measured heat capacity and the heating rates used (1-4 Kmin −1 ).
Specific heat capacity
Direct measurement of the heat capacity and its treatment as an absolute quantity (Fig. 5) simplifies the interpretation of the results obtained. However, we consider convenient to contrast the results in specific quantities. Table 2 shows the results of Fig. 5 We can observe the high standard deviation of the measurements, our case being the lowest. On the other hand, the specific heat capacity values obtained by our sensor are higher, due to the greater thermal penetration depth of our measurements. Figure 8 shows the heat capacity obtained in the works referenced in Table 2. Each point corresponds to a mean value, and the line through it to its standard deviation. Grey rectangles are bounded by the maximum and minimum measured value. As we said before, measurement of the in vivo heat capacity is novel and this property is usually measured in vitro by DSC. For this reason, in Fig. 8, some specific heat capacity values measured in vitro are indicated in orange areas [30].
All the results are coherent with the in vitro references shown in the orange areas in Fig. 8. However, there are significant differences between authors. This is consistent, given the difference between the thermal penetration depths of each instrument. Consequently, the tissues involved in the heat-affected zone are different for each work.
The skin is composed of several layers with different thermal properties. The epidermis thickness can vary between 0.1 and 1.0 mm, and it is also composed of different layers. On the other hand, the dermis can be several millimeters thick [31]. The first layer of the epidermis is the stratum corneum, whose heat capacity can be estimated at 2.2 JK −1 g −1 [32,33]. The structure of the dermis consists of collagen and elastin fibers, with heat capacities of 2.0 [34] and 1.3 [35] JK −1 g −1 , respectively. These heat capacities are low compared to water (4.18 JK −1 g −1 ). As we go deeper into the skin, the blood perfusion (and water content) of the tissues increase, leading to a higher heat capacity. Thus, the heat capacity measured is a function of thermal penetration depth, as shown in Fig. 8.
Webb investigated the correlation between heat capacity and the thickness of the epidermis and stratum corneum, and its water content [22]. He found negative correlations with thickness (the thicker the epidermis or stratum corneum, the lower the heat capacity) and positive correlations with water content (more water implies a higher heat capacity), which is consistent with the hypothesis set out in the previous paragraph. For example, the lowest heat capacity detected by Webb is in the heel area, which has a very thick stratum corneum and a very low degree of humidity compared to the rest of the tissues analyzed in his experiment. This observation also coincides with the lower heat capacity detected with our instrument in the heel. Li et al. [24] also detected an increase in heat capacity as a consequence of water content. The dependence between heat capacity and water content Table 2 In vivo specific heat capacity measurements (results expressed as min ~ max, mean ± standard deviation %) Authors
Measurement conditions
Results (Jg −1 Li et al. [24] Photonic device [24] (Volar wrist) measurement at rest, one subject, as a function of humidity degree can also be studied by producing alterations in the skin. As an example, we can cite the experience of Limei: after artificially producing urticaria, heat capacity increased by 17%. With our sensor, we monitored a second-degree burn on the volar area of the right wrist of subject 1 (Table 1). This 2 × 1 cm 2 lesion was caused by an unfortunate accident while doing the ironing. Figure 9 shows the variation of the heat capacity of the injured area compared to a nearby healthy area. We emphasize on the ability of the sensor to quantitatively monitor the recovery process of an injured tissue. Figure 9 shows a 25% heat capacity decrease, mainly due to dehydration of the injured area.
Conclusions
We have developed a sensor that implements the principles of scanning calorimetry to measure in vivo the heat power and the heat capacity of a 2 × 2 (4) cm 2 skin region. Its thermal penetration depth is up to 4 mm in the current configuration. The calorimetric sensor is a useful tool for the study of the human body physiology and can complement other technologies. Our sensor provides a numerical value of the physiological state of the skin area under study. The possible deviation from the normal values provides interesting information. In this paper we present measurements made with the calorimetric sensor. The order of magnitude of the results obtained have been evaluated in several subjects and different areas of the human body, and a comparison with the results provided by other authors is made. Heat capacity at rest varies from 4.1 to 6.6 JK −1 . It presents a different value in each area studied, which depends mainly on the composition of the tissue analyzed. The lowest heat capacities were found in the heel and the highest in the abdomen. There are only a few studies in which this magnitude is measured in vivo, and most of them are very recent. The differences between authors are consistent with the difference of thermal penetration depth of each instrument. As the thermal penetration depth is greater, the heat capacity is also greater, since the tissue distribution changes and the water content increases. This is the main variable that affects the in vivo heat capacity value.
This technology has three advantages: (1) the device measures heat power and heat capacity simultaneously, (2) heat capacity is measured as an absolute quantity, which simplifies the interpretation of the results obtained, and (3) it is possible to regulate its thermal penetration depth by programming the thermostat temperature. Although this procedure is working successfully, the duration of measurements with these sensors must be taken into account; they usually require more time than other techniques and the processing of the signals is more complex.
Author Contributions: FS, MRR and PJRR contributed equally to the investigation. MRR contributed to the medical methodology of the work. All authors have read and agreed to the published version of the manuscript.
Funding Open Access funding provided thanks to the CRUE-CSIC agreement with Springer Nature. This work research was funded by the "Consejería de Economía, Conocimiento y empleo del Gobierno de Canarias, Programa Juan Negrín" Grant Number SD-20/07 (Grant Holder: Pedro Jesús Rodríguez de Rivera).
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http:// creat iveco mmons. org/ licen ses/ by/4. 0/. Fig. 9 Heat capacity variation of a 2 × 1 cm 2 second degree burn of in the right wrist volar area. Comparison of the injured skin (red) with the healthy skin (blue) 5 1 | 4,998 | 2022-06-23T00:00:00.000 | [
"Engineering"
] |
Molecular Simulations of CO2/CH4, CO2/N2 and N2/CH4 Binary Mixed Hydrates
Grand canonical Monte Carlo simulations were performed to study the occupancy of structure I multicomponent gas hydrates by CO2/CH4, CO2/N2, and N2/CH4 binary gas mixtures with various compositions at a temperature of 270 K and pressures up to 70 atm. The presence of nitrogen in the gas mixture allows for an increase of both the hydrate framework selectivity to CO2 and the amount of carbon dioxide encapsulated in hydrate cages, as compared to the CO2/CH4 hydrate. Despite the selectivity to CH4 molecules demonstrated by N2/CH4 hydrate, nitrogen can compete with methane if the gas mixture contains at least 70% of N2.
INTRODUCTION
OpenAccess Gas hydrates are crystalline solids comprised of gas molecules enclosed in cavities of water lattice. Methane hydrate is the most abundant gas hydrate occurring in nature [1]. Methane is known to form sI hydrates with a unit cell comprised of 46 water and 8 gas molecules located in 6 large (5 12 6 2 ) and 2 small (5 12 ) cavities [2,3].
Permanent CO 2 storage in hydrates has been recognized as a potentially attractive carbon capture technology [4]. Among other possible methods of largescale production of CO 2 hydrate, injection of carbon dioxide into natural methane hydrate deposits is considered. The injection process leads to the replacement of methane in clathrate cavities by carbon dioxide, thereby increasing the efficiency of methane recovery and resulting in the formation of stable CO 2 hydrates [5,6]. However, injection of pure CO 2 is likely to be unfeasible, since it requires preliminary separation of carbon dioxide from other components of flue gas, as well as its subsequent compression. It is possible to overcome this drawback by replacing pure carbon dioxide by its mixture with nitrogen [7], which is the predominant component of flue gas. Furthermore, experimental studies have shown that the use of CO 2 /N 2 mixtures leads to more efficient CH 4 recovery, as compared to pure CO 2 injection [7][8][9]. These findings have been recently verified by testing on the industrial scale [10].
The mechanism of methane replacement in hydrates by carbon dioxide and nitrogen largely relies upon competition of the gas molecules for occupancy of clathrate cages of different types. As indicated by experimental data for pure and mixed hydrates, methane molecules are capable of occupying both small (5 12 ) and large cages (5 12 6 2 ), though large cages typically have somewhat higher occupancies, especially at higher temperatures [11]. Carbon dioxide molecules are significantly larger than that of methane, which makes them relatively unsuitable guests for occupying small cages. Earlier studies have suggested that small cavities in pure CO 2 sI hydrates are not occupied [11,12], but more recent experiments reported filling of up to 70% of small cages by carbon dioxide [13,14]. Nitrogen molecules are the smallest of all three gases, which makes them potential competitors for the filling of small cages.
Experimental studies of mixed hydrates composition and distribution of gas molecules over cages of different types are mainly focused on the properties of CO 2 /CH 4 and, to a smaller degree, CO 2 /N 2 hydrates, while other mixtures are quite rare. In CO 2 /CH 4 hydrates methane loses the competition for large cages to carbon dioxide, as shown by the decrease of large vs. small cage occupancy ratio for CH 4 upon mixed hydrate formation [6]. According to Raman spectra measurements [15], in CO 2 /N 2 hydrates nitrogen molecules can be found in both small and large cavities regardless of the gas mixture composition. Carbon dioxide predominantly occupies large cages, and the amount of CO 2 in these cavities increases with carbon dioxide content in the gas mixture. NMR studies frequently have problems detecting signals from CO 2 in small cages, which also suggests that carbon dioxide tends to occupy large cages [16,17]. In ternary CH 4 /CO 2 /N 2 hydrates nitrogen prevails over the two other gases in the competition for the filling small cages, while CO 2 does the same for large cages [18]. Methane occupancy in small cages was shown to be higher than in the large ones [19].
A molecular-level insight into the mechanism of competitive cage occupancy in gas hydrate can be obtained from molecular simulation studies. However, despite the successful application of molecular dynamics and Monte Carlo simulation techniques to one-component hydrates, including those of CH 4 , CO 2 , and N 2 , there is only a very limited number of simulation studies considering the properties of mixed hydrates [20][21][22]. A comparison of molecular dynamics simulation data for one-component CH 4 and CO 2 hydrates and CH 4 /CO 2 mixed hydrate suggests that the latter species may be more stable than either of the one-component gas hydrates [23]. Estimates of free energies of methane replacement by CO 2 and N 2 molecules obtained from molecular dynamics simulations show that only the replacement of methane by carbon dioxide in large cages of sI hydrates has a negative free energy [24]. Therefore, in agreement with experimental studies [6], a complete replacement of methane in its hydrates by other gases is impossible, since some CH 4 molecules are likely to remain in small cages. A molecular dynamics simulation study of the replacement of methane in sI hydrate by carbon dioxide reported the formation of amorphous CO 2 hydrate on the surface of CH 4 hydrate, which proceeded along with the decomposition of the latter [25]. A Monte Carlo simulation study for one-component and mixed hydrates of methane and carbon dioxide was reported [26]. According to simulation data, small cages are preferably occupied by CH 4 ; large cages do not show any preference for either of the gases, except for high pressures (100 bar or more), for which preferential occupancy by CH 4 molecules was observed. Finally, the energy barriers of gas diffusion through rings of water between the cages of sI hydrate were calculated [27]. Simulation data suggest that nitrogen has more possibilities than CO 2 to diffuse into the large cages, which are already occupied by methane, though this effect is relatively weak.
In the present study, Monte Carlo simulations are employed to investigate the nature of competitive cage occupancy in CO 2 /CH 4 , CO 2 /N 2 , and N 2 /CH 4 mixed sI hydrates. To represent the results, total and partial occupancy isotherms will be used, which allow to elucidate the effects of gas mixture composition and pressure on the composition of binary hydrates, to obtain the distribution of gas molecules over various types of cavities and to calculate selectivity of the hydrate framework to the components of the gas mixture.
SIMULATION DETAILS
Method and Models Simulation of hydrate framework occupancy was performed by grand canonical Monte Carlo method using in-house software. The dispersion interactions in the system were described by Lennard-Jones potential, while electrostatic interactions between effective atomic charges were taken into account via Coulumbic potential. Three-dimensional periodic boundary conditions were implemented to model a bulk hydrate phase. The nearest image convention was used to evaluate the interaction energies.
All-atom rigid models of water and gas molecules were chosen for the simulation, namely, TIP4P/ice for H 2 O [28], OPLS-AA for methane [29] and TraPPE for carbon dioxide and nitrogen [30].
Hydrate framework was represented by 4 × 4 × 4 unit cells of structure I with 2944 water molecules ( Fig. 1). In total, the simulation cell contained 512 cages available to gas molecules, including 128 small cages and 384 large cages. Hydrate framework was completely rigid and did not change during the simulation.
The occupancy of hydrate was studied at 270 K and pressures from 1 to 50-70 atm, depending on the system. Three binary gas mixtures were considered: CO 2 /CH 4 , CO 2 /N 2 , N 2 /CH 4 with 10, 30, 50, 70, or 90 mole percent of the first component.
From a methodological point of view, the investigation of hydrate framework filling resembles gas adsorption simulations by grand canonical Monte Carlo method, i.e., the chemical potentials of the gases control the amount of gas in the solid. The chemical potential values providing the necessary compositions of gas mixtures within the desired range of pressures were obtained from a large set of preliminary simulations for bulk gas mixtures without the hydrate framework. Monte Carlo simulation length varied from 15-20 million steps for preliminary bulk gas calculations to 30 million steps for hydrate simulations. All computed properties were averaged over the equilibrated part of trajectory, which was no shorter than 15 million steps.
Occupancy
The amount of gas contained in hydrate is described by occupancy Θ, which is defined as the number of gas molecules divided by the number of cages: For the calculation of the total occupancy N gas is the total amount of all gas molecules in the system, while for the partial occupancy the number of molecules of a particular gas is used. In a similar manner, for the calculation of the total occupancy the number of all types of cages in the framework is used (512 in this work), but when the occupancy of a particular type of cages is evaluated, N cages is the number of corresponding cages in the simulation cell (128 for small cages and 384 for large cages).
Selectivity Selectivity of hydrate framework to a component of the binary gas mixture is defined in the same way as in adsorption simulations and is described by selectivity coefficient S i , which is the ratio of gas mole fractions in hydrate and in the gas phase: where S i is the selectivity coefficient towards the component i, x i and x j are the molar fractions of the mixture components i and j. The selectivity coefficients to carbon dioxide (CO 2 /CH 4 and CO 2 /N 2 hydrates) and to methane (N 2 /CH 4 hydrate) were calculated.
RESULTS AND DISCUSSION Total Occupancy and Selectivity of Hydrate Framework
Occupancy isotherms of the binary CH 4 /CO 2 hydrate for gas pressures up to 50 atm are shown in Fig. 2a. For all mixture compositions considered, the total hydrate occupancy is relatively high (more than 0.7) even at low pressures, and grows quickly with increasing pressure. At pressures higher than 25-30 atm the hydrate framework is almost saturated. If the gas mixture contains less than 70% of CO 2 , the maximum occupancy is close to unity (0.97 or more). For mixtures containing more than 70% of carbon dioxide a significant decrease in total occupancy was observed. The dependence of occupancy on pressure for the CO 2 /N 2 mixture is generally the same as for CO 2 /CH 4 , though the absolute values of Ѳ are somewhat lower.
For the N 2 /CH 4 mixture (Fig. 2b), occupancies of the hydrate at pressures below 10 atm are noticeably lower than for CO 2 /CH 4 or CO 2 /N 2 mixtures. Occupancy isotherms reach saturation at higher pressures than for the two other mixtures (at ca. 30-40 atm).
Along with the total occupancies, partial occupancies were obtained for the components of all three mixtures. The dependences of partial occupancies on pressure are presented in Fig. 3 for mixtures with similar composition (70% of carbon dioxide for CO 2 /CH 4 and CO 2 /N 2 mixtures, 70% of nitrogen for N 2 /CH 4 ). For the mixtures containing CO 2 the amount of the carbon dioxide in the hydrate drastically exceeds the 47.8 Å amount of the second component and weakly depends on pressure. Noticeable pressure dependence is observed only for methane partial occupancy. For the N 2 /CH 4 mixture with 70% of nitrogen the partial occupancies for both gases are very close, i.e., the mixed hydrate contains more methane as compared to the initial gas phase composition.
The difference between compositions of the hydrate and initial gas mixture can be conveniently described by the selectivity coefficient S i , which is calculated using ratios of gas mole fractions in hydrate and in bulk gas.
Pressure dependence of hydrate selectivity coefficients to CO 2 for CO 2 /CH 4 mixtures of various composition is shown in Fig. 4. Under most of the conditions considered, the hydrate framework demonstrates selectivity to carbon dioxide ( > 1), which means that the hydrate contains more CO 2 than the coexisting gas phase. For the mixture containing 90% of CO 2 the hydrate becomes selective to methane ( < 1), i.e., the amount of methane in the hydrate is greater than that in the gas phase. The increase in pressure leads to a slight decrease in selectivity coefficients, which become nearly constant at high pressures.
For the CO 2 /N 2 mixture, the behavior of selectivity to CO 2 is qualitatively similar to that for the CO 2 /CH 4 mixture. However, the values of for the CO 2 /N 2 mixed hydrate are two times higher, and the hydrate never becomes selective to the nitrogen. It is worth mentioning that only large cages are selective to carbon dioxide, while in small cages is two orders of magnitude lower than in large cavities. The distribution of molecules over the cavities of hydrate framework will be discussed in detail in the next section.
Selectivity of hydrate framework to methane for the N 2 /CH 4 mixture is almost unaffected by mixture composition or pressure. The values of under the conditions considered in this work were found to be in the range from 2.3 to 2.6.
Qualitatively and quantitatively the results obtained are in good agreement with the available experimental studies [6,16]. A comparison of calculated and measured [31] selectivity coefficients for the CO 2 /CH 4 mixture is provided in Table 1.
The maximum deviation of calculated selectivity coefficients from the experimental values is about 35%, which can be at least partially attributed to the difference in temperatures and mixture compositions used in experiments and simulations. Though the pri- mary trends are similar in both cases, simulation data suggest a somewhat more pronounced increase of the values with increasing pressure and decreasing CO 2 content.
The observed difference in the behavior of N 2 /CH 4 mixture and CO 2 -containing mixtures is due to the nature of carbon dioxide. Possessing a noticeable quadrupole moment, carbon dioxide interacts with water molecules much stronger than non-polar molecules of nitrogen or methane. Therefore, carbon dioxide preferentially occupies the hydrate cages, which can be seen from partial occupancies (Fig. 3) and selectivity coefficients (Fig. 4). Moreover, even at low pressures the occupancy of hydrate by CO 2 molecules is quite close to the saturated framework occupancy. Weaker interactions of water with CH 4 and N 2 molecules lead to low occupancies at pressures below 15 atm (Fig. 2a, b). As one can see from Fig. 2b, the most 'unfavorable' gas for the filling of sI hydrate is nitrogen. Thus, the difference in hydrate occupancy caused by interactions of guest molecules with water is more pronounced at low pressures, while at higher pressures the occupancy mostly depends on the size of gas molecules.
While the low-pressure region on the occupancy isotherms is important for obtaining a complete description of the behavior of the simulated systems, it is located outside the experimentally observed hydrate stability zone. CO 2 /CH 4 hydrate stability zone at 273.7 K is located above 14 and 25 atm for mixtures containing 79 and 10% of CO 2 , respectively [32]. For the CO 2 /N 2 mixture (25% CO 2 ) at 274 K hydrates are observed at 59 atm, though this pressure value was found to decrease significantly when the temperature decreases and carbon dioxide content in the mixture increases [33]. For the N 2 /CH 4 mixture at 273 K the stability limit increases from 35 to ca. 140 atm with increasing nitrogen content in the mixture, but at the temperature of 270 K, for which the simulations were carried out in this study, the pressures should also be lower [34]. It is worth noting that these experimental pressures cannot be directly applied to simulations as the precise conditions of hydrate stability, because the pressure values obtained by experiment and simulation may differ significantly. However, the exact location of the lower stability limit is not a high-priority issue for this particular study, and further discussion will rely predominantly on simulation data obtained for high occupancies and high pressures, i.e., for conditions under which the hydrate should be definitely stable.
Cage Occupancies
As shown in Fig. 2, total occupancies reach saturation at high pressures, namely, at ca. 50 atm for CO 2 /CH 4 and CO 2 /N 2 hydrates and ca. 70 atm for N 2 /CH 4 hydrate. Thus, maximum occupancies can be estimated for each mixture composition. Maximum 2 CO S occupancies of the large and small cages can also be evaluated separately. The result is shown in Fig. 5.
It was found that total maximum occupancies of the hydrate framework are the highest for CO 2 /CH 4 and N 2 /CH 4 mixtures and are very similar to each other. Maximum occupancy of CO 2 /N 2 mixed hydrate is lower due to unfavorable filling of small cages. Large cages are almost fully occupied under all conditions. Occupancy of small cages is much lower and decreases with increasing CO 2 content in the mixture.
The results obtained can be explained by the partial gas occupancies (Fig. 6). Partial occupancy of small cages by carbon dioxide is close to zero, thus all CO 2 molecules reside in large cages (Fig. 6a, b). The filling of small cages by carbon dioxide rises for the mixtures with 90% of CO 2 , which is accompanied by a sharp decrease of the content of the second component in small cages. At such mixture composition (90% CO 2 ) the number of molecules of the second component in large cages is negligible. In CO 2 /N 2 and CO 2 /CH 4 hydrates nitrogen and methane fill both large and S small cages, the amount of N 2 molecules in cages being much lower than that of CH 4 . Obviously, carbon dioxide molecules are too large to be accommodated by small cages in noticeable amounts. Nitrogen and methane molecules are smaller, so they can occupy cages of both types. Thus, in sI hydrate nitrogen behaves more like methane than like carbon dioxide, which is the reason of the similarity in the properties of CO 2 /N 2 and CO 2 /CH 4 hydrates. For CO 2 /N 2 hydrates, a distribution of gas molecules over different cages, similar to the one obtained in simulations, had been observed experimentally [15]. As shown in [6] for CO 2 /CH 4 hydrate, the equilibrium occupancy ratio (Θ large /Θ small ) for methane varies from 1.26 (pure CH 4 ) to 0.23 (very low CH 4 content), which is comparable with the values obtained in this work (0.7 for 90% CH 4 and 0.1 for 10% CH 4 ).
The results of simulation of the N 2 /CH 4 mixed hydrate (Fig. 6c) show that all cages are mostly occupied by methane, except mixtures with the highest nitrogen content (70% or more). The difference in the occupancy of large and small cages by both components is not observed.
It should be noted, that the total maximum occupancies of CO 2 /CH 4 and N 2 /CH 4 hydrates are very close, but the behavior of partial occupancies is completely different.
CONCLUSIONS
Grand canonical Monte Carlo simulations were carried out to compare the occupancy of structure I hydrate framework by three binary gas mixtures, CO 2 /N 2 , CO 2 /CH 4 , and N 2 /CH 4 , at 270 K and pressures up to 70 atm. CO 2 /N 2 and CO 2 /CH 4 mixed hydrates are selective to carbon dioxide, with selectivity coefficients being almost twice higher for the CO 2 /N 2 mixture. Carbon dioxide in mixed hydrate predominantly occupies large cages and only a minor fraction of small cages (Θ small < 0.1). Nitrogen and methane show no preference to cage size and can be equally found in cages of either type. N 2 /CH 4 hydrate was found to be selective to methane. The analysis of the results obtained from simulations leads to the conclusion that methane is unlikely to be completely removed only by the injection of pure carbon dioxide, because the filling of small cages by CO 2 molecules is unfavorable. The injection of CO 2 /N 2 mixture should yield better results because methane from the small cages can be displaced by nitrogen, if the mixture contains a sufficiently high amount of N 2 (more than 70%), which closely resembles the typical composition of flue gas. | 4,881.8 | 2021-05-01T00:00:00.000 | [
"Chemistry",
"Engineering"
] |
Research on online teaching operation mode of postgraduate courses
. In order to effectively improve the online teaching effect of graduate courses, it is very important to plan the online teaching operation mode with the full help of information technology. This paper analyzes the characteristics and existing problems of postgraduate online teaching. In view of the characteristics and existing problems of online teaching, according to the teaching experience and the characteristics of postgraduate courses, this paper designs the teaching operation mode from the three aspects of teaching, auxiliary teaching and interactive platform, and puts forward the methods of developing online teaching from the three aspects of classroom teaching, course management and course assessment. This paper discusses the process of online teaching in detail, which provides a reference for the efficient online teaching of graduate courses.
Introduction
Postgraduate course teaching is an important part of postgraduate training. Optimizing teaching methods and improving course teaching effect has become a hot issue in training graduate students [1]. With the development of network, it provides a powerful tool for online teaching [2]. Based on online learning platforms, such as learning pass, nailing, Mu class, wisdom tree, rain class, etc, domestic college teachers carry on online teaching, provide teaching resources for postgraduates, and build a teaching and learning environment [2][3][4].
Online teaching operation mode is a great challenge for teachers, especially the online teaching operation mode after leaving school [5]. Mastering students' learning situation, carrying out classroom teacher-student interaction, effectively guiding graduate students' online learning and improving the efficiency of online learning have become the focus and difficulty of graduate teaching. Starting from the characteristics and existing problems of online teaching, this paper discusses the operation mode of online teaching of postgraduate courses and improves the effect of online teaching of postgraduate courses.
Characteristics of online teaching
In the process of online teaching, there is a lack of communication between teachers and students. Students' satisfaction with learning materials, effect and communication is low [6]. Especially when graduate students leave school, online teaching loses the advantage of centralized learning in school. For learning in the school, according to the characteristics of curriculum knowledge, teachers can complete theoretical learning in the classroom. Teachers can choose to study in the laboratory, cooperate with relevant experiments to verify the theorems, laws and conclusions of the course, and complete the necessary practical application. Teachers can also choose on-site teaching and use their knowledge to analyze the on-site operation. In centralized learning, teachers and students are in the same time and space. Through face-to-face communication and interaction, teachers can timely grasp students' learning status, analyze students' learning effect, and then give corresponding guidance according to the actual situation.
In online teaching, the teaching and learning environment has changed, and all teaching activities need to be completed with the help of the network. Because of network speed and congestion, teachers and students are not synchronized when receiving information. In the process of teacher-student interaction, teachers or some students have announced the answers to the questions, while some students have not heard the questions. Unfortunately, the network failure directly leads to the inability of online teaching.
When leaving school, online teaching has become a decentralized learning state. Students are no longer learning with their classmates, without the learning environment, and can not see the figure of their classmates struggling. The mutual encouragement and learning competition between students is not obvious. Therefore, the learning atmosphere is weakened, learning inertia is enhanced, and learning efficiency is reduced.
In the process of online teaching, based on modern information technology means, through online teaching, video, discussion and other methods, students can arrange their own learning according to their own situation, which is not limited by traditional classroom. And students can easily study the key and difficult knowledge, so as to stimulate students' interest, strengthen students' memory, tap students' potential and cultivate students' ability. It can effectively improve the traditional teaching operation mode.
Existing problems of online teaching
In view of the characteristics of online teaching, teachers cannot supervise students' learning face-to-face, observe students' learning status, and ensure that students seriously complete the learning of relevant course contents. Because even if the teaching platform and online classroom show students online, and can count students' learning situation and learning progress, it can not guarantee students' learning quality.
In addition, online teaching has greater freedom. Many students can study online anytime and anywhere. As long as there are enough learning materials, students can arrange their own learning. If students do not study each course according to the curriculum arranged by the school, it will lead to great differences in students' learning progress of the same course, Finally, it will affect the completion of the course teaching plan. More importantly, online teaching focuses on students' learning. Students' learning depends on students' subjective initiative. If students lack self-control ability, leisure and entertainment will occupy most of the time. Therefore, the main problems in online teaching include learning time, learning progress, learning content and learning effect.
Design and practice of online teaching operation mode
In view of the characteristics and existing problems of online teaching, teachers need to make full use of information technology to plan the operation mode of online teaching, so as to effectively improve the teaching effect. In order to successfully complete the teaching tasks of the course, according to the teaching experience, the teaching operation mode can be designed from three aspects: teaching, auxiliary teaching platform and interactive platform, such as the teaching operation mode of "Tencent classroom + Yiersi platform + QQ group interaction". Carry out relevant online teaching from "classroom" teaching, curriculum management and curriculum assessment.
Classroom teaching
Classroom teaching is the basic form of teaching. High quality and efficient classroom teaching is the basic prerequisite to promote students' effective learning. In particular, theoretical courses cannot lack classroom teaching. Teachers need to explain in detail to help students understand the relevant concepts, basic theories and applications of the course. Considering the actual situation of online teaching, according to the teaching experience of many courses, classroom teaching should include online teaching, online puzzle solving, offline discussion and course homework.
(1) Online teaching Online teaching is based on the teaching platform. For example, online course teaching is completed through "Tencent classroom". After students are online, teachers use the screen sharing and teach the course content. Online teaching means that teachers explain the contents of each chapter online, just as teachers and students study in the classroom. Considering the network situation and the review of students, it is best to record videos of all the contents of online teaching and upload them to the auxiliary teaching platform, such as uploading the videos to the "Teaching video" column of "Yiersi platform". Students can watch teaching videos anytime and anywhere as long as they log in to the auxiliary teaching platform. In this way, students can further study the relevant contents of the course repeatedly and selectively according to their own learning situation.
(2) Online puzzle solving If students have doubts about the course content taught online by teachers through the teaching platform, they need to be solved in time, especially for theoretical courses. Using the "online puzzle solving" mode, students can leave messages online in time, students can also apply for online speech, and put forward their own questions. Then the teacher explains students' questions online. Online puzzle solving is convenient for teachers to solve students' puzzles in time in the "classroom".
(3) Offline discussion Considering the characteristics of knowledge, it is not enough to only rely on online classroom learning, but also need "offline discussion". The "offline discussion" of the course is realized through the auxiliary teaching platform, such as "Yiersi platform", and both teachers and students can post or reply. Through off-line discussion, students can express their own views and see the views of other students. Therefore, it can not only help students understand the knowledge points of the course, but also expand the knowledge system of the course. It is also helpful to find the deficiencies in curriculum learning.
(4) Course homework Course homework is also a very important link in course learning, which can be realized through the auxiliary teaching platform. Course homeworks are set according to the course content and the connection with other courses. Based on the course assignment, the students' ability is preliminarily cultivated to integrate the knowledge of this course and the knowledge of this course with other courses. And the students' ability is also cultivated to use the relevant contents of this course to deal with practical problems. After students finish their homework, they need to upload it to the "Homework management" of the auxiliary teaching platform. Teachers can review students' homework online and upload comments.
Course management
The effectiveness of "Classroom" teaching is inseparable from "Course management", which is realized through auxiliary teaching platform and interactive platform, such as "Yiersi platform" and "QQ group". Course management includes sign in for class, online question answering, learning statistics and online test.
(1) Sign in for class Class attendance can remind students to start class. For example, sign in for class through the collection form of QQ group. Before class, the teacher shares the collection form to the course QQ group, and then the students fill in their name. A collection form can record the time when all students pay attention to class.
(2) Online question answering In order to master the students' mastery of the course content, when the online teaching is completed, students need to answer the questions related to the course content online and submit the answers through the interactive platform within the specified time, such as using the collection form of QQ group. According to the answers submitted online by the students, teachers can clarify the shortcomings of the students in this course, and it is also convenient for the teachers to correct or supplement the relevant contents in time.
(3) Learning statistics Learning statistics can be completed based on the auxiliary teaching platform. For example, the "learning" option of "Yiersi platform" can count students' learning. Learning statistics can intuitively grasp the learning progress and learning distribution time of students' teaching courseware, teaching video and teaching materials.
(4) Online test To further master the learning effect of students, online test tasks can be arranged based on the auxiliary teaching platform, such as "Online examination" of "Yiersi platform". It can directly clarify students' learning of relevant contents of the course. Moreover, online tests can encourage students to review and summarize course knowledge regularly.
Assessment method
The assessment method also needs to be considered in online teaching. The assessment of course learning should reflect the students' learning process and effect. Students' learning process can be reflected by their usual performance, according to their participation or completion in each teaching link. The learning effect of students can be determined by the final exam. The final assessment results of the course should include the usual performance results and final examination results. The proportion is very important for the study of graduate courses. It is suggested that the proportion should be between 30% and 50%, as shown in Table 1. Usual performance is given according to online tests, offline discussions, course homework, online question answering and learning statistics.
"Online teaching" is limited by time and space. Relying on modern information technology means and through the network, it can provide a variety of ways of "Teaching" and "Learning", which not only provides a guarantee for decentralized education and teaching in special circumstances, but also provides a new idea for traditional centralized education and teaching. It is a new challenge for every teacher. Teachers need to stimulate students' interest, tap students' potential, cultivate students' ability, and effectively improve the efficiency of online teaching in order to successfully achieve the teaching goal.
In the course of many online teaching, through the analysis of the actual situation of each link of teaching, some experience is summarized for online teaching. First of all, classroom teaching is very important. It is necessary to video the teaching content, so as to not only focus on learning, but also facilitate students' review. Secondly, curriculum management is very important. We need to implement some measures to supervise students' learning. Online learning may not be effective, and video viewing may not be effective.
The assessment form also needs careful analysis and mutual confirmation with the course management, so as to ensure the smooth completion of "online teaching" and improve the teaching effect. Moreover, according to the course characteristics and many aspects, the usual grades are set, which focuses on stimulating students' learning enthusiasm.
Conclusion
Online teaching has become a powerful guarantee for the implementation of graduate teaching, and has become an indispensable part of graduate teaching. Combining the characteristics of online teaching and courses, designing a suitable teaching operation mode is the fundamental guarantee for the successful completion of all teaching links. For the online teaching operation mode, the role of regular centralized learning, scientific curriculum management and reasonable assessment methods in effectively improving the teaching effect can not be ignored. | 3,046.4 | 2022-01-01T00:00:00.000 | [
"Education",
"Computer Science"
] |
Semiallogenic fusions of MSI+ tumor cells and activated B cells induce MSI-specific T cell responses
Background Various strategies have been developed to transfer tumor-specific antigens into antigen presenting cells in order to induce cytotoxic T cell responses against tumor cells. One approach uses cellular vaccines based on fusions of autologous antigen presenting cells and allogeneic tumor cells. The fusion cells combine antigenicity of the tumor cell with optimal immunostimulatory capacity of the antigen presenting cells. Microsatellite instability caused by mutational inactivation of DNA mismatch repair genes results in translational frameshifts when affecting coding regions. It has been shown by us and others that these mutant proteins lead to the presentation of immunogenic frameshift peptides that are - in principle - recognized by a multiplicity of effector T cells. Methods We chose microsatellite instability-induced frameshift antigens as ideal to test for induction of tumor specific T cell responses by semiallogenic fusions of microsatellite instable carcinoma cells with CD40-activated B cells. Two fusion clones of HCT116 with activated B cells were selected for stimulation of T cells autologous to the B cell fusion partner. Outgrowing T cells were phenotyped and tested in functional assays. Results The fusion clones expressed frameshift antigens as well as high amounts of MHC and costimulatory molecules. Autologous T cells stimulated with these fusions were predominantly CD4+, activated, and reacted specifically against the fusion clones and also against the tumor cell fusion partner. Interestingly, a response toward 6 frameshift-derived peptides (of 14 tested) could be observed. Conclusion Cellular fusions of MSI+ carcinoma cells and activated B cells combine the antigen-presenting capacity of the B cell with the antigenic repertoire of the carcinoma cell. They present frameshift-derived peptides and can induce specific and fully functional T cells recognizing not only fusion cells but also the carcinoma cells. These hybrid cells may have great potential for cellular immunotherapy and this approach should be further analyzed in preclinical as well as clinical trials. Moreover, this is the first report on the induction of frameshift-specific T cell responses without the use of synthetic peptides.
Background
The last decades have witnessed the identification of an increasing number of truly specific tumor antigens. Not all antigens carried by human neoplasias have similar immunogenic properties. Somatic mutations should have the highest immunological impact. Such mutations create neoantigenic epitopes which are completely foreign to the immune system and can serve as antigenic determinants. The presence of high-grade microsatellite instability (MSI + ), for instance, is evidence of ongoing mutagenesis in a fraction of colorectal cancer (CRC). MSI occurs subsequent to DNA mismatch repair inactivation and causes insertion or deletion mutations at short repetitive DNA sequences located throughout the genome. MSI + tumors are typically infiltrated by predominantly activated cytotoxic T lymphocytes and display increased neoplastic cell apoptosis. These features argue for a strong antitumoral immune response directed against potent tumor rejection antigens [1][2][3]. We and others demonstrated that frameshift-neopeptides (FSP) encoded by mutations of microsatellites located in coding sequences are highly immunogenic [4][5][6][7][8][9][10]. These studies documented that FSPs represent true MSI + tumor-specific antigens.
Clinical cancer vaccination studies are essentially based on the knowledge of at least one tumor specific antigen. However, reported response rates from those trials are unsatisfying. Among the reasons made responsible for failures are immune evasion of tumor cells, disease-specific immune suppression and poor intrinsic immunogenicity of many tumors.
Cellular fusions of antigen-presenting cells (APC) with tumor cells are a relatively simple and effective way to obtain highly immunogenic vaccines which combine the antigen-presenting properties of professional APC with a full repertoire of tumor antigens [11][12][13][14]. Proof-ofprinciple clinical studies have also been performed [15][16][17].
Most researchers have focused on dendritic cells as APCs. However, antigen-unspecific B cells can be used as an alternative source of efficient APCs when properly activated by engagement of CD40 [18,19]. Arguments in favour of these CD40-activated B cells (CD40 Bs) are ease of isolation, activation and expansion [20].
Very recently, we optimized the generation of cellular fusions consisting of CD40 Bs and MSI + CRC cells [21]. In the present study, we have evaluated the potency of T cell induction by semiallogenic cell fusions of a MSI + tumor cell line and CD40 Bs. In particular, we have examined the potency of in vitro induction of T cells specifically recognizing MSI-induced FSPs derived from the tumor cell line fusion partner. The data presented here show that MSI + /CD40 B cell hybrid cells induce potent anti-MSI T cell responses, indicating a great potential as immunogens in cancer immunotherapy and providing a rationale for future use in clinical trials.
Cell lines and peptides
All tumor cell lines were obtained from ATCC and from cell line services (Eppelheim, Germany) and grown in RPMI 1640 medium supplemented with 10% fetal calf serum, 2 mmol/L L-glutamine and antibiotics. Tissue culture media and supplements were purchased from PAA (Cölbe, Germany) unless indicated otherwise.
B cell and T cell purification
Peripheral blood mononuclear cells were isolated from heparinized blood of a healthy HLA-A02 + donor using Ficoll-density gradient centrifugation. Whole CD3 + T cells were obtained from PBMCs by magnetic depletion of non-T cells using the MACS Pan T Cell Isolation Kit II (Miltenyi; Bergisch Gladbach, Germany) according to manufacturer's instructions. The remaining non-T cells were subsequently used as a source for B cells. All procedures using human cells were approved by the Ethics Committee of the University of Heidelberg in accordance with the provisions of the declaration of Helsinki (as revised in Edinburgh 2000). An informed consent in written was obtained prior to the blood sampling procedure.
Fusion of MSI + tumor cells with CD40 Bs
HCT116 tumor cells and CD40 Bs from a healthy HLA-A02 + donor were washed in serum-free RPMI 1640 medium, pelleted and stained with 5 μM 5-chloromethylfluorescein diacetate or 5 μM 5-(and 6)-(((4chloromethyl) benzoyl)-amino) tetramethyl-rhodamin in PBS, respectively. Cells were incubated for 45 min at 37°C , washed and resuspended in serum-free RPMI 1640 medium. After a second incubation period, stained cells were resuspended in serum-free RPMI 1640. CD40 Bs were mixed with tumor cells at a CD40 Bs:tumor cell ratio of 2:1 and exposed 5 min to 4 × 10 -5 M SDS. Cell suspension was pelleted and 1 ml 50% polyethylene glycol 1500 (Roche, Mannheim, Germany) was added over 5 min to the CD40 Bs/tumor cell pellet while stirring the cells continuously. The polyethylene glycol solution was then diluted by slow addition of first 1 ml warmed serum-and phenol red-free medium over 3 min with continued stirring and then another 10 ml of this medium over 5 min. After centrifugation cells were resuspended in phenol red-free RPMI 1640 supplemented with 10% FCS. Cells were cultured 24 h before the fusion efficacy was analyzed by flow cytometry. Fused cells consisting of MSI + colon carcinoma cells HCT116 and CD40 Bs, which exhibited dual fluorescence, were cloned by limiting dilution. Briefly, fused cells were seeded under limiting dilution conditions (0.7 cells/well) in 96-well plates containing a final volume of 200 μl RPMI 1640 with 10% FCS. Outgrowing fusion clones were comprehensively screened concerning the cell surface expression of HLA-A02, MHC class I and II molecules and costimulators (CD40, CD80, and CD86). For all subsequent experiments, two clones were selected and designated as Fc1 and Fc2.
T cell stimulation with semiallogenic cellular fusion clones
Fusion clones were collected from cell culture, irradiated (200 Gy) and added to purified CD3 + autologous T cells at a ratio of 3:1 (Tc:fusion cell) in IMDM supplemented with 10% human AB serum, 5 μg/ml insulin, 50 μg/ml transferrin and 15 μg/ml gentamicin in the presence of IL-7 (10 IU/ml ). Cells were plated at a density of 2 × 10 6 T cells in 1 ml of medium per well of a 24 well plate. For T cell restimulation this was repeated weekly. IL-2 was first given at day 21 (10 IU/ml), also at day 24 and from day 28 on only IL-2 was used instead of IL-7.
CD4 + T cells and CD8 + T cells were obtained from the whole T cell population by magnetic cell sorting using MACS CD4 and CD8 microbeads (Miltenyi; Bergisch Gladbach, Germany) according to manufacturer's instructions. CD4 microbeads were used for the sorting of CD8 + T cells and CD8 microbeads for the sorting of CD4 + T cells. Selection of T cells was checked by flow cytometry. CD4 + T cells and CD8 + T cells were restimulated separately with irradiated fusion clones and IL-2 as described for the whole T cell population.
IFN-g ELISpot assay
ELISpot assays were performed using nitrocellulose-96well plates (Multiscreen; Millipore, Bedford, USA) covered with mouse anti-human IFN-γ mAb (Mabtech, Sweden) and blocked with serum containing medium. Initially, in vitro primed T cells (1 × 10 4 ) were stimulated in triplicates with 2 × 10 4 CD40 Bs, HCT116, Fc1 or Fc2 cells per well as targets. Afterwards T effector cells (1 × 10 4 ) were plated in sixplicates with 2 × 10 4 peptideloaded autologous CD40 Bs. Peptides were added at a final concentration of 10 μg/ml. After incubation for 16 h at 37°C, plates were washed, incubated with biotinylated rabbit anti-human IFN-γ for 4 h, washed again, incubated with streptavidin-alkaline phosphatase for 2 h, followed by a final wash. Spots were detected by incubation with NBT/BCIP (Sigma-Aldrich, Steinheim, Germany) up to 1 h. Reaction was stopped with water and after drying spots were counted. The deduced frequency of peptidespecific T cells was calculated by subtracting mean numbers of spots in the no-peptide control from mean numbers of spots in the peptide-stimulated sample. Negative values were scored as zero.
After proving the assumption of normality, differences between negative control and FSP-containing wells were determined by using the unpaired Student's t-test. If normality failed, the nonparametric Mann-Whitney U-Test was applied. The tests were performed by using Sigma-Stat 3.0 (Jandel Corp, San Rafael, CA). The criterion for significance was set to p < 0.05.
Determination of cMS mutation pattern
Genomic DNA was isolated from microdissected tumor sections using the DNeasy Tissue Kit (Qiagen, Hilden, Germany). For analysis of cMS, the corresponding genomic regions were amplified as described previously [23]. PCR products were analyzed using an ABI3100 genetic analyzer and Genescan Analysis Software (Applied Biosystems, Darmstadt, Germany). Instability was scored, if comparison with amplification products of normal tissue revealed the occurrence of novel peaks or if the ratio of corresponding peak areas was ≤0.5 or ≥2.
Cytotoxicity assays
Standard chromium release assays were performed as described [24]. Tumor target cells were labelled with 100 μCi [ 51 Cr]-sodium chromate for one hour at 37°C. For each experimental condition, cells were plated in Vbottomed 96-well plates with 10 3 target cells/well in triplicate. Varying numbers of CTL were added to a final volume of 200 μl and incubated for 4 h at 37°C. Spontaneous and maximal release was determined in the presence of medium alone or of 1% NP-40. Supernatants (100 μl/well) were harvested and counted in a gammacounter. The percentage of specific lysis was calculated as follows: 100% × [experimental release -spontaneous release]/[maximal release -spontaneous release].
For intracellular staining, T cells were incubated with the protein transport inhibitor brefeldin A (2 μg/ml, Serva, Deisenhofen, Germany) for 15 h at 37°C. After two washes with PBS/1% FCS, cells were fixed with cold 4% paraformaldehyde (PFA, Serva) in PBS for 10 min at 4°C, washed twice with PBS/1% FCS and permeabilized with saponine buffer (PBS, 0,1% saponine, 1% FCS and 0,01 M Hepes) for 10 min at room temperature. T cells were subsequently stained for intracellular cytokines with PE-labelled anti-IFN-γ (4SB3), anti-TNF-α(Mab11), anti-IL-2 (MQ1-17H12) mAbs, or IgG1 isotype control mAb in saponine buffer for 20 min at 4°C. Cells were washed twice with saponine buffer and resuspended in PBS/1% FCS before measurement. Intracellular perforin was detected with PE-labelled anti-perforin (dG9) mAb. For analysis of intracellular granzyme B, cells were incubated with anti-granzyme B (2C5/F5, Serotec) mAb followed by FITC-labelled goat anti-mouse IgG's. Cells treated without primary antibody were used as negative control. Cell surface and intracellular immunofluorescences were obtained using FACSCalibur (BD Biosciences, Heidelberg, Germany) and CellQuest software. Typically, 20.000 cells were acquired from each sample. Cells to be analyzed were first gated according to reasonable size and granularity in the forward/sideward scatter plot. Gated cells were than blotted into histogram blots. Mean intensities of the negative control were between 2 and 10. For determination of percentages of positive cells, the cut-off included a maximum of 2% of the events in the negative control. Antibodies and signal detection reagents were obtained from BD Biosciences unless stated otherwise.
Semiallogenic cellular fusion and selection of fusion cell clones
The CD40 Bs used to generate fusions were derived from a healthy volunteer with neither a family history of hereditary non-polyposis colorectal cancer nor any other known severe disease, in particular no tumor. Therefore, the T cells of this donor can be considered as naive with regard to prior contact with MSI-induced frameshift mutations. As tumor cell fusion partner, the MSI + CRC cell line HCT116 was chosen because the mutational status of many coding microsatellites (cMS) is well known (Table 1), it shares human leukocyte antigen (HLA)-A02 as restriction element with the CD40 Bs of the healthy donor and it has been shown to be accessible to lysis by cytotoxic T lymphocytes (CTL) [8,10]. Semiallogenic cellular fusions were performed in the presence of polyethylene glycol. The fusion efficiencies were between 10 and 15% [data not shown] and thus very similar to published data [21]. However, additionally to fresh fusions we decided to obtain fusion cell clones as this allowed us to select variants expressing mutations in cMS giving rise to FSPs with high immunogenic potential and additionally high amounts of major histocompatibility complex (MHC) and of the costimulatory molecules CD40, CD80 and CD86. Extensive characterization of established fusion clones was performed and two fusion cell clones (Fc1 and Fc2) expressing many FSPs (Table 1) and higher amounts of MHC and costimulatory molecules as HCT116 (Table 2) were selected as stimulator cells in subsequent T cell stimulations. Data of two fusion clones expressing low amounts of MHC and costimulatory molecules are additionally shown (Table 2).
In vitro T cell stimulations
Isolated peripheral T cells autologous to the CD40 Bs used as fusion partner were weekly stimulated with Taf1B
UST3
-2 wt/-1 wt/-2 1 MSI is indicated by a "+". fresh fusions, with Fc1 and Fc2 using an established protocol of in vitro T cell stimulation [10]. As controls, T cells were also stimulated with autologous CD40 Bs and with HCT116 cells. In the first five weeks, all bulk T cell cultures grew moderately but in the following two weeks the T cells stimulated with HCT116 and CD40 Bs decreased followed by the fresh fusions stimulated T cells ( Figure 1A). Sustained growth with fold increases exceeding 10 was reached solely by the T cell cultures stimulated with the fusion cell clones Fc1 and Fc2 (Figure 1A). Consequently, all subsequent detailed analyses were restricted to the latter two cultures. Phenotypical analysis proved that the cultures consisted of pure T cells with a slightly dominance of CD4 + cells but a clearly activated status judged from the high expression levels of CD25, CD45RO, CD69 and CD71 ( Table 3). The Fc1-stimulated T cells produced higher levels of interferon (IFN)-γ and Perforin. Granzyme B was detectable in high levels in both T cell populations (Table 3).
Fc1 and Fc2 T cell cultures recognize tumor target cells
Next, we addressed whether the T cells recognized the fusion clones used for stimulation. A representative ELI-Spot-analysis of Fc1-stimulated T cells is presented in Figure 1B. As expected, the T cells readily secreted IFNγ in response to Fc1 (3.2%). The autologous B cells were not recognized (< 0.1%) but HCT116 could provoke a minor reaction (0.4%). This weak reaction towards the tumor cell fusion partner HCT116 did however not prevent killing of HCT116 by Fc1 and Fc2-stimulated T cells as subsequently tested in cytotoxicity experiments (Figure 2A). Of note, this killing could be enhanced by pre-treatment of HCT116 with IFN-γ leading to an upregulation of MHC molecules and immune presentation (Figure 2A). This activity was unlikely due to contaminating NK cells in the T cell bulk cultures, since the classical NK cell target cell line K562 was not recognized ( Figure 2B). Testing of another three HLA-A2 + colorectal cell lines revealed no recognition of the MSIcell lines SW480 and SW707 but recognition of the MSI + cell line Colo60H ( Figure 2B). Finally, Fc1 and Fc2-stimulated T cells' reactivity was tested against the MSI + prostate carcinoma cell lines LNCaP (HLA-A2 + ) and DU-145 (HLA-A2 -) ( Figure 2C). Both T cell lines strongly reacted against the MSI + and HLA-A2 + cell line LNCaP whereas they did not kill the MSI + but HLA-A2 -DU-145 ( Figure 2C).
Fc1CD4 + T cells have higher proliferative and Fc1CD8 + T cells higher cytotoxic potential
In order to elucidate the contribution of CD4 + and CD8 + T cells towards tumor cell recognition, we negatively separated the Fc1-stimulated T cells magnetically to high purity. As can be depicted from Figure 3A, the CD4 + T cells Fc1 had greater proliferative potential. However, the CD8 + T cells Fc1 better recognized HCT116 as well as LNCaP cells both in IFN-γ ELISpot as well as in cytotoxicity experiments ( Figures 3B and 3C). As an additional observation, the better recognition of Fc1 compared to HCT116 could be attributed to the CD8 + T cells Fc1 ( Figure 3B). These functional data match with the results of phenotypic FACS analysis. CD8 + TcFc1 showed stronger expression of IFN-γ, interleukin (IL)-2 and perforin ( Figure 4A), whereas CD4 + TcFc1 produced more tumor necrosis factor (TNF)-α and granzyme B ( Figure 4B).
Fc1 and Fc2 T cell cultures contain FSP-specific T cells
Since we hypothesized that at least a proportion of the fusion clone stimulated T cells are specific for MSIinduced FSPs, we tested the Fc1-stimulated T cells for recognition of FSPs in IFN-γ ELISpot-assays. Here, we observed a response towards four of the 14 FSPs included into this analysis ( Figure 5). This proves that fusions of MSI + cells with APC can functionally present FSPs to T cells which in turn can be activated and gain the potential to attack MSI + tumor cells.
Discussion
Data presented here confirm our previous findings that cell hybrids of human MSI + tumor cells and CD40 Bs as APC can easily be generated [21]. Fusion cells retain desired characteristics of both fusion partners; in particular the expression of MSI-induced FSPs and of functional antigen processing and presentation machinery together with potent immunostimulatory capacity. In the present study we worked with stable fusion cell clones instead of highly purified fresh fusion cells we described recently [21]. This approach was chosen to ensure a uniform fusion cell population concerning the immunostimulatory phenotype and the expression of FSPs. Indeed, the distribution of these features was heterogeneous between different clones. Others have shown that careful selection of fusion clones may be essential for successful T cell stimulation [25]. Good arguments in favour of CD40 Bs as APC are the possibilities to easily isolate them even out of minimal amounts of peripheral blood, to propagate them for longer time periods and lastly the outstanding T cell stimulatory capacity of fully activated B cells [8,21,22].
The disclosure of the molecular pathways leading to MSI has led us and others to hypothesize and prove that when affecting coding regions, this will ultimately give rise to FSPs with high immunogenic potential [4][5][6][7][8][9][10]. Beyond the documentation that FSPs represent true MSI + tumor-specific antigens, these studies suggest a tumor-immunological model character of MSI + tumors. Large numbers of specific FSPs have been identified [see [23] for an overview] and many of those are best suited as immunological read out target structures.
To the best of our knowledge, such a high number of specific antigens have not been identified for any other tumor entity. Several studies have shown that fusion cells have the potential to induce CD4 + as well as CD8 + T-cell mediated antitumor immunity, protection from an otherwise lethal challenge of tumor, and even regression of established metastatic disease [15,[26][27][28]. In vitro stimulation of CD3 + T cells with the two fusion clones that were analyzed in detail here induced effector T cells with strong cytotoxic potential that is HLA-A02-restricted and mediated mainly by CD8 + T cells. Moreover, these polyclonal T cell responses are directed against MSI + tumor cells. ELISpot assays revealed that at least a part of this reactivity was due to specific recognition of FSPs. In addition to the four FSPs specifically recognized in the ELISpot analysis shown in Figure 5, we observed significant reactions against two additional FSPs (OGT(-1) and AC-1(-1)) in repetition analysis (data not shown). Lack of reactivity towards microsatellite stable tumor cells but target cell recognition of yet another tumor entity displaying the MSI phenotype strongly argue for MSI + tumor antigens shared between those tumor entities. Thus, this is the first report on the induction of a specific immune response towards MSI-induced FSPs without the use of artificial peptides.
Another interesting aspect was the comparably stronger reactivity of fusion cell stimulated T cells to the fusion clone than to the HCT116 fusion partner. This could mainly be attributed to the CD8 + T cells. As explanation, we consider a better presentation to the T cells of FSPs by the fusion clones than by the HCT116 tumor cells or a better activation of low avidity T cells as likely.
In conclusion, our findings further support the idea that APC/tumor cell fusions represent an attractive approach to cancer immunotherapy [29,30]. We suggest that semiallogenic cell hybrids of human MSI + tumor cells and CD40 Bs are a feasible approach to generate polyepitope vaccines with the capacity to induce polyvalent immune responses. Future studies with primary tumor cells of MSI + patients fused to autologous APCs should further address the feasibility of this approach in a complete autologous situation as this may have important clinical implications for the treatment of MSI + tumors. However, the in vivo efficacy of such cell-based MSI + cancer vaccines must be proven either in appropriate animal models or in carefully designed small clinical studies including cases of very advanced disease.
Conclusions
In the presented study we demonstrate the potential of semiallogeneic cellular fusions of MSI + colorectal carcinoma cells and activated B cells to induce FSP-specific T cells. Selection of fusion cell clones facilitates detailed analysis of the antigen-presenting capacity as well as the repertoire of expressed frameshift antigens. These fusion clones were able to stimulate T cells which showed specificity for the fusion cells themselves and additionally for the MSI + carcinoma cell fusion partner. Moreover, we were able to demonstrate that at least part of this antitumoral potential was due to the specific induction of T cells recognizing several FSPs. Thus, these data underline the high potential of cellular fusions to induce tumor-specific T cells and may contribute to the development of novel cellular immunotherapies. | 5,397 | 2011-09-26T00:00:00.000 | [
"Biology",
"Medicine"
] |
Papaver somniferum and Papaver rhoeas Classification Based on Visible Capsule Images Using a Modified MobileNetV3-Small Network with Transfer Learning
Traditional identification methods for Papaver somniferum and Papaver rhoeas (PSPR) consume much time and labor, require strict experimental conditions, and usually cause damage to the plant. This work presents a novel method for fast, accurate, and nondestructive identification of PSPR. First, to fill the gap in the PSPR dataset, we construct a PSPR visible capsule image dataset. Second, we propose a modified MobileNetV3-Small network with transfer learning, and we solve the problem of low classification accuracy and slow model convergence due to the small number of PSPR capsule image samples. Experimental results demonstrate that the modified MobileNetV3-Small is effective for fast, accurate, and nondestructive PSPR classification.
Introduction
The private cultivation of Papaver somniferum is illegal in many countries because its extracts can be turned into addictive and poisonous opioids. However, because of the huge profits, the illegal cultivation of Papaver somniferum occurs all over the world. The appearance of Papaver somniferum is similar to that of its relatives, such as the ornamental plant Papaver rhoeas, frequently leading to mistaken identification reports from civilians engaged in anti-drug work. This paper seeks to develop a fast, accurate, and non-destructive identification method for Papaver somniferum and its close relatives (represented by Papaver rhoeas) to improve civilians' ability to distinguish between them, thereby effectively assisting the police in drug control work. It also provides model support for the development of Papaver somniferum identification systems on mobile terminals.
Papaver somniferum is traditionally identified by methods including direct observation, physical and chemical property identification, and spectral analysis. Zhang et al. [1] employed a discrete stationary wavelet transform to extract characteristics from Fourier transform infrared spectroscopy data to identify Papaver somniferum and Papaver rhoeas (PSPR). Choe et al. [2] used metabolite spectral analysis to identify Papaver somniferum, Papaver rhoeas, and Papaver setigerum. Wang et al. [3] used specific combinations of characteristic wavelength points to distinguish between Papaver somniferum and non-poppy plants, proving that spectral properties can be used to identify Papaver somniferum. Li [4] used a fluorescent complex amplification test that contained three simple sequence repeats to achieve the precise detection of Papaver somniferum and its relatives.
The above methods have limitations that render them unsuitable for the identification of PSPR for ordinary people in daily life. Direct observation, for example, is time-consuming and labor-intensive, and observers must be familiar with the characteristics of these plants. Other approaches require stringent experimental conditions and tedious operations with GhostNet to increase classification accuracy and reduce intermediate parameters, according to the features of remote sensing image datasets, and they achieved higher classification accuracy on the AID, UC Merced, and NWPU-RESISC45 datasets.
DCNNs can show superior performance only when there are enough training samples. They are prone to phenomena such as overfitting and slipping into local optima when training samples are insufficient [9]. Because Papaver somniferum cultivation is strictly controlled by the government, it is difficult to obtain training data with numerous samples of Papaver somniferum capsule images, and because there is no publicly available PSPR capsule dataset, we can only rely on an Internet image search to build our experimental dataset, which results in a small sample. Transfer learning is a useful machine learning method that applies the knowledge or patterns learned in a certain domain or task to a different but related domain or problem. Existing feature extraction capabilities can be leveraged to accelerate and optimize model learning efficiency with the parameters of a neural network model trained on a large image dataset transferred to a target model to aid in the training of a new model, enabling the training of models with higher recognition accuracy using smaller training samples [28]. Transfer learning can effectively improve the accuracy and robustness of the model, and has been widely used in text processing, [29][30][31] image classification [32][33][34], collaborative filtering [35][36][37], and artificial intelligence planning [38,39].
MobileNetV3 has the advantages of high classification accuracy and a fast classification speed, and it can better balance efficiency and accuracy for image classification on mobile devices. We propose a new classification model, P-MobileNet, based on an improved MobileNetV3 network with transfer learning from ImageNet. This study provides a new solution for fast, accurate, and nondestructive identification of PSPR for ordinary people, and it can be extended to identify any relatives of Papaver somniferum.
The main contributions of this paper are as follows: • A database of 1496 Papaver somniferum capsule images and 1325 Papaver rhoeas capsule images is established; • The structure of the MobileNetV3 network is improved to reduce the number of parameters and amount of computation, achieving fast, convenient, accurate, and non-destructive identification of PSPR; • The effectiveness of data expansion and transfer learning for model training is experimentally verified, and the influence of different transfer learning methods on the model is compared; • The improved MobileNetV3 model combined with transfer learning solves the problem of low classification accuracy and slow model convergence due to the small number of PSPR capsule image samples, and it improves the robustness and classification accuracy of the proposed classification model.
Data
It is difficult to take images of Papaver somniferum capsules in the field because its cultivation is strictly controlled by the government. Therefore, all datasets for this experiment were collected from an Internet search, with a total of 2821 images, comprising 1496 images of Papaver somniferum capsules and 1325 images of Papaver rhoeas capsules. The intercepted images were taken under different angles and light, and covered all growth and development stages of the capsule stage (flowering-fruiting, fruiting, and seed-drop), as shown in Figure 1. Note that the capsule images in the dataset are not of the same size and are resized consistently in Figure 1 for aesthetics. The maximum and minimum sizes of images in the capsule dataset are 624 × 677 pixels and 27 × 35 pixels, respectively. The establishment of the PSPR capsule image dataset can be divided into the following steps: 1. First, the dataset was mixed and scrambled and separated into training, validation, and testing data at a ratio of 8:1:1; 2. To improve the model's feature-extraction and generalization ability and avoid the problems of overfitting and low classification accuracy caused by a small sample dataset, the capsule image training set was expanded using common data expansion methods in deep learning [28], that is, horizontal mirroring, vertical mirroring, and rotation by 90, 180, and 270 degrees, respectively, as shown in Figure 2. As in Figure 1, the capsule images in Figure 2 are resized to a consistent size. The expanded training set includes 7170 Papaver somniferum capsule images and 6366 Papaver rhoeas capsule images; 3. Finally, all image sizes were resized to 224 × 224 pixels to ensure that the data suited the model's input size. The process flow of the establishment of the capsule image dataset is shown in Figure 3. The establishment of the PSPR capsule image dataset can be divided into the following steps: 1.
First, the dataset was mixed and scrambled and separated into training, validation, and testing data at a ratio of 8:1:1; 2.
To improve the model's feature-extraction and generalization ability and avoid the problems of overfitting and low classification accuracy caused by a small sample dataset, the capsule image training set was expanded using common data expansion methods in deep learning [28], that is, horizontal mirroring, vertical mirroring, and rotation by 90, 180, and 270 degrees, respectively, as shown in Figure 2. As in Figure 1, the capsule images in Figure 2 are resized to a consistent size. The expanded training set includes 7170 Papaver somniferum capsule images and 6366 Papaver rhoeas capsule images; 3.
Finally, all image sizes were resized to 224 × 224 pixels to ensure that the data suited the model's input size. The establishment of the PSPR capsule image dataset can be divided into the following steps: 1. First, the dataset was mixed and scrambled and separated into training, validation, and testing data at a ratio of 8:1:1; 2. To improve the model's feature-extraction and generalization ability and avoid the problems of overfitting and low classification accuracy caused by a small sample dataset, the capsule image training set was expanded using common data expansion methods in deep learning [28], that is, horizontal mirroring, vertical mirroring, and rotation by 90, 180, and 270 degrees, respectively, as shown in Figure 2. As in Figure 1, the capsule images in Figure 2 are resized to a consistent size. The expanded training set includes 7170 Papaver somniferum capsule images and 6366 Papaver rhoeas capsule images; 3. Finally, all image sizes were resized to 224 × 224 pixels to ensure that the data suited the model's input size. The process flow of the establishment of the capsule image dataset is shown in Figure 3. The process flow of the establishment of the capsule image dataset is shown in Figure 3. Figure 3. Establishment of the capsule image dataset of PSPR.
Basic MobileNetV3-Small
MobileNetV3, as part of a new generation of lightweight networks, builds on Mo-bileNetV1 and MobileNetV2 by combining deep separable convolution and an inverse residual structure with a linear bottleneck to improve computational efficiency and effectively extract feature information. It uses platform-aware Neural Architecture Search [40] and Neural Network Adaptation [41] to optimize the network structure and parameters. A Squeeze-and-Excite (SE) [24] channel attention module further improves network performance and operational efficiency. Figure 4 shows the MobileNetV3 structure.
Basic MobileNetV3-Small
MobileNetV3, as part of a new generation of lightweight networks, builds on Mo-bileNetV1 and MobileNetV2 by combining deep separable convolution and an inverse residual structure with a linear bottleneck to improve computational efficiency and effectively extract feature information. It uses platform-aware Neural Architecture Search [40] and Neural Network Adaptation [41] to optimize the network structure and parameters. A Squeeze-and-Excite (SE) [24] channel attention module further improves network performance and operational efficiency. Figure 4 shows the MobileNetV3 structure. Figure 3. Establishment of the capsule image dataset of PSPR.
Basic MobileNetV3-Small
MobileNetV3, as part of a new generation of lightweight networks, builds on Mo-bileNetV1 and MobileNetV2 by combining deep separable convolution and an inverse residual structure with a linear bottleneck to improve computational efficiency and effectively extract feature information. It uses platform-aware Neural Architecture Search [40] and Neural Network Adaptation [41] to optimize the network structure and parameters. A Squeeze-and-Excite (SE) [24] channel attention module further improves network performance and operational efficiency. Figure 4 shows the MobileNetV3 structure. MobileNetV3 includes two versions: MobileNetV3-Small and MobileNetV3-Large, with similar architecture but different complexity to suit different scenarios. MobileNetV3-Small is suitable for low-performance mobile devices and embedded devices. Considering the issues of computational cost and model efficiency, we use MobileNetV3-Small as the basic framework of the PSPR classifier and improve its network structure.
Construction of Network for Papaver Somniferum Identification
We propose a P-MobileNet model based on transfer learning and a modified MobileNetV3-Small model to lower the model's data requirements while improving operational efficiency. Figure 5 shows the P-MobileNet model structure, which consists of a pre-trained MobileNetV3-Small model on the ImageNet dataset and a modified MobileNetV3-Small model. MobileNetV3 includes two versions: MobileNetV3-Small and MobileNetV3-Large, with similar architecture but different complexity to suit different scenarios. Mo-bileNetV3-Small is suitable for low-performance mobile devices and embedded devices. Considering the issues of computational cost and model efficiency, we use MobileNetV3-Small as the basic framework of the PSPR classifier and improve its network structure.
Construction of Network for Papaver Somniferum Identification
We propose a P-MobileNet model based on transfer learning and a modified Mo-bileNetV3-Small model to lower the model's data requirements while improving operational efficiency. Figure 5 shows the P-MobileNet model structure, which consists of a pretrained MobileNetV3-Small model on the ImageNet dataset and a modified MobileNetV3-Small model.
Transfer Learning
DCNNs often fail to achieve higher prediction performance with small sample datasets, they are prone to problems such as training difficulty and overfitting [9], and it is sometimes difficult to obtain a large amount of data with labels. Transfer learning is an efficient strategy to solve image classification problems with small samples [32][33][34]42,43].
There are two main approaches for applying a pre-trained DCNN to a new image classification task [9,44]. One approach, called transfer learning method 1 (TL_M1), is to freeze all the weights of the convolutional layers from the pre-trained model and use them as fixed-feature extractors [9,45,46], and fully connected layers are added and trained using the new sample dataset. The other, called transfer learning method 2 (TL_M2), is to initialize the target model using the weights of the pre-trained model and then fine-tune the network weights training on the new sample dataset [9,47,48]. The impact of transfer learning on the model will be described in detail in Section 4.3 through experiments.
Modified MobileNetV3-Small Model
MobileNetV3-Small performed well on the challenging thousand-classification task on ImageNet. As for our binary identification task, deep networks impose excessive calculation costs and affect the classification speed. Consequently, after the analysis of the network configuration, we modified the architecture of the MobileNetV3-Small network to improve efficiency without degrading performance. The kernel size of the depthwise convolution of the last bottleneck layer of the original MobileNetV3-Small model is modified from 5 × 5 to 3 × 3 to reduce the calculation and latency of feature extraction. The
Transfer Learning
DCNNs often fail to achieve higher prediction performance with small sample datasets, they are prone to problems such as training difficulty and overfitting [9], and it is sometimes difficult to obtain a large amount of data with labels. Transfer learning is an efficient strategy to solve image classification problems with small samples [32][33][34]42,43].
There are two main approaches for applying a pre-trained DCNN to a new image classification task [9,44]. One approach, called transfer learning method 1 (TL_M1), is to freeze all the weights of the convolutional layers from the pre-trained model and use them as fixed-feature extractors [9,45,46], and fully connected layers are added and trained using the new sample dataset. The other, called transfer learning method 2 (TL_M2), is to initialize the target model using the weights of the pre-trained model and then fine-tune the network weights training on the new sample dataset [9,47,48]. The impact of transfer learning on the model will be described in detail in Section 4.3 through experiments.
Modified MobileNetV3-Small Model
MobileNetV3-Small performed well on the challenging thousand-classification task on ImageNet. As for our binary identification task, deep networks impose excessive calculation costs and affect the classification speed. Consequently, after the analysis of the network configuration, we modified the architecture of the MobileNetV3-Small network to improve efficiency without degrading performance. The kernel size of the depthwise convolution of the last bottleneck layer of the original MobileNetV3-Small model is modified from 5 × 5 to 3 × 3 to reduce the calculation and latency of feature extraction. The last two 1 × 1 convolution layers, responsible for extrapolation and classification, are reduced to one layer to reduce the number of parameters. These changes significantly reduce the number of model parameters, along with the computational burden, while maintaining accuracy. Table 1 shows the network structure of the proposed P-MobileNet model.
The columns in Table 1 are as follows: (1) Input represents the feature map size input to each feature layer of MobileNetV3; (2) Operator represents the layer structure which each feature map will cross; (3) Exp size represents the number of channels after the inverse residual structure in the bottleneck rises; (4) Out represents the number of channels in the feature map after passing the bottleneck; (5) SE represents whether the SE attention mechanism is introduced at this layer; (6) NL represents the type of activation function used, HS (h-swish) or RE (ReLU); and (7) S represents the step size used for each layer structure.
Experimental Environment
The configuration used for model training and testing in this paper is as follows: Intel Core i5-10210U CPU @ 1.60 GHz/2.11 GHz; 16 GB RAM; Nvidia GeForce MX250 graphics card; Windows 10 Home Chinese version; CUDA version 10.1; and PyTorch 3.8.
Evaluation Indicators
The model was evaluated based on accuracy, precision (P), recall (R), F1, number of parameters, computation (measured using FLOPs), weight file size, and average prediction time for a single image. The task of PSPR is a binary classification problem, and we define Papaver somniferum as the positive class and Papaver rhoeas as the negative class.
Accuracy, precision, recall, and F1 are defined as follows [6,49]: Accuracy reflects the proportion of correct predictions in the entire sample; Precision reflects the proportion of samples with positive predictions that are positive; Recall indicates the proportion of all positive samples that are correctly predicted; F1 is the summed average of precision and recall [25].
Experimental Design
The MobileNetV3-Small model trained on the ImageNet dataset was selected as the basic model and P-MobileNet was the target model. Six sets of experiments were conducted, combined with three learning methods (training from scratch, TL_M1, and TL_M2) and two data expansion methods (unexpanded data and expanded data).
Specifically, training from scratch means randomly initializing the weight parameters of all layers of the model, and the capsule image dataset is used to train the model, following which the back-propagation algorithm is used to tune its weights. In TL_M1, the pre-trained model's weights are used as fixed feature extractors and the linear classifiers are trained on the new sample dataset. To clarify, since the feature extraction layer structure of P-MobileNet is not identical to that of MobileNetV3-Small (the kernel size of the depthwise convolution of the last bottleneck layer of MobileNetV3-Small is modified from 5 × 5 to 3 × 3), the weight information of this layer is not passed from the pre-trained model but is trained from scratch together with the classification layer (which is a 1 × 1 convolutional layer without batch normalization in P-MobileNet). In TL_M2, the new sample dataset is used to fine-tune all layers of the model initialized by the weights of the pre-trained model (the weight information of the last bottleneck layer from the pre-trained model is ignored, as in TL_M1). This enables the model to learn highly generalizable features from a larger sample dataset, while the features are more relevant to the new classification task.
Regarding the data expansion methods, training under unexpanded data means the model is trained using the original capsule image dataset with 2821 images, while the other is training under the expanded capsule image dataset with 14,099 images, using the data expansion method described in Section 2.
Considering the computation and training time, the batch size for both testing and training was set to eight. The Adam optimizer was used with a learning rate of 0.0001, and the maximum number of training rounds was set to 120 epochs.
Experimental Results and Analysis
After 120 training epochs, a comparison of the performance of P-MobileNet under different learning methods and data expansion methods is shown in Table 2. In addition to the accuracy, precision, recall, and F1 values of the testing set, we also calculated the standard deviation (SD) of the training loss (train_loss) and the accuracy of the validation set (val_acc) to measure the volatility of the data. Figures 6 and 7, respectively. In both cases, P-MobileNet trained from scratch had the slowest convergence rate with large fluctuations, and the loss function presented a high loss value after stabilization. The model with TL_M2 had the fastest convergence speed and lowest loss value. The accuracy of P-MobileNet trained from scratch was the lowest and fluctuated greatly. The accuracy of the model with transfer learning fluctuated less, among which the accuracy of TL_M2 was the highest. The SD of val_acc for TL_M2 under unexpanded data was decreased by 3.354 percentage points compared to that for training from scratch. The differences in the model performance between TL_M1 and TL_M2 were relatively small, but it can still be observed that P-MobileNet with TL_M2 was more advantageous than training with TL_M1. From Table 2, the F1 value of TL_M2 was more than 1 percentage point higher than that of TL_M1, which shows that P-MobileNet with TL_M2 has higher recognition accuracy and robustness. 1. Influence of different learning methods on model performance. The train_loss curve and val_acc for the three learning methods are shown in Figures 6 and 7, respectively. In both cases, P-MobileNet trained from scratch had the slowest convergence rate with large fluctuations, and the loss function presented a high loss value after stabilization. The model with TL_M2 had the fastest convergence speed and lowest loss value. The accuracy of P-MobileNet trained from scratch was the lowest and fluctuated greatly. The accuracy of the model with transfer learning fluctuated less, among which the accuracy of TL_M2 was the highest. The SD of val_acc for TL_M2 under unexpanded data was decreased by 3.354 percentage points compared to that for training from scratch. The differences in the model performance between TL_M1 and TL_M2 were relatively small, but it can still be observed that P-MobileNet with TL_M2 was more advantageous than training with TL_M1. From Table 2, the F1 value of TL_M2 was more than 1 percentage point higher than that of TL_M1, which shows that P-MobileNet with TL_M2 has higher recognition accuracy and robustness. The differences in the model performance between TL_M1 and TL_M2 were relatively small, but it can still be observed that P-MobileNet with TL_M2 was more advantageous than training with TL_M1. From Table 2, the F1 value of TL_M2 was more than 1 percentage point higher than that of TL_M1, which shows that P-MobileNet with TL_M2 has higher recognition accuracy and robustness.
These results indicate that transfer learning effectively solved the problems of low classification accuracy and slow model convergence due to a small-sample dataset.
2.
Effect of data expansion on model performance.
The train_loss and val_acc for the expanded and unexpanded datasets under three different training methods are shown in Figures 8-10. For these three different learning methods, a general phenomenon was observed, namely that the loss function of the model trained on the expanded capsule image dataset was lower and less volatile than that on the original dataset. From Table 2, for training from scratch, TL_M1, and TL_M2, the test accuracy under expanded data was 1.4, 0.7, and 0.3 percentage points higher, respectively, than that for training on the original data; the SD of val_acc under expanded data was decreased by 2.924, 1.115, and 1.429 percentage points compared to that trained on the original data, respectively, which indicated that data expansion could improve the classification accuracy and robustness of the model. trained on the expanded capsule image dataset was lower and less volatile than tha the original dataset. From Table 2, for training from scratch, TL_M1, and TL_M2, the accuracy under expanded data was 1.4, 0.7, and 0.3 percentage points higher, respectiv than that for training on the original data; the SD of val_acc under expanded data decreased by 2.924, 1.115, and 1.429 percentage points compared to that trained on original data, respectively, which indicated that data expansion could improve the cl fication accuracy and robustness of the model. trained on the expanded capsule image dataset was lower and less volatile than tha the original dataset. From Table 2, for training from scratch, TL_M1, and TL_M2, the accuracy under expanded data was 1.4, 0.7, and 0.3 percentage points higher, respecti than that for training on the original data; the SD of val_acc under expanded data decreased by 2.924, 1.115, and 1.429 percentage points compared to that trained on original data, respectively, which indicated that data expansion could improve the cl fication accuracy and robustness of the model. trained on the expanded capsule image dataset was lower and less volatile than tha the original dataset. From Table 2, for training from scratch, TL_M1, and TL_M2, the accuracy under expanded data was 1.4, 0.7, and 0.3 percentage points higher, respectiv than that for training on the original data; the SD of val_acc under expanded data decreased by 2.924, 1.115, and 1.429 percentage points compared to that trained on original data, respectively, which indicated that data expansion could improve the cl fication accuracy and robustness of the model. It could also be found that, under the model trained from scratch, data expansion had a greater promotion effect on improving the accuracy of the model and avoiding the phenomenon of overfitting than under the model with transfer learning. This was mainly due to the fact that the pre-trained model learned a large amount of knowledge on the large image dataset, weakening the role of data expansion.
In any case, the accuracy and robustness of the model were improved by different magnitudes on the expanded capsule image dataset, regardless of the learning strategy, indicating that the data expansion provided the necessary amount of data for model training and that a certain size of dataset is still necessary.
To summarize, the expanded capsule image dataset was used to train P-MobileNet with TL_M2.
Comparison of Classification Networks
To verify the effectiveness of P-MobileNet for PSPR identification, we compared various DCNNs on the self-constructed PSPR capsule image dataset (including the expanded training data, unexpanded validation data, and test data), with a total of 14099 images. Models included some representative traditional CNNs (AlexNet, GoogLeNet, ResNet-34) and popular lightweight networks. All models were trained under transfer learning. Classification results were compared in terms of accuracy, precision, recall, F1, number of parameters, FLOPs, weight file size, and average prediction time for a single image on the testing set, as shown in Table 3. Table 3 further illustrates that the traditional network models could not meet the requirements for mobile deployment because of their enormous calculations. Lightweight networks tend to have much fewer parameters and FLOPs than traditional networks, but they have comparable or even better model performance. Among the lightweight network models, SqueezeNet had the fewest parameters and smallest model size but the lowest accuracy and recall rates, 96.2% and 94.7%, respectively. ShuffleNetV2 outperformed SqueezeNet, with the smallest FLOPs of 2.28 M, but the largest number of parameters, 148.8 M. The performance of GhostNet and MobileNetV3 exceeded that of ResNet-34.
MobileNetV3 performed best. The number of parameters, amount of computation, and model size of MobileNetV3-Small were much smaller than those of MobileNetV3-Large, while they showed similar performance at PSPR classification, which further indicates the redundancy of the MobileNetV3 model for this task. Compared with MobileNetV3-Small, the recall of P-MobileNet increased by 0.8 percentage points, and the F1 value was the same, at 98.9%. However, P-MobileNet had only 36% of the parameters of MobileNetV3-Small, and it used less calculation. The model was only slightly larger than SqueezeNet, and the prediction speed was the fastest.
We compared the performance of the models based on F1 and the number of parameters, as shown in Figure 11, where the horizontal scale is the number of parameters and the vertical scale is F1. P-MobileNet had the highest F1 with the fewest parameters. We compared the performance of the models based on F1 and the number of parameters, as shown in Figure 11, where the horizontal scale is the number of parameters and the vertical scale is F1. P-MobileNet had the highest F1 with the fewest parameters. Based on these results, P-MobileNet best balanced accuracy and efficiency for the PSPR classification task, with a classification accuracy of 98.9% and an average prediction time of 45.7 ms for a single image, which is better than other tested models. Based on these results, P-MobileNet best balanced accuracy and efficiency for the PSPR classification task, with a classification accuracy of 98.9% and an average prediction time of 45.7 ms for a single image, which is better than other tested models.
Conclusions
The appearance of Papaver somniferum is similar to that of Papaver rhoeas, increasing the difficulty of its identification. Traditional methods of Papaver somniferum identification, including direct observation, physical and chemical property identification, and spectral analysis, cannot be applied to drug-related cases and Papaver somniferum identification in daily life. To solve these problems, we proposed the P-MobileNet model for PSPR classification, based on the improved MobileNetV3-Small with transfer learning.
•
Compared with training from scratch, transfer learning could fully utilize the knowledge learned on large datasets, significantly accelerated the convergence speed of the model, and improved the classification performance. Regardless of the type of transfer learning method adopted, pre-training and fine-tuning P-MobileNet had a superior impact than that obtained by training P-MobileNet from scratch. The feature extraction ability of the random initialization model was not good enough under a small sample dataset; • The impact of data expansion on the model trained from scratch was greater than that of the model with transfer learning. Data expansion enriched the diversity of data, which was helpful to mitigate overfitting and improved the classification performance of the model. Although transfer learning weakened the effect of data expansion, a certain amount of training set expansion was necessary to improve the robustness of the model; • Analysis of the classification performance of different models showed that the proposed P-MobileNet model has the advantages of high classification accuracy, a few parameters, and a fast detection speed. Compared with MobileNetV3-Small, P-MobileNet maintains a high classification accuracy of 98.9%, with only 36% of the parameters of the MobileNetV3-Small model; the FLOPs are reduced by 2 M; and the detection speed is improved to 45.7 ms/image. This study provides a means to achieve the rapid, accurate, and non-destructive identification of PSPR on mobile terminals. | 6,841.8 | 2023-03-01T00:00:00.000 | [
"Computer Science",
"Environmental Science",
"Engineering"
] |
Anomalous Sun Flyby of 1I/2017 U1 (‘Oumuamua)
: The findings of Micheli et al. ( Nature 2018 , 559 , 223–226) that 1I/2017 U1 (‘Oumuamua) showed anomalous orbital accelerations have motivated us to apply an impact model of gravity in search for an explanation. A small deviation from the 1/ r potential, where r is the heliocentric distance, is expected for the gravitational interaction of extended bodies as a consequence of this model. This modification of the potential results from an offset of the effective gravitational centre from the geometric centre of a spherically symmetric body. Applied to anomalous Earth flybys, the model accounts for energy gains relative to an exact Kepler orbit and an increased speed of several spacecraft. In addition, the flat rotation profiles of eight disk galaxies could be explained, as well as the anomalous perihelion advances of the inner planets and the asteroid Icarus. The solution in the case of ‘Oumuamua is also based on the proposal that the offset leads to an approach and flyby trajectory different from a Kepler orbit without postulating cometary activity. As a consequence, an adjustment of the potential and centrifugal orbital energies can be envisaged outside the narrow uncertainty ranges of the published post-perihelion data without a need to re-analyse the original data. The observed anomalous acceleration has been modelled with respect to the orbit solutions JPL 16 and “Pseudo-MPEC” for 1I/‘Oumuamua.
Introduction
The astronomical body designated as 1I/2017 U1 ('Oumuamua) is the first object that has been observed during its passage through the Solar System from the interstellar environment. It was detected by Robert Weryk on 19 October 2017 with the Pan-STARRS telescope system in Hawaii [1]. Observations on 30 October 2017 of the lightcurve and its variation indicated a rotation period of 'Oumuamua of more than five hours [2]. Belton et al. reported in ref. [3] rotation periods near four and nine hours combined with an excited spin state of 'Oumuamua, cf. also ref. [4][5][6][7]. The findings by Micheli et al. [8] that 'Oumuamua's path deviates from a Kepler orbit resulted in a number of publications with different proposals to account for the additional acceleration. Cometary activity and the recoil resulting from outgassing would be a natural explanation, and is, indeed, "the most plausible physical model" [8]. However, even with long equivalent exposure times no cometary coma or tail activity could be detected [2,9]. The inactivity was also confirmed by Spitzer observations on 21 November 2017 [10], and Katz expressed in ref. [11] "... skepticism of the reported non-gravitational acceleration." Thus important problems remain, because many other observations showed neither cometary activity nor was any meteor activity detected on Earth, cf. [8,[12][13][14][15]; in addition, the shape, consistence and origin of 'Oumuamua are still debated, cf. [16][17][18][19][20][21][22][23][24][25][26][27][28][29][30][31][32].
Solar radiation pressure could produce an excess radial acceleration with a 1/r 2 dependence, where r is the heliocentric distance, if the object has a large surface and a very low mass according to refs. [23,33]. The authors of the latter reference even suggest as a possibility that a lightsail could be of "artificial origin". However, McNeil et al. [34] estimated a density of approximately 2 000 kg m −3 for 'Oumuamua, a typical value for asteroids, cf. [35]. A density of this amount is hardly consistent with a lightsail. Radio SETI observations detected no signal [36].
The observation of an interstellar object in the Solar System raises the question with regard to where it might have come from. Potential sources have been indicated by [37][38][39], although the authors of ref. [37] point out that non-gravitational accelerations on the outbound path pose severe difficulties in determining the approach geometry.
In view of the many open questions concerning the nature of 'Oumuamua, we discuss in the following sections, whether a modification of the 1/r gravitational potential and possible consequences could provide an answer to the anomalous Sun flyby of 1I/2017 U1 ('Oumuamua). Deviations from expected trajectories of artificial spacecraft had previously been deduced from Earth flyby observations, which were also not fully consistent with Newton's theory of gravity based on a potential exactly proportional to the inverse distance [40][41][42][43][44]. Many studies have been performed to solve this problem, e.g., in refs. [45][46][47][48][49].
We suggested that the interaction between gravitating bodies is affected by mass conglomerations according to an impact model of gravity [50,51]-based on the ideas of Nicolas Fatio de Duillier [52][53][54]. It results in a modification of the 1/r potential U for extended bodies, such as the Sun and the Earth. This could qualitatively explain the anomalous energy gain during Earth flybys [55]. Another application of the graviton impact model in the context of large masses could demonstrate a physical process to explain the anomalous rotation curves of disk galaxies [56,57]. The model has also successfully been applied to explain the secular perihelion advances of the inner planets and the asteroid Icarus with the result that an offset of ρ = (4 400 ± 500) m in the instantaneous direction to an orbiting body is required to explain the observed perihelion advances [58].
In this paper, we will consider our impact model for the gravitational interaction of 1I/2017 U1 ('Oumuamua) with the Sun. Since the Sun is not a gravitational point source, but has a nominal radius of 1 R N = 6.957 × 10 8 m, cf. [59]. The effective gravitational centre is, therefore, not expected to coincide with the geometric centre -even if a spherical symmetry of the Sun is assumed -but is situated on a sphere with radius ρ around the centre, cf. [51]. We had thought that the observed anomalous acceleration of 'Oumuamua might also be a consequence of this offset. The calculations in the following sections 1 show, however, that the effect of the energy gain with reasonable values of ρ is too small to cause directly the unexpected acceleration, because of the large radial distances involved.
An indirect approach can, however, be pursued to adjust the orbit calculations. It still depends on the anomalous energy gain near perihelion, because it allows us to question the narrow ranges of the published observational uncertainties presented in Table 1 and to define adjusted trajectories outside these ranges which yield the required accelerations with respect to the expected motion of 'Oumuamua. It is to be noted here that we do not attempt to analyse original data, instead we use the reported orbit characteristics. 1 Most of the formulae needed are taken from Landau and Lifshitz in ref. [60]. Their original equation numbers are added in square brackets. The equations have only been modified to comply with the specific nomenclature used here. For instance, the effective potential energy Equation (3) originally reads [15.2]
Modification of the Gravitational Potential
The reduced mass m of the two-body system Sun-'Oumuamua can be approximated by the mass m Ou of 'Oumuamua according to [13.4] All calculations are, however, performed for a normalized reduced mass of m = 1 kg leading to specific quantities, such as, the specific angular momentum M and the specific approach energy E ∞ in the next sections. The use of these specific quantities eliminates the unknown mass of the attracted body and simplifies all equations. The gravitational potential of the system can then be written as a specific quantity and would read under the assumption of the Sun as a point source: where ε = 1.327 124 4 × 10 20 m 3 s −2 is the nominal solar mass parameter [59] and r = |r| is the length of the heliocentric position vector. According to CODATA 2018, the gravitational constant is G N = (6.674 30 ± 0.000 15) × 10 −11 m 3 kg −1 s −2 and gives a solar mass of M = 1.988 409 × 10 30 kg. The specific effective potential energy equation then becomes where the last term is the specific centrifugal energy. Equation (2) has to be modified for bodies with large mass values, e.g., for the Sun, according to our gravitational interaction model. If the distance of the orbiting body from the Sun is always much greater than 1 R N , the displacement ρ of the effective gravitational centre mentioned in Section 1 can be assumed to be parallel to the radius vector r in the direction of the orbiting body and, consequently, the specific potential energy will be and can be approximated with ρ r by The physical process causing the offset ρ , described in refs. [50,51], is that multiple interactions of the gravitons with the massive body occur before they can escape .
Observations of 'Oumuamua
The published orbit characteristics of 'Oumuamua considered in this study are compiled as data arcs No. 1 to 4 in Tables 1 and 2 from the corresponding references. This selection of different solutions out of many more evaluations available in the literature is listed to show the history of the observations and the resulting orbital parameters.
We suggest that a modification of the gravitational potential U (r) in Equation (2) to U mod is required as given in Equation (4) for an adequate evaluation of the data arcs. A first attempt is made in Section 2.3 by varying the offset ρ without changing E ∞ of orbit No. 4 in Table 2. Although it will turn out that the effect with reasonable offset values is very small for 'Oumuamua, the deviation from an exact 1/r potential justifies to scrutinize in Section 2.4 the tight uncertainties given for the orbit parameters in Table 1. These uncertainties determined from data arcs starting more than a month after the perihelion passage might not be representative for the modified potential. An indication of this expectation is that the orbit determinations No. 1 to 4 each exceed the uncertainty limits of e and q of the previous solution. For No. 3, the following statement 2 , related to the anomalous trajectory, supports this prospect: "The behavior of these accelerations outside the observed data arc from 14 October 2017 to 2 January 2018 can only be assumed. Predictions outside this time interval, particularly prior to October 2017, could be much more uncertain than reported here." Bailer-Jones et al. in [37] also point out that it is challenging to estimate the inbound leg of the trajectory under these conditions.
From the published eccentricity values e and perihelion distances q of the orbits in Table 1 (rounded to significant digits), the other quantities have been calculated under the assumption of hyperbolic Kepler motions: The "semi-axis" a of the hyperbola, the semi-latus rectum p and the specific approach energy E ∞ with the specific angular momentum and the eccentricity with the approach speed at infinity with and the impact parameter with Eccentricities: e, e Ou and e adj ; perihelion distances: q and q adj ; 1 σ uncertainties of e and q ; "semi axes" a and a adj calculated with Equations (6)a and (20)b; the astronomical unit 1 au = 149 597 870 700 m [61]. c Assuming a realistic offset ρ and an unrealistic large one. Since E ∞ and the flyby distance are assumed to be constant, the "semi axis" a and the perihelion q have not changed either; violating Equation (6) (6)c Equation (7)a Equation (6)b Equation (8) Equation (9) Equation (12)
Orbit Modification by Offset ρ Ou
The laws of conservation of energy and angular momentum determine the motion of a body in a central gravitational configuration: where dr/dt =ṙ is the radial velocity. This equation describes a Kepler orbit with an unmodified Equation (2). There is no radial velocityṙ peri at perihelion, but the highest tangential speed V peri . For a perihelion distance q it is: With the modified potential U mod of Equation (4) the Equation (10) reads: It should be noted that the denominator of the potential energy term depends on ρ Ou for the modified orbit, whereas the one of the centrifugal energy term does not, because the angular momentum is defined about the centre of the solar mass. A comparison with the Kepler orbit in Equation (10) can best be made at perihelion q, whereṙ peri = 0. From then follows an exact solution for M mod using a constant E ∞ : which is a constant of the motion. Since at the start of the data arcs at a radial distance r 1 ρ Ou (cf. Equation (17) and Table 3) and beyond the modified orbit can be approximated by a Kepler orbit, we use Equation (7)b with M mod to estimate the modified eccentricity e OU for the orbit calculations in Section 2.4. The quality of the approximation can be checked after an estimate of ρ Ou has been obtained (as listed in Tables 1 and 3). The initial conditions at r 1 can be defined by requiring r 1 = r Ou 1 andṙ 1 ≤ṙ Ou 1 , where the radial velocity can only be approximated, because a constant E ∞ leads to a constantṙ ∞ mod with Equation (12). This equation will be evaluated in Section 2.4 3 using the specific angular momentum M mod obtained in Equation (14) with varying assumptions for ρ Ou until a reasonable anomalous acceleration fit has been achieved in Figure 1.
As can be seen from Table 3, an offset of 4 900 m had only a minor effect on the anomalous accelerations and thus the effect is not sufficient to explain directly the anomalous acceleration found in ref. [8] for 'Oumuamua: "... which corresponds to a formal detection of non-gravitational acceleration with a significance of about 30 σ." This detection allows a range of accelerations between A 1 /(r/au) 2 and A 1 /(r/au), "where A 1 is a free fit parameter" with a value of "A 1 = (4.92 ± 0.16) × 10 −6 m s −2 ".
Since we see no reason as to why the solar flyby of 'Oumuamua the offset valid for the perihelion advances of the inner planets should be outside the range ρ = (4 400 ± 500) m , we will use its maximum in the next section. The specific potential energy gain at perihelion in Equation (14) would be ε ρ /q 2 = 444 m 2 s −2 and the specific centrifugal energy is also increased by that amount 4 .
Orbital Parameter Adjustments
As outlined in Section 2.2, the uncertainties in the orbital parameters might be much larger than given in Table 1. In particular, we suspect that the relation between the specific approach energy in Equation (6)c and the specific angular momentum in Equation (7)a is not adequately constrained and, therefore, adjusted quantities outside the narrow limits can later be used in Equation (17). In Table 2, the energy and angular momentum quantities are given in detail together with other supplementary data.
The maxima of E ∞ and M have been obtained from the extreme values of e and q of the orbit solutions No. 3 and 4. Taking the above considerations into account, we argue that we are justified to increase the specific approach and centrifugal energies of the adjusted orbits beyond the maximum 1 σ values in Table 2 and compare them with orbits No. 3 and 4 in an attempt to model the anomalous accelerations on the data arcs. The details of the adjustment can only be estimated by a trial and error method to be explained below, after the specific approach energy had been increased by a certain amount ∆E to E adj = E ∞ + ∆E . 3 The physical process leading to the unexpected acceleration can be demonstrated by a comparison of Equations (10) and (12) at the heliocentric distance r 1 (the start of a data arc), where both radial velocities are assumed to be equal, and at a greater distance r 2 = r 1 + ρ . This implies: To find the specific effective potential energy change in a Kepler orbit and a modified one, we compare A lengthy calculation then shows that the decrease in the specific effective potential energy in the modified orbit is smaller than in the Kepler orbit by 2 ε ρ 2 /[r 1 (r 2 1 − ρ 2 )]. Although the variation of the specific angular momentum in the adjusted orbit equation is, in principle, not dependent on that of the specific energy, we feel that the observations reflected in the orbit parameters No. 3 and 4 might provide some constraints: (1) Assuming that the heliocentric distance r of 'Oumuamua and its radial velocityṙ could best be established at the beginning of the observations, we require that the initial conditions agree for orbit No. 4 and the corresponding adjusted trajectory. By equating Equations (10) and (17) the same radial velocityṙ 1 =ṙ adj 1 is valid for both trajectories at this distance. It can be seen that and that the adjusted specific angular momentum as function of the fit parameter ∆E is cf. Equation (17). It is important to note that the perihelion distance has to change accordingly. As there are no observations available for q, this cannot be checked and lead to a potential conflict.
(2) For the adjustment of orbit solution No. 3, we refer to a statement by Micheli et al. in the appendix of [8]: METHODS, Section Non-gravitational models. We interpret it to mean that "the non-gravitational acceleration on 'Oumuamua on October 25 at" r = 1.4 au was 2.7 × 10 −6 m s −2 . Assuming that the radial velocity was well defined there by solution No. 3, we use a heliocentric distance r = 1.4 au = 2.094 370 2 × 10 11 m instead of r 1 in Equation (19).
With corresponding values for E adj and M adj , the adjusted (or modified) orbit characteristics in Table 1 would be defined by cf. Equations (7)b and (6)a,c. In Section 3 the quality of the approximations will be considered.
In the next step, we have to establish the time dependence of the hyperbolical motions on the trajectories with parametric equations for the time t, the heliocentric distance r and the heliocentric x and y coordinates given in ref. [60] ( §15, p. 38) 5 : where the parameter χ varies from −∞ to +∞ for a complete flyby. The equations have to be applied for orbit characteristics e and a in Table 1, as well as for e adj and a adj , cf. Equations (20)a,b, both for orbits No. 3 and 4 under consideration. The same procedure will be used to obtain an estimate of e Ou as a function of ρ or ρ Ou . As listed in Table 3, the time t 1 -the start time of the observations in seconds after the perihelion passage-should correspond to χ{0} and t 2 , the end of the observations to χ{100} for orbits No. 3 and 4. The formalism allows us to calculate the radial velocitiesṙ in Equation (10) for both data arcs between t 1 and t 2 at times t{i} and positions r{i} for equidistant values of the parameters χ{i} (i = 0 to 100). This procedure yields 101 data points and 100 intervals in Equations (22) and (23) (A test with more intervals did not substantially improve the calculations). The same calculations have to be done for the modified and adjusted configurations of Equations (17) and (12) The adjusted accelerations are Figure 1 over the heliocentric distance r for a comparison with the A 1 /(r/au) and A 1 /(r/au) 2 fits. Table 3. Boundaries of the trajectory calculations and anomalous acceleration results. Shaded areas show the uncertainty ranges of the radial "non-gravitational acceleration" terms A 1 /(r/au) (dashed-dotted line) and A 1 /(r/au) 2 (dashed line), cf. [8]. The residual radial accelerations ∆A adj 3 (r{i}) and ∆A adj 4 (r{i}) as differences between orbit solutions No. 3 and 4 and the corresponding adjusted trajectories are plotted as triangles and plus symbols, respectively, cf. Equation (26). Increases ∆E 3 and ∆E 4 of the specific energies provided the best overlap of the adjusted graphs with the A 1 -terms and their uncertainty ranges. The diamond symbol indicates the anomalous radial acceleration 2.7 × 10 −6 m s −2 at r = 1.4 au (ref. [8]). An offset ρ = 100 000 km would result in a residual radial acceleration ∆A Ou 4 shown as a solid line.
Start of Observation
The task at hand now is to find specific energy increments ∆E 3 and ∆E 4 or offset values ρ Ou that lead to additional radial accelerations relative to orbits No. 3 and 4 in reasonable agreement with the range of "non-gravitational acceleration" found by Micheli et al. in ref. [8]. The results with different assumptions for ∆E have been compared by many iterations with this range until acceptable fits with ∆E 3 = +2.9 × 10 5 m 2 s −2 (for No. 3) and ∆E 4 = +4.5 × 10 5 m 2 s −2 (for No. 4) were achieved as shown by the functions ∆A adj 3,4 . The relative increase required in Equation (15) thus is ≈ 0.1%.
Discussion
Since the expectation outlined in the introduction that an offset of the gravitational centre of the order of several kilometers would provide a direct explanation of 'Oumuamua's anomalous radial acceleration could not be substantiated, we pursued an indirect approach and proposed that the deviations from a Kepler orbit permitted us to assume wider uncertainty margins than listed in Table 1 for the orbit solutions No. 3 and 4 until reasonable fits could be produced. As shown in Figure 1, it was possible to obtain appropriate additional acceleration values covering the uncertainty range of ref. [8], but with a steeper decrease as a function of radial distance.
The offset values required to achieve such a fit with constant E ∞ and perihelion q are far outside the range in line with the anomalous perihelion data mentioned in the introduction. The corresponding calculations were, nevertheless, helpful in showing that -even for large offset values -the deviations of the modified orbits from Kepler orbits on the data arcs is so small that the approximation in Equation (20) is very good. In Equation (12), for instance, the correction factors for the potential energy term would be r 1 /(r 1 − ρ ) = 0.999 999 97 and (r 1 − ρ Ou )/r 1 = 0.999 392 82 at the start of the observations. The orbits No. 1 and 4 together with the adjusted trajectories during the observation times have been plotted in Figure 2 with the help of Equations (21)c and (21)d. In addition, the trajectories have been traced back to their perihelia by decreasing χ in equidistant steps to zero. It can be seen that orbit No. 4 can barely be distinguished from the adjusted path on this scale. The main difference is indeed the additional acceleration and, consequently, a higher radial velocity at the end of the data arc combined with a greater radial distance. We find from Table 3 The adjusted specific energy E adj and the angular momentum M adj in Table 2 correspond to an increase in V ∞ of 10.98 m s −1 for No. 3 and 17.04 m s −1 for No. 4 (relative variations 0.041 6% and 0.064 5%, respectively). The impact parameters h experienced relative increases of ≈ 0.072 1% (No. 3) and ≈ 0.038 1% (No. 4). The adjusted energy and angular momentum quantities are greater than the maximal values E max and M max allowed within the 1 σ-limits listed in Table 1. However, this should not necessarily be seen as a conflict, because of the modified solar gravitational potential in Equation (4). As a consequence of the offset ρ , the trajectory of 'Oumuamua is not an exact Kepler orbit and wider uncertainty ranges can be expected. The arguments presented in Section 2.2 also support this conjecture.
Conclusions
The observed anomalous acceleration has been modelled with respect to the orbit solutions JPL 16 and "Pseudo-MPEC" for 1I/'Oumuamua. with the assumption that the trajectory of 'Oumuamua is not an exact Kepler orbit allowing us to assume angular momentum and approach energy values M adj and E adj outside the published uncertainty ranges.
In conclusion, it appears that the observed anomalous acceleration of the interstellar asteroid 1I/2017 U1 ('Oumuamua) can be modelled without the assumption of any cometary activity. | 5,703.6 | 2020-12-07T00:00:00.000 | [
"Physics",
"Geology"
] |
Fast and reliable production, purification and characterization of heat-stable, bifunctional enzyme chimeras
Degradation of complex plant biomass demands a fine-regulated portfolio of glycoside hydrolases. The LE (LguI/Eco81I)-cloning approach was used to produce two enzyme chimeras CB and BC composed of an endoglucanase Cel5A (C) from the extreme thermophilic bacterium Fervidobacterium gondwanense and an archaeal β-glucosidase Bgl1 (B) derived from a hydrothermal spring metagenome. Recombinant chimeras and parental enzymes were produced in Escherichia coli and purified using a two-step affinity chromatography approach. Enzymatic properties revealed that both chimeras closely resemble the parental enzymes and physical mixtures, but Cel5A displayed lower temperature tolerance at 100°C when fused to Bgl1 independent of the conformational order. Moreover, the determination of enzymatic performances resulted in the detection of additive effects in case of BC fusion chimera. Kinetic measurements in combination with HPLC-mediated product analyses and site-directed mutation constructs indicated that Cel5A was strongly impaired when fused at the N-terminus, while activity was reduced to a slighter extend as C-terminal fusion partner. In contrast to these results, catalytic activity of Bgl1 at the N-terminus was improved 1.2-fold, effectively counteracting the slightly reduced activity of Cel5A by converting cellobiose into glucose. In addition, cellobiose exhibited inhibitory effects on Cel5A, resulting in a higher yield of cellobiose and glucose by application of an enzyme mixture (53.1%) compared to cellobiose produced from endoglucanase alone (10.9%). However, the overall release of cellobiose and glucose was even increased by catalytic action of BC (59.2%). These results indicate possible advantages of easily produced bifunctional fusion enzymes for the improved conversion of complex polysaccharide plant materials. Electronic supplementary material The online version of this article (doi:10.1186/s13568-015-0122-7) contains supplementary material, which is available to authorized users.
Introduction
As the major component of the plant cell wall, cellulose is the most abundant renewable biomass resource on earth. Due to the severe continuous depletion of crude oil and constant emission of greenhouse gases, effort has been taken to establish sustainable production of bioethanol from lignocellulose. In addition to efficient pretreatment methods to separate lignin, hemicellulose and cellulose, the efficient degradation of the latter polysaccharides into fermentable monosaccharide sugars, by the synergistic action of enzymes, is a bottleneck in lignocellulose conversion (da Costa Sousa et al. 2009;Hu et al. 2013). However, the economic commercialization of lignocellulosic biorefinery approaches is mainly hindered by the large costs to produce functional and stable biocatalysts for polysaccharide decomposition (Bornscheuer et al. 2014). The complex structure of lignocellulose is the major impediment to its degradation and requires the use of a portfolio of cellulases: endoglucanases (EC 3.2.1.4) randomly cleave internal β-1,4-glycosidic linkages, while cellobiohydrolases (EC 3.2.1.91) produce cellobiose by hydrolyzing chain ends of oligo-and polysaccharides, and finally, β-glucosidases (EC 3.2.1.21) produce glucose from cellobiose (Klippel and Antranikian 2011).
So far, most cellulases that exhibit enzymatic properties for industrial applications were isolated and Open Access *Correspondence<EMAIL_ADDRESS>Institute for Technical Microbiology, Hamburg University of Technology, Kasernenstr. 12, 21073 Hamburg, Germany characterized from wood-degrading fungi or mesophilic Bacteria (Kuhad et al. 2011). However, harsh industrial conditions certainly presume the exploitation of further enzyme sources. Moreover, conventional isolation and application techniques reached their limits in recent years, resulting in the development of versatile molecular biology techniques to engineer tailored biocatalysts (Bornscheuer et al. 2012). These candidates are being designed to overcome main drawbacks including limits in enzymatic specificity, efficiency and thermal instability.
The increasing demand for active biocatalysts capable of catalyzing the conversion of cellulose at elevated temperatures or in the presence of solvents allows the reasonable application of enzymes that are isolated from extremophilic microorganisms, so called extremozymes . Such Bacteria and Archaea thrive in the harshest places on earth like hot springs, sea ice, solfataric fields and the deep sea and represent a treasure chest of industrially applicable biocatalysts encoded in their genomes. Due to cultivation limits, metagenomic analyses have greatly facilitated the identification of cellulases and additional biocatalysts from extremophiles (Chow et al. 2012;Graham et al. 2011;Schröder et al. 2014).
Suitable candidates with comparable biochemical properties can be used for fine-regulated processes to efficiently degrade plant-material for the production of monosaccharides. To increase coupled catalytic action of single enzymes, several strategies were tested including enzyme cocktails, artificial cellulosomes and fusion enzymes (Bülow et al. 1985;Elleuche 2015;Morais et al. 2012;Resch et al. 2013;Rizk et al. 2012). In this context, end-to-end gene fusion has been proven to be a competent method for the construction of lignocellulose degrading bi-and multifunctional enzymes (Adlakha et al. 2012;Fan et al. 2009a;Hong et al. 2007;Kang et al. 2015;Lee et al. 2011). Using this method, a polypeptide is capable of catalyzing two or more distinct reactions. Thus, the number of enzymes that have to be produced will be minimized.
In a previous study, a highly active endoglucanase, Cel5A, from the thermophilic anaerobic bacterial species Fervidobacterium gondwanense DSM 13020 was shown to tolerate its fusion to another protein either at its N-or C-terminus without loosing its catalytic ability to degrade β-1,4-linked cellulosic materials (Rizk et al. 2015). In this study, F. gondwanense Cel5A and a β-glucosidase from a hydrothermal spring metagenome exhibiting comparable heat-stable and heat-active properties were fused in both orientations. Detailed characterization of fusion constructs and equal mixtures of parental enzymes showed that the close proximity is advantageous. However, improved performance was only detectable in one orientation that is superior over the other.
Strains and culture conditions
Escherichia coli strain NovaBlue Singles™ (Merck KGaA, Darmstadt, Germany) was used for plasmid propagation and maintenance and E. coli M15[pREP4] (Qiagen, Hilden, Germany) was the host for heterologous expression of cellulase-encoding genes and for production of bifunctional fusion proteins. Antibiotics ampicillin (100 µg/ml) and kanamycin (50 µg/ml) was supplemented to Luria Bertani (LB) for selection of plasmids. Protein production took place in a 1.2-l fed-batch fermentation culture at 37°C in medium prepared as described elsewhere (Horn et al. 1996). Gene expression was induced by the addition of 1.0 mM isopropyl-β-d-1thiogalactopyranoside (IPTG) when an optical density OD 600 = 25-30 was reached. After 4 h of incubation at a constant temperature of 37°C, cells were harvested by centrifugation resulting in an average wet weight of 80-90 g. Cell pellets were frozen at −80°C for storage and further used for purification approaches.
SDS-PAGE and western blot and zymograms
A 12% sodium dodecyl sulphate-polyacrylamide gel electrophoresis (SDS-PAGE) was applied to separate proteins produced in E. coli. Visualization was achieved either using Coomassie Brilliant Blue staining or Pierce® Silver Stain Kit (Life Technologies, Darmstadt, Germany). The tool "Compute pI/Mw" from ExPASy Proteomics Server (http://www.expasy.org) was used to calculate molecular weights of proteins for their identification on SDS-PAGEs. A semidry western blotting system was used to transfer proteins from SDS-PAGE to Roti®-PVDF membrane (Roth, Dautphetal-Buchenau, Germany) and a combination of His-Tag® Monoclonal Antibody/Goat Anti-mouse FgG AP conjugate or Strep-Tag® II Monoclonal Antibody/Goat Anti-mouse FgG AP conjugate enabled detection of tagged proteins on the membrane with the His-tag® AP Western Reagents Kit (Merck KGaA, Darmstadt, Germany). In-gel activity assays (Zymograms) were done using carboxymethyl-cellulose (CMC) to measure activity of endoglucanase (i) and esculin for β-glucosidase (ii). Moreover, side activity (β-galactosidase) of the latter enzyme was determined with 5-bromo-4-chloro-3-indolyl-β-d-galactopyranoside (X-gal) (iii). (i) In case of endoglucanase, a thin layer of 2% (w/v) agar-agar containing 0.1% (w/v) CMC in 10 mM Tris-HCl, pH 7.5 was prepared. Samples were heated at 70°C for 5 min or 98°C for 10 min and run on SDS-PAGE. Denaturating agents were washed out in 2 × 30 min in 10 mM NaPO 4 , pH 7.0 mixed with 25% (v/v) isopropanol, followed by two washing steps for 10 min in 10 mM NaPO 4 , pH 7.0. Afterwards, the gel was incubated on top of the 0.1% (w/v) CMC-containing agar layer and incubated for 90 min at 70°C. CMCagar was stained for 20 min using 0.1% (v/v) congo-red and washed for 5 min using 1 M NaCl to detect endoglucanase activity as halos. (ii) In case of β-glucosidaseactivity, samples were prepared according to (i) and separated on a SDS-PAGE followed by a 1 min washing step in A. dest, 60 min washing in 1% (v/v) Triton X-100 and another step for 1 min in A. dest. Afterwards, the gel was incubated with 0.1% (w/v) esculin hydrate and 0.01% (w/v) ammonium ferric citrate in 10 mM NaPO 4 , pH 7.0 for 60 min at 70°C. (iii) To detect activity towards X-gal, samples were prepared and washed with A. dest and Triton X-100 according to (ii) and incubated for 60 min at 70°C in 10 mM NaPO 4 , pH 7.0 containing 0.3 mg (w/v) X-gal and 1% (v/v) dimethylformamide.
Protein activity assays
Plate assays were done by growing E. coli M15[pREP4] expressing single or fusion genes on LB-medium supplemented with ampicillin, kanamycin and 0.1 mM IPTG. Cells were grown over night at 37°C, before overlaying with AZCL-dye coupled HE-cellulose (Megazyme, Bray, Ireland) containing agarose for identification of endoglucanase-activity or 1.2% screening-agarose (50 mM sodium acetate, 2.5 mM CaCl 2 × 2 H 2 O, 170 mM NaCl, 2.5 mM esculin hydrate, 0.4 mM ammonium ferric citrate) for β-glucosidase activity, respectively, and incubated at 70°C. Protein concentrations were determined according to the assay developed by Bradford (1976).
Enzymatic activity towards cellulosic polysaccharides was quantified spectrophotometrically at 546 nm using the 3,5-dinitrosalicylic (DNS) acid-assay (Bailey 1988). The release of reducing sugar ends was measured using the carbohydrate substrate β-glucan (barley, low viscosity, Megazyme, Bray, Ireland) as model substrate. A standard reaction sample was composed of a mixture containing enzyme and 0.5% (w/v) β-glucan incubated in 10 mM NaPO 4 buffer, pH 7.0. Reaction took place in 10 min at 80°C. The amount of enzyme needed to catalyze the release of 1.0 µmol of reduced sugar ends per minute was defined as one unit. Enzymatic activity of β-glucosidase was measured towards 2 mM of 4-nitrophenyl-β-dglucopyranoside (4-NP-β-d-GP) in 10 mM NaPO 4 buffer, pH 7.0 by incubation for 10 min at 80°C under optimal conditions. Subsequently, activity assay was stopped by addition of 10 mM Na 2 CO 3 and incubation on ice. The optical density was determined at OD 410 . One unit of catalytic activity was defined as the amount of enzyme needed to release 1 µmol 4-nitrophenol per min under optimal conditions. To investigate proper reaction conditions, experiments were carried out between 20 and 100°C and pH 2.0 and 11.0. Heat stability tests were done by incubation of enzyme samples in 10 mM NaPO 4 buffer, pH 7.0 at different temperatures between 64 and 90°C in a TGradient cycler (Biometra, Göttingen, Germany). Afterwards, samples were cooled down on ice, before they were applied in standard measurements. High performance liquid chromatography (HPLC) was used to determine oligosaccharides, cellobiose and glucose resulting from β-glucan and cellobiose degradation by cellulases. A 1260 Infinity LC system equipped with Hi-Plex Na and Hi-Plex H columns and with a R i -detector from Agilent was applied with MilliQ-water used as mobile phase at a flow rate of 0.3 ml/min (Hi-Plex Na column) and 0.6 ml/min (Hi-Plex H column), respectively (Agilent, Waldbronn, Germany).
Generation of bifunctional fusion enzymes
The aim of the study was to genetically link a gene (cel5A) encoding a glycoside hydrolase family 5 endoglucanase from extreme thermophilic bacterium F. gondwanense to an open reading frame (bgl1) encoding a glycoside hydrolase family 1 β-glucosidase from a hydrothermal spring metagenome (Klippel 2011;Schröder et al. 2014). To express single and fusion genes under identical conditions, open reading frames were ligated into vector pQE-30-LE harbouring HIS-and STREP-tag encoding sequences separated by a sequence for a -Gly-Ser-Ser-Ser-Gly-linker derived from an overlapping LE-restriction site (LguI and Eco81I) (Marquardt et al. 2014). A cloning approach using restriction endonucleases LguI and/or Eco81I enabled a step-by-step continuous ligation of two fragments in vector pQE-30-LE. Restriction/ ligation of acceptor plasmid pQE-30-LE::cel5A led to the creation of cel5A-bgl1 and bgl1-cel5A fusions. Expression of genes resulted in the production of proteins flanked with a HIS-tag at the N-terminus and a STREP-tag at the C-terminus. Linker amino acid residues were -Gly-Ser-Ser-Ser-between HIS-tag and N-terminal moiety and -Ser-Ser-Gly-between C-terminus of the target protein and STREP-tag. Two proteins in a fusion construct were separated by an additional -Ser-Ser-linker peptide.
Expression of cloned genes was tested on plate-assays using AZCL-HE-cellulose containing plates for endoglucanase activity and esculin-containing plates to detect β-glucosidase activity. E. coli M15[pREP4] harbouring plasmid pQE-30-LE::cel5A displayed endoglucanasespecific activity, while maintenance of plasmid pQE-30-LE::bgl1 resulted in activity towards esculin. As expected, expression of both fusion constructs from plasmids pQE-30-LE::CB and pQE-30-LE::BC led to the hydrolysis of AZCL-HE-cellulose and esculin, respectively ( Figure 1). Mutation of both catalytic glutamate residues, E294A in Cel5A and E395G in Bgl1, revealed functionality of each enzyme as fusion partner in both orientation variants (Additional file 1: Figure S1).
Production and purification of enzymes
Expression strain E. coli M15[pREP4] harbouring plasmid derivatives of pQE-30-LE encoding for single or fusion genes were grown in high-cell density fermentation and crude extracts were prepared for protein isolation. Two purification steps using double-tag properties enabled the separation of fusion proteins including the ability to control the separation from degradation products containing single tags. Production and purification of single constructs HIS-Cel5A-STREP and HIS-Bgl1-STREP displayed the potential of two-step affinity chromatography of proteins that were produced with the pQE-30-LE system in E. coli. Both proteins were produced in recombinant form and purified in small scale (Additional file 1: Figure S2). The double-tagged variant of Cel5A was purified to apparent homogeneity (Additional file 1: Figure S2A), while a purification approach using Bgl1 led to the detection of an additional, smaller HIS-tag protein (Additional file 1: Figure S2B). It might be possible that a proteolysed, truncated HIS-tagged Bgl1 variant without STREP-tag interacts with full-length Bgl1 and therefore remained at low amount in the eluate after Strep-Tactin Superflow® purification. The recent observation that Bgl1 forms a tetramer under native conditions supports this hypothesis .
To produce endoglucanase/β-glucosidase fusion constructs, E. coli M15[pREP4] was transformed either with plasmid pQE-30-LE::CB or pQE-30-LE::BC, respectively. SDS-PAGE and western blotting analyses of crude extracts and samples from purification steps revealed the identification of fusion proteins of an approximate mass of 100 kDa, which is in good agreement with the predicted molecular weights (98.4 kDa) of the fulllength proteins (HIS-tag-1.3 kDa, STREP-tag-1.1 kDa, Cel5A-38.7 kDa, Bgl1-57.3 kDa). Interestingly, impurities exhibiting STREP-tagged degradation products were observed when Bgl1 was produced as C-terminal fusion partner (Figure 2a), while HIS-tagged truncated protein variants remained in NiNTA-purified fractions in fusion proteins with Bgl1 being at the N-terminus ( Figure 2b). However, in case of HIS-Cel5A-Bgl1-STREP purification, STREP-tagged degradation products are not completely washed out during affinity chromatography using NiNTA-agarose. These results indicate that truncated Bgl1-STREP constructs probably interact with HIS-Cel5A-Bgl1-STREP proteins, thereby even remaining in the final eluate ( Figure 2a). Moreover, a HIS-Bgl1 degradation construct is detectable in NiNTA-eluate when crude extract from HIS-Bgl1-Cel5A-STREP producing cells is applied onto NiNTA-agarose containing column (Figure 2b). Utilization of an additional STREPtag purification step did not result in complete separation of the fusion protein from the degradation product, probably again due to Bgl1-multimerization (data not shown). Therefore, NiNTA-purified samples were heated up to 70°C for 10 min prior to application on Strep-Tactin Superflow® column resulting in efficient purification of HIS-Bgl1-Cel5A-STREP (Figure 2b).
Zymograms using substrates for endoglucanase (CMC) and β-glucosidase (Esculin) were applied to investigate functionality of purified constructs (Figure 3). Activity towards CMC clearly indicates the functionality of endoglucanase, when produced as single protein and in both orientations as fusion enzyme. Fusion constructs
HIS STREP bgl1
Cellulose Esculin Activity + --+ + + + + Figure 1 Qualitative analyses of enzymatic activity towards endoglucanase-and β-glucosidase-specific substrates. Graphic illustrations indicate the structure of single and fusion proteins, while plate assays display catalytic activity of mono-and bifunctional cellulases. AZCL-HE-cellulose and esculin were used as respective substrates for endoglucanase, and β-glucosidase. and β-glucosidase alone displayed activity towards esculin and side-activity towards X-Gal, indicating functionality and the capability to be renaturated in SDS-PAGE. However, catalytic activity was detected only after sample heating at 70°C for 5 min prior to gel loading, while heating at 98°C for 10 min in sample buffer resulted in complete inactivity and loss of refolding ability. As a control experiment, purified β-glucosidase was loaded on a SDS-PAGE after heating at 70 or 98°C, respectively and used for zymogram assay, indicating that tertiary conformation is impaired by heat, resulting in non-reversal refolding. Incubation for 5 min at 70°C already resulted in two distinct bands on SDS-PAGE, indicating the presence of two forms of the enzyme (denaturated and native monomer). A signal at 59.9 kDa indicates complete denaturation and is also found after incubation at 98°C, while the protein seems to migrate faster through SDS-PAGE when the tertiary structure is partially retained at 70°C. Moreover, only the folded enzyme form displayed activity in a zymogram (Additional file 1: Figure S3).
Catalytic properties of bifunctional cellulases
Catalytic activity as a function of temperature and pH was measured using β-glucan and 4-NP-β-d-GP as substrates for endoglucanase and β-glucosidase. The activity was measured between 20 and 100°C with fusion constructs displaying optimal activity at 90°C for β-glucan and 80°C for 4-NP-β-d-GP. However, both fusion constructs displayed decreased activity towards β-glucan at 100°C when compared to Cel5A alone (Figure 4a). Due to the fact that inactive β-glucosidase fused to endoglucanase in constructs CB E395G and B E395G C displayed the same result, this effect might be due to fusion mediated conformational changes of the enzyme (Additional file 1: Figure S4B). To prove this, singular Cel5A was mixed with singular Bgl1 and the activity profile according to temperature was compared to Cel5A alone (Additional file 1: Figure S5). Temperature profiles of Cel5A in mixture and alone were highly comparable proving that faster inactivation of tandem constructs is due to artificial fusion. Activity towards 4-NP-β-d-GP was not influenced by fusion of β-glucosidase to endoglucanase, when compared to enzymatic performance of Bgl1 (Figure 4b). Control experiments using fusion constructs with inactivated endoglucanase displayed identical results (Additional file 1: Figure S4A). Thermostability tests indicated that Cel5A was more stable as N-terminal fusion partner at low temperatures (up to 77.2°C), while both fusion constructs were instable compared to Cel5A alone when incubated at 81.6 and 86°C for 60 min. Incubation at temperatures between 60 and 70°C even resulted in thermoactivation for CB fusion construct (Figure 5a). Comparable activation effects were also obtained for CB and Cel5A after incubation for 10 min at lower temperatures (data not shown). Complete inactivity was observed for all constructs after incubation at 90°C for 60 min. In contrast to these results, activity of Bgl1 was decreased in fusion constructs. However, Bgl1 was also activated at temperatures between 64 and 68.4°C when fused at the N-terminus, but the single enzyme was more active at all conditions tested compared to fusion proteins (Figure 5b). Linking proteins did not influence the catalytic behaviour at all when different pH values were tested. Cel5A is optimally active at a pH of 6.0 and retains more than 40% of activity between pH 5.0 and 7.0. Almost identical results were measured with fusion constructs CB and BC, respectively (Figure 6a). Bgl1 and both fusion constructs retained more than 20% of catalytic activity towards 4-NP-β-d-GP between pH 5.0 and pH 8.0. The optimal pH was shown to be between 6.0 and 7.0 (Figure 6b). Bgl1 CB BC Figure 3 Zymograms to analyze catalytic activity of fusion enzymes. SDS-PAGE of purified single and fusion enzymes were loaded after denaturation (incubation at 70°C for 5 min in SDS-containing buffer). Protein bands were detected with silver staining. In case of in-gel activity assays, enzymes were washed with Triton X-100 and incubated at optimal conditions in reaction buffer. Utilization of soluble CMC-cellulose was used to detect the activity of endoglucanase, while esculin and X-Gal were applied to visualize the catalytic performance of β-glucosidase.
Enzyme kinetics and HPLC-analyses revealed additive effects of cellulase activities
Kinetic parameters were investigated using β-glucan as substrate for endoglucanase Cel5A and 4-NP-β-d-GP for β-glucosidase Bgl1 (Table 2). Substrate affinities for single constructs were determined to be K M = 0.12% using β-glucan for Cel5A and K M = 0.41 mM using 4-NP-β-d-GP for Bgl1, respectively. Affinity of enzymes in fusion construct BC was only slightly shifted (0.13% and 0.53 mM), while combined activity of both enzymes in construct CB displayed a lowered affinity towards β-glucan (KM = 0.23%). To accurately compare specific activities of fusion and single constructs, catalytic activity of Bgl1 in fusion constructs was adjusted to molecular weights of the single protein partner in the fusion enzymes. For example, His-tagged Bgl1 accounts for 49.2% of total molecular weight (98.4 kDa) in fusion construct CB, indicating that v max = 744.0 U per mg of total fusion protein can be adjusted to 1,256.7 U per mg of Bgl1 as fusion partner in CB. Therefore, C-terminal fusion of Bgl1 resulted in a reduced activity towards 4-NP-β-d-GP (61.8% residual activity) compared to Bgl1 alone, while activity level of N-terminal Bgl1 is increased 1.2-fold. It is not possible to calculate the catalytic activity of Cel5A in fusion constructs using β-glucan in DNS-assays, because Bgl1 converts released cellobiose to glucose, which also exhibits reducing ends that are also measured. However, combined activity of both enzymes towards β-glucan was reduced compared to Cel5A (1,816.9 U/mg) alone, indicating that the catalytic performance of the endoglucanase was impaired by fusion at the N-terminus (822.6 U/mg) and C-terminus (1,205.9 U/mg). To get a deeper insight into enzymatic performance of Cel5A in fusion enzymes, site-directed mutagenesis was applied to inactivate Bgl1 moieties. Constructs CB E395G and B E395G C enabled the detection of endoglucanase activity alone. As expected, catalytic activity of Cel5A in mutated constructs was reduced to 191.1 U/mg and to 536.1 U/mg, respectively, while the substrate affinity was only slightly affected, proving a parallel release of reducing sugar ends by Bgl1 in doubleactive fusion enzymes (Table 2).
To investigate positive effects in construct BC and inactivation effects in CB in more detail, HPLC-analyses were conducted. Barley β-glucan contains 32% β-1,3linkages in addition to β-1,4-linkages, which are not hydrolyzed by endoglucanase resulting in the accumulation of various sized oligosaccharides with cellobiose being the lowest weight sugar compound released (Additional file 1: Figure S6). Utilization of different stationary phases to distinguish between small oligosaccharides, cellobiose and glucose enabled the detection of combined activities of Cel5A and Bgl1. The endoglucanase releases oligosaccharides and cellobiose from β-glucan and the latter is further processed by Bgl1 to give glucose ( Figure 7a). Endoglucanase Cel5A produced oligosaccharides and 10.9% cellobiose from β-glucan when incubated for 24 h, while more then 50% of oligosaccharides were converted to cellobiose and glucose (27.7% glucose + 25.4% cellobiose) in the presence of Bgl1 mixed with Cel5A indicating potential product inhibition of Cel5A alone (Figure 7b). In contrast to these results, utilization of the BC fusion enzyme resulted in the determination of ~60% of di-and monosaccharides (36.7% glucose + 22.5% cellobiose) from β-glucan. Due to the decreased activities of the Cel5A and Bgl1 moieties in CB (see also v max , Table 2), fused Bgl1 only converted little amounts of cellobiose to glucose (13.7% release of glucose and cellobiose). The slight increase in cellobiose and glucose by CB compared to Cel5A alone might also be a result of product inhibition effects. This is also in good agreement with the observation that around 50-60% of totally released cellobiose is converted to glucose when both enzyme activities are present (Mixture: 51.6% glucose from 100% cellobiose, CB: 57.6%, and BC: 62.0%) (Figure 7b). Furthermore, Bgl1A released almost comparable amounts of glucose from cellobiose when incubated as single enzyme (28.9% glucose from cellobiose), in mixture with Cel5A (30.4%) and in fusion protein BC (35.6%). The catalytic activity of Bgl1 in CB is decreased, which is also in good agreement with determined enzyme kinetics (Figure 7c, Additional file 1: Figure S7; Table 2). Moreover, comparable amounts of Bgl1 activity (each in mixture, CB and BC) were measured by final glucose yields in tests using either cellobiose as substrate or β-glucan (Figure 7b, c), indicating that no channelling effects were observed, but intermediate products were taken from the reaction mixture.
Discussion
A consortium of cellulases and hemicellulases is mandatory for the efficient degradation of complex lignocellulosic plant materials (Bornscheuer et al. 2014;Khandeparker and Numan 2008;Rizk et al. 2012). Cost effective production of such enzymes is crucial for the sustainability of lignocellulosic bioethanol. Therefore, the generation of artificial enzyme chimeras is a promising approach to improve the cost-efficient degradation of plant-derived biomass. Different strategies are applied to produce multifunctional enzyme systems including enzyme cocktails, cellulosomes or xylanosomes and hybrid fusion enzymes (Conrado et al. 2008;Hu et al. 2013). However, all these different techniques without simple mixtures of singular enzymes are limited by the immense size of linked partners. There are two main strategies to circumvent the problem of high molecular weights: (1) enzymes can be truncated or even reduced to their catalytic site, or (2) proteins are chosen that do not contain domains in addition to the catalytic region. Cellulases are often modular multi-domain enzymes composed of carbohydrate-binding modules or predicted domains of unknown function (Bergquist et al. 1999). However, the thermozymes used in this study are compact enzymes that do only contain catalytic domains allowing the production of limited sized bifunctional fusion enzymes (Klippel 2011;Schröder et al. 2014). The LE-cloning strategy has been especially designed to allow the continuous and easy integration of fusion partners in different orders into a growing vector system. Each clone (product and entry) can be used as a new entry-clone (Marquardt et al. 2014). The proper arrangement of fusion enzymes on one polypeptide has been shown to be important for efficient catalytic activity and was even shown to be advantageous for improved thermostability (Hong et al. 2007;Lee et al. 2011). In fact, extensive characterization of fusion constructs in comparison with parental enzymes is mandatory to understand their functional equivalency (Fan et al. 2009b). The fusion of the endoglucanase from F. gondwanense downstream of the β-glucosidase from an Azorean hot spring metagenome displayed catalytic properties that were not influenced by fusion of the two polypeptides with regard to pH range. However, activity of endoglucanase was reduced at 100°C compared to the single Cel5A and enzyme mixtures, which was probably a result of modified protein conformation. Structural data would be important to shed some light on fusion-mediated conformational changes influencing performance of endoglucanase and β-glucosidase. In contrast to this result, the β-glucosidase maintained the temperature optimum and activity range of the parental enzyme, when fused either to the N-or C-terminus of the endoglucanase. However, the specific catalytic activity of Bgl1 was reduced by fusion to the C-terminus, but slightly increased when produced as the N-terminal fusion partner enzyme. Shifts in pH and temperature optima by fusion of protein candidates were also described in other examples (Fan et al. 2009b;Ribeiro et al. 2011;Zhao et al. 2013).
The most interesting part in enzyme fusions is the possible improvement of catalytic properties including synergism, increased product yield and product channelling. In our case, BC was superior over the opposite orientation, resulting in increased total activity and improved thermostability of Bgl1. In contrast to these results, complete catalytic inactivity of a β-glucosidase (BglB) as a result of genetic fusion has been shown by the generation of a bifunctional β-glucosidase/endoglucanase from Thermotoga maritima. BglB was completely inactive when fused as the N-terminal partner, while both enzymes exhibited hydrolyzing activity in the opposite direction. However, fusion resulted in 70% reduced activity even in the orientation with BglB downstream of endoglucanase Cel5C (Hong et al. 2007). Lower or higher specific activities of fusion proteins were often reported, but there are only rare descriptions of bifunctional enzymes that were superior over the free enzymes because of true synergistic and conformational effects (Adlakha et al. 2012;Hong et al. 2007;Riedel and Bronnenmeier 1998;Rizk et al. 2015). BC released around 9% more glucose from β-glucan than the enzyme mixture, which was probably caused by the increased catalytic activity of β-glucosidase and not by intermediate channelling. In agreement with this hypothesis, activity of Bgl1 towards cellobiose was also slightly increased when fused to the N-terminus compared to the singular enzyme. Kinetic parameters were determined to shed more light on these effects.
The K M of β-glucosidase was determined to be 0.4-0.5 mM for single enzyme and fusions, which is in the same range as observed for Bgl1 produced with a N-terminal HIS-tag without C-terminal tag in a previous study . The determination of kinetic parameters using fusion enzymes CB and BC with the substrate β-glucan is difficult because of combined enzyme activities on initial substrate and intermediate products (Figure 7a). However, both fusion constructs displayed reduced activity towards β-glucan when compared to parental Cel5A, indicating that fusion negatively influenced the performance of the endoglucanase. It is thus hard to compare kinetics of Cel5A in fusion enzymes with the parental enzyme in both orientations. Therefore, mutation constructs containing inactivated β-glucosidase were generated to determine catalytic activity of endoglucanase without additional release of glucose from cellobiose mediated by Bgl1. The outcome displayed that activity of endoglucanase is significantly reduced in both enzyme chimeras. Interestingly, HPLC analyses revealed that the final product yield (glucose and cellobiose) of fusion enzyme BC is improved compared to 1:1 enzyme mixtures of identical molar concentrations. This result indicates that a secondary effect is important in addition to conformation-mediated activity reduction of Cel5A. Product analyses using HPLC were done after 24 h of incubation compared to 10 min activity assays used to determine enzyme kinetics probably indicating that the thermostability of fusion enzymes compared to single enzymes must be also taken into account when incubation took place at 60°C ( Figure 5). Finally, such an effect is most probably influenced by the release of cellobiose as well, which inhibits the catalytic activity of endoglucanase.
Fusion of genes to produce multifunctional enzymes is an interesting tool for industrial application, due to improved catalytic activity as well as lower production costs from minimized numbers of recombinant polypeptides. The main objective is to create artificial chimeric enzymes that are superior over monofunctional biocatalysts in the hydrolysis of natural substrates. Generation of bifunctional fusion enzymes in both orientations was shown to be important in random fusion studies due to opposite increasing and reducing activity effects on identical partners. Moreover, the LE-cloning system can allow the incorporation of additional fusion partners into the established vector system to easily screen for superior multifunctional fusion enzymes containing additional cellobiohydrolases, cellulose binding modules or hemicelluloses in the future. However, structural determinations are highly recommended to understand conformational effects and to use the BC-construct for rational design studies to produce novel multifunctional biocatalysts. | 7,123.2 | 2015-06-10T00:00:00.000 | [
"Biology",
"Chemistry",
"Environmental Science"
] |
Changes in initiation of adjuvant endocrine therapy for breast cancer after state health reform
Background Socioeconomic differences in receipt of adjuvant treatment contribute to persistent disparities in breast cancer (BCA) outcomes, including survival. Adjuvant endocrine therapy (AET) substantially reduces recurrence risk and is recommended by clinical guidelines for nearly all women with hormone receptor-positive non-metastatic BCA. However, AET use among uninsured or underinsured populations has been understudied. The health reform implemented by the US state of Massachusetts in 2006 expanded health insurance coverage and increased the scope of benefits for many with coverage. This study examines changes in the initiation of AET among BCA patients in Massachusetts after the health reform. Methods We used Massachusetts Cancer Registry data from 2004 to 2013 for a sample of estrogen receptor (ER)-positive BCA surgical patients aged 20–64 years. We estimated multivariable regression models to assess differential changes in the likelihood initiating AET after Massachusetts health reform by area-level income, comparing women from lower- and higher-income ZIP codes in Massachusetts. Results There was a 5-percentage point (p-value< 0.001) relative increase in the likelihood of initiating AET among BCA patients aged 20–64 years in low-income areas, compared to higher-income areas, after the reform. The increase was more pronounced among younger patients aged 20–49 years (7.1-percentage point increase). Conclusions The expansion of health insurance in Massachusetts was associated with a significant relative increase in the likelihood of AET initiation among women in low-income areas compared with those in high-income areas. Our results suggest that expansions of health insurance coverage and improved access to care can increase the number of eligible patients initiating AET and may ameliorate socioeconomic disparities in BCA outcomes. Supplementary Information The online version contains supplementary material available at 10.1186/s12885-021-08149-0.
Background
Breast cancer (BCA) is the second most common cancer and the second leading cause of cancer death among women in the US [1]. This is also the case in the state of Massachusetts, where about 30% of new cancer cases and 13% of cancer deaths among women from 2011 to 2015 were due to BCA [2,3]. However, due in large part to advances in adjuvant therapies, overall BCA mortality rates have steadily declined since the 1970s, and relative 5-year and 15-year survival rates among BCA patients were estimated to be 90 and 80%, respectively in 2018 [1,4,5]. While mortality has declined overall, gains have been greatest in affluent areas and have lagged for women in poor counties, suggesting continued sociodemographic disparities in access to care [1].
Previous research has shown that personal, social, and structural factors are associated with guidelineconcordant use of AET among BCA patients [21,22]. Older age, low socioeconomic status, non-White race/ ethnicity, higher levels of comorbidities and disability, and nonprivate insurance are factors associated with delayed initiation of adjuvant BCA therapies [23][24][25][26][27][28][29][30]. In particular, health insurance coverage is a significant factor associated with initiation of guideline-recommended treatment for BCA. Multiple studies found that higher patient out-of-pocket costs were inversely associated with adherence to AETs, suggesting the importance of comprehensive insurance coverage in ensuring access to therapy along the cancer care continuum [31][32][33].
To better understand the impact of insurance coverage on initiation of AET and associated socioeconomic disparities among BCA patients, we examine the impacts of the Massachusetts Health Care Insurance Reform Law passed in 2006 (hereafter, Massachusetts health reform), which increased the rate of insurance coverage in the state as well as the scope of covered benefits for many who were already previously insured [34]. The reform mandated that every individual in the state have health insurance if affordable coverage is available to them. To ensure the affordability of insurance plans, the Commonwealth of Massachusetts expanded its Medicaid program (the health insurance program for low-income and disabled individuals), instituted insurance market reforms, and required employers not offering insurance to contribute to the financing of insurance premium subsidies [35]. In addition, the Commonwealth Health Insurance Connector was established to allow individuals without access to employer-sponsored insurance to purchase community-rated insurance directly [36].
This study examines changes in initiation of AET among non-elderly women in low-income areas diagnosed with BCA following health reform in Massachusetts. We hypothesize that changes in the proportion of BCA patients who initiate AET will be more substantial among women from lower-income areas, who are most likely to be impacted by the reform, than those from higher-income areas in Massachusetts.
Data source
Our primary data source is the Massachusetts Cancer Registry (MCR), which is administered by the Massachusetts Department of Public Health and follows standards set by the North American Association of Central Cancer Registries (NAACCR), the Commission on Cancer (CoC), the National Cancer Institute (NCI), and the Centers for Disease Control and Prevention (CDC) [37]. The MCR collects sociodemographic information of patients (age, sex, race, ethnicity, marital status, and geographic area), cancer diagnosis (date of diagnosis, primary site, stage at diagnosis, and other tumor details) and treatment information from various health care settings within the state. We also used the American Community Survey (ACS), which is administered by the US Census Bureau to collect economic and sociodemographic information from a sample of the US population [38], to estimate median household income at the ZIPcode level in 2006. The Area Health Resources Files (AHRF) were used to control for county-level health care capacity and infrastructure.
Study population
Out of BCA cases diagnosed during the study period of 2004-2013 in Massachusetts, we eliminated cases among non-female patients; cases in patients under 20 or over 64 years of age, given they were not directly impacted by the insurance reforms; cases for which the patient had another BCA diagnosis within the study period on or before the date of diagnosis of the current case; cases for which diagnosis was established by autopsy or death certificate or where diagnosis date was the same as death date; cases for which the patient had another BCA diagnosed within 365 days; cases that did not receive surgery; cases with ER-negative status; cases with incomplete PR status (data not collected or not documented); cases with stage IV diagnosis; cases with no information on initiation of AET; and cases whose date of endocrine therapy preceded the date of surgery. Figure 1 illustrates how the final study sample of all ER-positive BCA surgical patients ages 20-64 years with complete AET initiation information in Massachusetts from 2004 to 2013 was derived (n = 20,713).
Statistical analysis
We estimated a linear probability regression model to examine differential changes in the likelihood of initiating AET among BCA patients residing in lower-income areas compared to those in higher-income areas in Massachusetts. Our model includes a binary variable identifying lower-income ZIP code areas, another binary variable indicating the post-reform period, and an interaction term between these two binary variables to assess population-level differential impacts of the state health reform on initiation of AET among BCA patients living in lower-versus higher-income areas of Massachusetts. The outcome variable is a binary variable indicating whether the patient initiated AET after surgery. ZIP code areas with median household income below the state median value in 2006 ($68,293), were categorized as low-income areas; this designation was used to capture a population most likely impacted by the reform. The comparison group was BCA patients who lived in ZIP code areas with median household income above the state median income. The pre-reform period includes 2003-2006; the post-reform period includes 2007-2013. We adjusted for patient age, marital status, race/ethnicity, and stage at diagnosis. We controlled for Fig. 1 Study Sample Selection and Exclusions. *We assumed cases with missing diagnosis day but with usable data on month to have occurred on the 15th of the month. **Exclusion is based on subsequent breast cancer tumors only. ***Incomplete PR status corresponds to the following cases: PR status information was not collected for this case or not documented in patient record secular trends in BCA treatments by adjusting for each calendar year in the model. An individual-specific error term was estimated using Huber-White robust standard errors.
We conducted two additional sets of models to test whether the changes in the initiation of AET were sensitive to definition of low-income areas and study period. The first model defined low-income areas as the lowest tertile ZIP code areas and high-income areas as the highest tertile ZIP code areas in Massachusetts and compared subgroups of patients most likely and least likely to be impacted by the health reform. The second model truncated the post-reform period in 2010 to account for the introduction of generic AIs in 2010 [32]. In addition, we estimated three additional sets of models to test robustness of results to sample inclusions and model covariates: 1) including only patients who were both ER and PR positive, 2) excluding patients whose derived American Joint Committee on Cancer (AJCC) stage was 0, and 3) adjusting for county-level characteristics including median household income, number of providers (primary care physicians, specialists, and safety net providers) per 1000 population, number of hospital beds per 1000 population, percent unemployed (> 16 years old), percent without a high school diploma (> 25 years old), percent of White Non-Hispanic/Latino, and percent urban residents [39].
To supplement our main results comparing populationlevel average changes in proportions of BCA patients residing in low-income areas who initiate AET after surgery to those in high-income areas, we estimated a linear model to assess overall changes in temporal trends in AET initiation. This model includes a time variable that measures the study period by quarter and ranges from 0 (first quarter of 2003) to 39 (fourth quarter of 2013), a binary variable that indicates the pre-reform period as 0 and the post-reform period as 1 which indicates the level change following the state health reform, and an interaction variable between these two variables which indicates the slope change following the state health reform. We adjusted for patient age, marital status, race/ethnicity, stage at diagnosis, and county-level characteristics described above. Table 1 presents the characteristics of the sample before and after Massachusetts health reform. About 71% of BCA patients initiated AET statewide before the reform; after the reform, this proportion increased by 6.2 percentage points. In lower-income ZIP code areas, we observed an increase of 8.8 percentage points in the proportion of BCA patients who initiated AET after surgery; in higher-income ZIP code areas, there was an increase of 4.2 percentage points.
Results
More BCA patients were diagnosed at stage 0 or I in ZIP code areas above state median household income in the pre-reform period; however, this difference was attenuated in the post-reform period. The number of BCA patients who had a mastectomy with reconstruction increased statewide in the post-reform while the number of BCA patients who had breast-conserving surgery (BCS) decreased more in higher-income ZIP code areas. Women residing in high income areas were more likely to be married and less likely to be non-Hispanic White compared to women residing in low-income areas. Table 2 presents differential changes in the likelihood of initiating AET between BCA patients residing in lowincome versus high-income areas, adjusting for patient characteristics. There was a 5-percentage point increase (p-value< 0.001) in the likelihood of initiating AET following the Massachusetts reform in low-income ZIP codes, relative to the increase in the higher income ZIP codes. This effect was more pronounced among younger BCA patients ages 20-49 years, for whom we observed a 7.1-percentage point (p-value = 0.001) relative increase in the likelihood of initiating AET relative to the comparison group.
When we shortened the post-reform period to isolate changes after health insurance expansions prior to the introduction of generic AIs (Table 3, column 1), we observed a significant relative increase in the likelihood of initiating AET among BCA patients aged 20-64 years. However, the differential effect of Massachusetts health reform on the likelihood of initiating AET among older women residing in low-income areas was no longer significant when the post-reform period was restricted to 2007-2010. Using tertiles to define the low-and highincome areas in Massachusetts (Table 3, column 2), the estimates were robust to the alternative specification of income categories. We consistently observed a significant increase in the likelihood of initiating AET among BCA patients residing in lower-income areas in MA when we restricted the study population as ER-and PRpositive BCA patients (Table 4, Column 1), excluded patients diagnosed with in situ disease (Table 4, Column 2), and adjusted for county-level demographic and health care capacity variables (Table 4, Column 3).
To assess pre-trends of the outcome, we estimated event study regressions for all models and considered the joint F-test on interactions between income category and year in the pre-period to test the pre-period significance level (Additional file 1: Appendix Table 1). We observed that the pre-trends of the outcome were not significantly different between lower-income areas and higher-income areas in MA.
In addition, Fig. 2 presents temporal trends in the adjusted predicted percentage of BCA patients initiating AET in Massachusetts quarterly from 2004 to 2013. Considering the adjusted predicted percentage of BCA patients initiating AET in the pre-reform period, a higher percentage of BCA patients residing in high income areas in MA initiated AET after surgery than BCA patients residing in low income areas on average. However, the percentage of BCA patients residing in low income ZIP codes in MA initiating AET and that of BCA patients residing in high income ZIP codes in MA initiating AET converge at quarter 20, about 2 years after the state health reform. From quarter 20 till the end of the study period, we observe a higher proportion of BCA patients residing in low income ZIP codes in MA initiating AET after surgery than BCA patients residing in high income ZIP codes in MA.
Discussion
In this study, we examined the differential impact of Massachusetts health reform on the initiation of AET in lower-income ZIP codes, where residents were most likely to be impacted by the reform, and higherincome ZIP codes in Massachusetts. The expansion of health insurance resulted in positive relative changes in the proportion of BCA patients in lower-income areas of the state who initiate AET. The most pronounced effects of Massachusetts health reform were in women ages 20-49 years. The relative percentagepoint change between the pre-and post-reform periods among the younger sample of women was twice as large as that of older women (7.1-percentage points vs. 3.6-percentage points). Massachusetts health reform increased access to medical care, improved financial support for safety-net hospitals, and provided more expansive prescription drug coverage. In particular, younger patients were more likely to gain coverage under state health reform. A 2004 Massachusetts Health Insurance Survey found that over 90% of newly enrolled Medicaid enrollees after Massachusetts health reform were previously unenrolled [35]. Previous studies have demonstrated that higher out of pocket prescription drug costs are associated with lower initiation and higher discontinuation of medications and treatments [27,31,32,[40][41][42][43][44]. Our study further supports these findings, estimating about a 5-percentage point relative increase in the likelihood of BCA patients aged 20-64 years in low-income areas initiating AET after reform relative to BCA patients in the same age group living in high-income areas. Given that AET is recommended for extended periods, even small monthly costs may add up to a substantial financial burden over time [45,46].
Socioeconomic disparities in mortality among BCA patients have persisted despite increases in overall survival rates over recent decades. According to a recent report by the American Cancer Society, the mortality rate among BCA patients in poor counties was about 1.16 times higher than that in affluent counties. The observed relative increases in likelihood of initiating AET among younger women in lower-income areas who were more likely to have been without health insurance prior to the reform implies that Massachusetts health reform reduced disparities in receipt of adjuvant therapy and has important implications for health outcomes among BCA patients. This study examines initiation of AET, though AET adherence is also critical for reducing breast cancer recurrence rates, thus future studies estimating the effects of state and federal health reforms on adherence to AET would provide additional insight regarding the impact of health insurance policy changes on health outcomes.
This study has limitations. First, the limited prereform period might not have captured other secular trends that contributed to the increased likelihood of initiation of AET among BCA patients. Second, we did not have comparable data available from other states, limiting the geographic generalizability of our findings and the ability to include a control group that was not impacted by the reform in any way. However, this study compared ZIP codes where the median household income was below the state median household income to Model (1) re-defines the post-reform period to be 2007-2010. Model (2) re-defines the low-income areas to be the lowest tertile ZIP code areas and the highincome areas to be the highest tertile ZIP code areas in Massachusetts. Both models are based on the main multivariable difference-in-differences regressions in Table 2. All models control for age at diagnosis, marital status, race/ethnicity, stage at diagnosis, and type of surgery AET Adjuvant Endocrine Therapy ***p-value< 0.01 **p-value< 0.05 those where median household income was above the state median household income in an effort to compare a population most likely to be impacted by the reform to a comparable group. Given that those in higher-income areas also stood to benefit from certain provisions of the health reform, our estimates of the differential impact on patients from lower-versus higher-income areas may represent an underestimate of the full impact of reform. Fourth, due to the nature of cancer registry data, there was no relevant clinical information, including These robustness checks were conducted in the main multivariable difference-in-differences regressions in Median household income, percent unemployed, percent with less than a high school education, percent non-Hispanic white, percent urban; and primary care physicians, specialist physicians, safety net provider, and hospital beds all specified as rate per 1000 population Fig. 2 Adjusted predicted % BCA patients initiating AET in Massachusetts by quarter from 2003 to 2014. The model adjusted for age, race/ethnicity, marital status, stage at diagnosis, ER status, type of surgery received, low-, intermediate-, and high income areas, county-level characteristics including median household income, number of providers (primary care physicians, specialists, and safety net providers) per 1000 population, number of hospital beds per 1000 population, percent unemployed (> 16 years old), percent without a high school diploma (> 25 years old), percent of White Non-Hispanic/Latino, and percent urban residents. BCA: Breast Cancer; AET: Adjuvant Endocrine Therapy; ER: Estrogen Receptor menopausal status and comorbidities, and other potential socioeconomic information including patient's education and income that can impact initiation of AET. However, regarding patient's menopausal status, we adjusted for BCA patients' age as a proxy. Fifth, local area (such as ZIP code-level) health care capacity characteristics, such as the number of providers, were not adjusted for in the model. Estimates from models that adjusted for county-level health care capacity characteristics were strikingly similar to estimates from our main model.
Conclusions
Disparities in BCA outcomes by socioeconomic factors, such as poor insurance coverage and lack of financial resources, persist. Given that about two-thirds of earlystage BCA cases are hormone-responsive, our findings indicate that expansion of health insurance coverage increases the number of eligible patients initiating AET and insurance coverage expansion may be an important policy tool for reducing income disparities in BCA outcomes, including survival. Timely initiation of and adherence to AET will result in better prognosis, which will prevent recurrence rates and improve survival of patients. This evidence from Massachusetts health reform underscores the significance of continued efforts to expand coverage across the US and emphasizes the importance of evaluating the effect of other relevant health insurance policies, such as the Affordable Care Act (ACA), on the uptake of adjuvant treatment among cancer patients. | 4,590.6 | 2021-05-01T00:00:00.000 | [
"Medicine",
"Economics"
] |
GIS-Based Emotional Computing: A Review of Quantitative Approaches to Measure the Emotion Layer of Human–Environment Relationships
: In recent years, with the growing accessibility of abundant contextual emotion information, which is benefited by the numerous georeferenced user-generated content and the maturity of artificial intelligence (AI)-based emotional computing technics, the emotion layer of human–environment relationship is proposed for enriching traditional methods of various related disciplines such as urban planning. This paper proposes the geographic information system (GIS)-based emotional computing concept, which is a novel framework for applying GIS methods to collective human emotion. The methodology presented in this paper consists of three key steps: (1) collecting georeferenced data containing emotion and environment information such as social media and o ffi cial sites, (2) detecting emotions using AI-based emotional computing technics such as natural language processing (NLP) and computer vision (CV), and (3) visualizing and analyzing the spatiotemporal patterns with GIS tools. This methodology is a great synergy of multidisciplinary cutting-edge techniques, such as GIScience, sociology, and computer science. Moreover, it can e ff ectively and deeply explore the connection between people and their surroundings with the help of GIS methods. Generally, the framework provides a standard workflow to calculate and analyze the new information layer for researchers, in which a measured human-centric perspective onto the environment is possible.
Introduction
The human-environment relationship has always been a key issue in geography in terms of the interaction between human society and its activities and geographical environment [1][2][3]. There is a significant body of literature that investigates such relationship from various aspects, including As illustrated in Figure 2, the framework comprises three key steps: first, collecting environment and emotion related data in various context from data sources such as social network sites and official sites; second, exploring and cleaning data and extracting emotional information from georeferenced emotion related data based on its data structure; and third, conducting spatiotemporal analysis using As illustrated in Figure 2, the framework comprises three key steps: first, collecting environment and emotion related data in various context from data sources such as social network sites and official sites; second, exploring and cleaning data and extracting emotional information from georeferenced emotion related data based on its data structure; and third, conducting spatiotemporal analysis using GIS methods such as spatial interpolation and kernel density analysis in order to provide researchers with additional insights into the complex human-environment relationship. To elaborate the contents of each step, the rest of this paper is structured as follows. In Section 2, step 1 and step 2 of GIS-based emotional computing will be stated. Specifically, we classify three types of data sources of human emotions in the existing literature and elaborate their current advantages and weakness. On the basis of data sources and data structure, we introduce several popular methods of emotion recognition. Additionally, Section 3 presents the step 3 of GIS-based emotional computing, and three analysis directions show the potential of GIS methods in emotion analysis. Section 4 summarizes the current challenges and opportunities on GIS-based emotional computing. Finally, in Section 5, we end the paper with a number of key conclusions.
Emotion Recognition
Emotion organizes our cognitive processes and action tendencies [19] and influences individuals' social interactions in systematic ways [20][21][22][23]. Furthermore, studies suggest that emotional expressions have a potential impact on personality, even can predict life outcomes (e.g., marriage and personal well-being) of decades later [24,25]. Since measuring a person's emotional state is one of the most vexing problems in emotional studies, emotion recognition plays a dominant role in GIS-based emotional computing. Generally, the data sources of human emotions include the following three types: self-report, body sensor, and UGC. According to data structure, the methods of emotion recognition can be classified into four types: self-reported, body sensor-based, UGC text-based, and UGC image-based. As such methods continue to be improved, we will introduce several popular methods of each type in this section.
Self-Reported
Self-report usually collects emotional information by online or offline questionnaires and interviews. It is a traditional and classic data source. Although alternative data sources of human emotions emerged one after another, self-report remains a popular choice.
A substantial body of research on self-reported emotional information proves its easy interpretability, the richness of information, and sheer practicality [26][27][28]. For example, a recent study obtained the daily time, location, activity, mode of transportation, and emotions of female sex workers in their diaries [29]. However, the response rate of questionnaires, in most studies, remains relatively low [10,30], and these studies rest upon the assumption that respondents can represent those who refused to respond. Moreover, prior literature has also shown that people have blind spots in their self-knowledge, and they may not always understand their emotional states very accurately [31,32].
There are two mainstream self-reported scales wildly utilized in emotional research. One common test called Satisfaction With Life (SWL) was put forward by Diener, Larsen [33]: its score reflects the extent to which a person feels that his/her life is worthwhile [34,35]. Continued efforts have been made by scholars and policymakers to measure and promote subjective well-being for individuals and groups at the community level with the help of SWL [36,37]. Applications of SWL have been implemented at regional, national [38], and global levels [39][40][41].
However, the SWL test is restricted to only rate people's happiness. A two-factor model of Positive and Negative Affect Schedule (PANAS), developed by Watson et al. [42], has been used more extensively according to the self-report emotion literature. This model is comprised of two 10-item emotion scales. These items are words that describe different feelings and emotions in Positive Affect (PA) and Negative Affect (NA), such as interested and irritable to describe a person's emotional state. Updated versions of the PANAS were developed. For instance, to assess specific emotional states, Watson et al. [42] created a 60-item extended version of the PANAS (the PANAS-X) that can measure 11 specific emotions including fear, sadness, guilt, hostility, shyness, fatigue, surprise, joviality, self-assurance, attentiveness, and serenity. Meanwhile, a 30-item, modified version of the PANAS designed for children (PANAS-C) was proposed by Laurent et al. [43], and provides a brief, useful way to differentiate anxiety from depression in children.
Body Sensor
In recent decades, with the motivation of making computers that can assess and even understand users' emotional states, existing literature of human-computer interaction (HCI) has applied sensing technology to collect users' physiological signals in different emotional states [44][45][46]. Stationary and wearable sensors are both commonly utilized to collect the changes in the physiological signals of users [47]. As an example, a wearable sensor platform was developed by Choi et al. [48], which monitored mental stress.
Even if people do not explicitly express their emotions through facial expressions, changes in their physiological patterns are inevitable and collectible [49]. However, the inherent noise in physiological signals and their non-standard data structures has hampered the wide utilization of such data [49]. Even more, they can only provide datasets with limited sample sizes and short time durations [50][51][52].
There is a popular workflow of body sensor-based methods. Once the physiological signals were collected from multi-sensory devices, signal processing methods were used to extract applicable features from the physiological signals. Then, machine learning algorithms utilize, such features as model inputs to predict emotional state. Generally, five types of physiological signals are widely captured because they are show the correlation of underlying emotional fluctuations [53], including: (1) cardiovascular activities, (2) electrodermal activities, (3) the respiratory system, (4) the electromyogram activities, and (5) brain activities. Likewise, there are numerous options of signal processing methods (e.g., Fourier transform, wavelet transform, thresholding, and peak detection) and machine learning algorithms (e.g., k-nearest neighbor, regression trees, Bayesian network, and support vector machine) in the workflow [49]. For instance, Choi et al. [48] used the k-nearest-neighbor algorithm and the discriminant function analysis to analyze the physiological signals such as galvanic skin response and heat flow, when classifying the emotions.
UGC Text-Based
When entering the 21st century, the increasing development of social networking sites (SNS) provides unprecedented opportunities to collect massive individual emotional information. Geo-tagged UGC (e.g., microblogs, blogs, and reviews) usually collect from various SNS such as Twitter, Amazon, Weibo, and Flickr.
These UGC offer rich information about users' emotions in different settings such as family, work, and travel. Moreover, those petabytes of data have high spatiotemporal resolution, and their collection is convenient and timesaving. Nevertheless, abundant evidence shows that the bias (including emotional bias) exists in big data, and its spatial sparsity still needs to be addressed [54]. Furthermore, although geo-information shows that UGC can be related to places, emotions may not be directly affected by the surrounding environments since they may be influenced by the activities at specific places. As for UGC text, it is difficult to extract emotional information within complex sentences (e.g., multiple negations and metaphors). There is no common model or algorithm to detect emotions in different languages. Besides, the same sentence may have different meanings in diverse contexts and cultures.
Early research in this area focused on identifying and quantifying the polarity (i.e., positive or negative) of natural language text. For example, Pang, Lee [55], and Read [56] utilized support vector machine and Naïve Bayes (NB) classifier to extract emotional polarity from large volumes of movie reviews and emoticons. Since human emotions are very subjective and complex, setting just positive, negative, and neutral categories is too coarse to capture the full details of human emotions [57]. Recently, there has been an increased emphasis on extracting multi-dimensional human emotions from text by developing emotion lexicons such as WordNet-Affect (WNA) [58], EmoSenticNet (ESN) [59], and word-emotion lexicon [60].
Moreover, there is research that aims to improve the existing emotion lexicons to make it suitable for different settings. For example, a novel emotion lexicon was developed by Chakraverty et al. [61], which was compiled by integrating information from three aspects: the domain of psychology, the lexical ontology WordNet, and the set of emoticons and slangs commonly used in web jargon.
UGC Image-Based
UGC images contain the advantages and disadvantages of UGC we discussed above. With regard to images, their quantity is less than UGC text. Although images are informative, they resist interpretation. With the development of technology in computer vision, image-based emotion extraction methods are becoming more and more mature. Detecting facial expressions is a fashionable image-based extraction method. Human faces provide one of the most powerful, versatile, and natural means of communicating a wide array of mental states [62], and the relationship between facial muscles and discrete emotion in various cultures is consistent [63]. Most of the techniques on facial expression-based emotion extraction methods are inspired by the work of Ekman et al. [64], who produced the facial action coding system (FACS). Still, many early facial-expression datasets [65,66] were collected under "lab-controlled" settings where participants were asked to artificially generate some specific expressions, which do not provide a good representation of natural facial expressions [67]. In recent years, several studies have utilized robust computational algorithms to automatically capture human emotions from individuals' facial expressions in photos. Recent efforts like that of Yu [68] have proposed a method that contains a face detection module based on the ensemble of three face detectors, followed by a classification module with the ensemble of multiple deep convolutional neural networks (CNN). What's more, several commercial application programming interfaces (APIs), such as Face++ Detect API [14] and Microsoft Azure Emotion API [69], are available for scientific research.
Analyzing Collective Emotion with GIS
Generally, there are following three analysis directions in the current emotion studies of human-environment relationship: (1) the temporal and spatial distribution of human emotions, (2) the impact of environment on collective emotion, and (3) collective emotion as indicator. In this section, we will illustrate how to apply GIS methods to these studies.
The Temporal and Spatial Distribution of Human Emotions
Due to the changes of the environment, people may have different emotional experiences at different times and places. Understanding the distribution of human emotions is a basic topic in GIS-based emotional computing, and it is broadly observed at different granularities in the existing literature [70][71][72][73]. For example, the diurnal and seasonal rhythms of the changes in individual-level emotions can be identified by natural language processing from Twitter text [74]. Additionally, Flickr photos with geotags are traced and analyzed to extract the trend in the changes of human emotions between 2004 and 2014 [75] at the international level. Moreover, the World Happiness Report [40] surveys the state of global happiness. Visualization of the spatiotemporal distribution of human emotions at the national scale is widely carried out in different countries [38,76,77]. Moreover, researchers have begun to study the distribution of human emotions at fine granularities including communities and parks [78,79]. However, the previous emotion maps either displayed the discontinuous sample points or a simple regionalization of emotions averages to various areal units at a certain scale because of spatial sparsity of the sampling data. In the GIS-based emotional computing framework, evenly distributed sampling points and GIS methods, such as spatial sparsity would be used to improve the accuracy. Further improvements will be discussed in Section 4.
The Impact of Environment on Collective Emotion
Scholars have shown that the surrounding environment has impacts on collective emotion [10][11][12]. It appears that both physical and social environmental factors are related to collective emotion [80][81][82]. On the one hand, literature from environmental psychology has explored the interactions between collective emotion and physical environmental factors such as naturalness [83], density, accessibility, and so forth. Most of these studies suggested that happiness is lower in less natural landscapes, denser populations, and in areas with more traffic inconveniences. On the other hand, the relationships between collective emotion and socio-economic attributes have been reported widely in social science. For instance, Easterlin [13] found that there is a significant positive association between income and happiness within countries. Table 1 shows what kinds of environmental factors and at what scales have related works examined the impact of environment on human emotions. White et al. [36] self-reports 17,000 individuals The Netherlands Self-reported distress is greater in areas with lower levels of green space.
de Vries et al. [91] tweet text of Twitter
metropolitan statistical areas The United States
Climate factors like relative humidity and temperature contribute to local depression rates.
Yang et al. [92] self-reports NA The United States There is a significant positive association between income and happiness within countries Easterlin [13] NA-not available.
Nevertheless, such studies are usually limited to a fixed granularity, and it is difficult to tell whether scale affects the interactions between collective emotion and environmental factors. Furthermore, the interactions are mostly qualitative rather than quantitative. With integrating GIS methods to emotion analysis, solving these problems can be possible. For example, as for the interaction between collective emotion and the accessibility of an environmental feature such as a water body or green vegetation, separately establishing several buffers will help us to explore how distance from an environmental feature has an impact on collective emotion.
Collective Emotion as Indicators
Since Goodchild [93] proposed the concept of volunteered geographic information (VGI), which suggests that general individuals can be compared to environmental sensors, a variety of studies have tried to explore urban development patterns using individual-level big geospatial data, called "social sensing" [94]. In the context of human-environment relationship, collective emotion has been served as a system of indicators describing the interaction of human and environment and supporting policymakers to make decisions [95].
Collective emotion provides a new insight to understand crisis events that range from natural disasters to man-made conflicts and how people respond to such rapid environment changes [96,97]. For example, Chien et al. [98] evaluated sentiment analysis of Flickr text in disaster management at the time of the strike of a typhoon in Taiwan, China in 2009. Likewise, Dewan et al. [99] analyzed the emotion of textual and visual content obtained from Facebook during the terror attacks in Paris, France, 2015.
In recent years, collective emotion in places is gradually applied to guide urban planning [100,101]. A recent work analyzed the spatial characteristics of residents' emotions in the city and at different types of places in the city of Nanjing, China, to provide evidence that could help optimize urban space development [102]. Likewise, another research measured pedestrians' emotions, and results offered initial evidence that certain spaces or spatial sequences do cause emotional arousal [103]. A semantic and sentiment analysis was conducted to understand the perceptions of people towards their living environments by examining online neighborhood textual reviews [79] and nearby neighborhood street view images [104].
Although discovering valuable insights, these studies have great possibilities to obtain more accurate results by GIS-based emotional computing. Firstly, the framework focuses on the multisource data collection methods, which improve the volume and tolerance to the noise of emotion data. Moreover, the integration of multiple disciplines, such as GIScience, computer science, and social science, brings excellent calculation and analysis abilities that enable researches to perceive dynamic and complex responses to places in near real-time. For instance, poorly timed traffic lights at crossroads and a situation of severe earthquake both became detectable for immediately deciding the assistance policies.
Challenges and Opportunities
While GIS-based emotional computing offers rich insights into a better understanding of human-environment relationship, it poses a number of challenges, highlighted below: firstly, different emotional baselines may exist in different regions and even between individuals. In other words, emotional experiences may be influenced by many factors such as individuals' memory, life history, culture, age, and gender. Diener, Diener [105] found that self-esteem is strongly related to subjective well-being (analogous to general positive emotions such as happiness) in individualist cultures (such as the United States), but only has limited effects in collectivist cultures (such as China). In fact, prior literature has shown that how and when emotions are experienced may differ from one culture to another [106][107][108][109]. This difference is also affected by population's age and gender characteristics [110,111]. Therefore, researchers should take the demographic composition and culture of different places into account when conducting research with GIS-based emotional computing.
Spatial sparsity of data on human emotions is an important issue to be solved. Although emotion maps have been created by studies at different spatial scales [84,112], the sampling data is an occurrence collection. In other words, these are presence-only data without absence data. Therefore, the previous emotional studies were either the interpolations of sampling points, which inevitably involved overfitting, the discontinuous display of sample points [112], or simply the regionalization of emotions averages to various areal units at a certain scale [113]. However, for emotional expressions that cannot be observed, it is hard to determine the emotions that are associated with places. In a recent work, Li et al. [114] utilized MaxEnt [115], a species distribution model, which is intensively applied in ecology, to map the geographic distribution of human emotions at a global scale but fell short of applying to other granularity such as city and community. Yet, there is still no model available that all scholars have agreed upon through a consensus to describe and predict the continuous distribution of human emotions based on presence-only data.
Another challenge is that spaces with various land use mix (LUM) [116] may trigger different emotions. People usually express emotional responses to "place" rather than "space" [8], but multiple places may overlap in the same space at different times. For a specific street, people may stay on the street for work during the daytime while visiting bars at night. The locale and its spatiotemporal dynamics may influence human emotions and are supposed to be taken into consideration for GIS-based emotional computing.
It is important to note that SNS emotional information may bring systematic bias for GIS-based emotional computing. SNS users as a sample may not be representative of the total population [117,118]. Besides, due to the potential social pressures imposed by SNS [119,120], users may suppress or exaggerate their emotions. For instance, Huang et al. [121] suggest that the majority of Weibo users tend to post more photos with positive emotions instead of negative emotions, and there are significant differences in place emotion extracted from Weibo and in-situ. Since there is no model that is suitable for all places to rectify the emotions extracted from SNS yet, it is wise to pay attention to the bias of big data when conducting emotion research.
The impact of GIS-based emotional computing is multi-fold. With the help of the framework, the informative emotion layer of human-environment relationship can potentially enrich a variety of fields such as traffic planning, urban safety, human-centric tourism, and evaluating current planning projects. One the one hand, GIS-based emotional computing aims to collect massive multisource georeferenced data and provide state-of-the-art, multidisciplinary techniques for effectively and accurately detecting normalized emotion information from such data. On the other hand, the map from individual emotion to place emotion is promised by using GIS-based spatial analysis. Furthermore, geostatistics is a useful tool for deducing the causality between collective emotion and environmental factors.
There are several opportunities in the current development of GIS-based emotional computing. There has also been research into the connection between human perception and urban space through urban street view imagery, which is another promising dataset that can be employed in GIS based emotion computing [104,122]. Building a multi-source emotional data fusion model can greatly advance the development of GIS-based emotional computing. A good way to obtain a wide range of human emotions in real-world settings is by combining big data (human emotions extracted from UGC) with small data [123] (human emotions captured in reality) based on different cultures and demographic characteristics to calibrate online emotion. Moreover, why people are satisfied with some places instead of others has not yet been extensively investigated. It remains unclear which environmental factors will influence people's emotions at all scales and how to properly quantify the extent of their influence.
Example of Implementing GIS-Based Emotional Computing
The emotion information analyzed by GIS-based emotional computing plays an increasingly vital role in human-environment relationship research, and it serves as a critical component of various applications including resource management, conservation, human geography, crime analysis, real estate, psychology, environmental justice, etc. Hereby we give an example that exhibits the potential to quantify human emotion and serves as a layer in GIS for human-environment relationships study.
The recommendation of tourist sites is a key topic in tourism studies. With GIS-based emotional computing techniques, georeferenced contents uploaded by tourists to photo services in the public domain enrich traditional recommendation systems with an emotion layer. One of our previous studies collected Flickr photos of 80 tourist sites all over the world, and applied spatial clustering to emotion information extracted from photos, for constructing an emotion layer for these tourist sites. Afterward, a map of tourist sites with emotion tendency and a ranking list of global tourist sites based on emotion were drawn, which serve as references for potential tourists. By calculating and analyzing the emotion layer and other layers in GIS, we have also attempted to identify, which natural and non-natural environmental factors may have an impact on visitor's emotions [84]. The workflow of the example can be seen in Figure 3. This example illustrated that, with GIS-based emotional computing, it is possible to cater to tourist preferences for accurate advertising and management of the tourist industry. emotion information extracted from photos, for constructing an emotion layer for these tourist sites. Afterward, a map of tourist sites with emotion tendency and a ranking list of global tourist sites based on emotion were drawn, which serve as references for potential tourists. By calculating and analyzing the emotion layer and other layers in GIS, we have also attempted to identify, which natural and nonnatural environmental factors may have an impact on visitor's emotions [84]. The workflow of the example can be seen in Figure 3. This example illustrated that, with GIS-based emotional computing, it is possible to cater to tourist preferences for accurate advertising and management of the tourist industry.
Conclusions
In this paper, we propose a new conceptual framework: GIS-based emotional computing, for providing a new approach to measure the emotion layer of human-environment relationship. The methodology comprises three steps: (1) collecting environment and emotion related data from
Conclusions
In this paper, we propose a new conceptual framework: GIS-based emotional computing, for providing a new approach to measure the emotion layer of human-environment relationship. The methodology comprises three steps: (1) collecting environment and emotion related data from different data sources, (2) detecting emotional information from georeferenced emotion related data by AI-based emotional computing techniques, and (3) conducting spatiotemporal analysis using GIS. The current literature related to each step was reviewed, and the improvements of GIS-based emotional computing can be done were discussed. The emotion layer reveals deep interactions between human and their surrounding environment, and it reveals "what people real feel" instead of "what people would feel". GIS-based emotional computing consolidates the cutting-edge technologies of multidisciplinary, such as GIScience, sociology, and computer science, for providing a more effective and accurate avenue to calculate and analyze the emotion layer. It is important to note that GIS-based emotional computing of this scope has only been possible recently, due to the increasing capability of both massive UGC with emotional information and the technologies that take advantage of these resources. This implied that GIS-based emotional computing may have unlimited potential because of developing and advancing technologies. However, while the promise of collective emotion in describing the human-environment relationship is alluring, the challenges above have to be addressed for increased uptake of GIS-based emotional computing. | 5,944.4 | 2020-09-15T00:00:00.000 | [
"Computer Science",
"Geography",
"Environmental Science"
] |
Global Analysis of Beddington-DeAngelis Type Chemostat Model with Nutrient Recycling and Impulsive Input
In this paper, a Beddington-DeAngelis type chemostat model with nutrient recycling and impulsive input is considered. Except using Floquet theorem, introducing a new method combining with comparison theorem of impulse differential equation and by using the Liapunov function method, the sufficient and necessary conditions on the permanence and extinction of the microorganism are obtained. Two examples are given in the last section to verify our mathematical results. The numerical analysis shows that if only the system is permanent, then it also is globally attractive.
Introduction
The chemostat is an important and basic laboratory apparatus for culturing microorganisms.It can be used to investigate microbial growth and has the advantage that parameters are easily measurable.The chemostat plays an important role in bioprocessing, hence the model has been studied by more and more people.Chemostats with periodic inputs were studied [1,2], those with periodic washout rate [3,4], and those with periodic input and washout [5].In recent years, those with nutrient recycling [6][7][8][9][10] have been investigated and some investing results were obtained.Now many scholars pointed out that it was necessary to consider models with periodic perturbations, since those phenomena might be exposed in many real words.However, there are some other perturbations such as floods, fires and drainaye of sewage which are not suitable to be considered continually.Those perturbations bring sudden changes to the system.Systems with sudden changes are involving in impulsive differential equations which have been studied intensively and systematically [11][12][13].Impulsive differential equations are found in almost every domain of applied sciences.
Recently, many papers studied chemostat model with impulsive effect the Lotka-Volterra type or Monod type functional response.But there are few papers which study a chemostat model with Beddington-DeAngelis functional response, especially a Beddinton-DeAngelis type chemostat with nutrient recycling.The Beddington-DeAngelis functional response is introduced by Beddington and DeAngelis [14,15].It is similar to the wellknown Holling II functional response but has an extra term B t in the denominator that models mutual interference in species.The model, we consider in this paper, takes the form:
S t x t a S t DS t brx t t nT n Z k A S t Bx t aS t x t x t Dx t rx t t nT n Z A S t Bx t S t S t p t nT n Z x t x t t nT n Z
where S(t), 1 x t represent the concentration of limiting substrate and the microorganism respectively, D is the dilution rate, a is the uptake constant of the microorganism, k is the yield of the microorganism The organization of this paper is as the following.In Section 2, we introduce some useful notations and lem-mas.In Section 3, we will state and prove the main results on the global asymptotic stability and permanence.In Section 4, we give a brief discussion and the numerical analysis.
Preliminaries
In this section, we will give some notations and lemmas which will be used for our main results.Firstly, for convenience, we set is left continuous at t = nT and x(t) is continuous at Lemma 1. Suppose is any solution of system (2) with initial solution The proof of Lemma 1 is simple, we omit it here.
In what follows, we give some basic properties about the following system. Clearly, is a positive periodic solution of system (3).Any solution of system (3) is , Hence, we have the following result.Lemma 2. System (3) has a positive periodic solution , as for any solution u(t) of system (3).Moreover, The proof of Lemma 2 can be found in [16].Lemma 3.There exists a constant M > 0 such that S(t) < M, x(t) < M for each solution of (S(t); x(t)) system (2), for t large enough.
Proof Let (S(t); x(t)) be any solution of system (2) with initial value for all t¸ 0, where u(t) is the solution of system (3).From Lemma 2, we have Thus, V(t) is ultimately bounded.From the definition of V(t), there exists a constant such that S(t) < M, x(t) < M for any solution (S(t), x(t)) of system (2), for t large enough.This completes the proof.The solution of system (2) corresponding to x(t) = 0 is called microorganism-free periodic solution.For system (2), if we choose 0 x t , then system (2) becomes to the following system , ,
S t DS t t nT n Z S t S t p t nT n Z
System (4) has a unique global uniformly attractive positive solution Hence, system (2) has a positive periodic solution V t S t x t next section, we will study the global asymptotical stability of the microorganism-free periodic solution as a solution of system (2).
,0 u t Then similar to the proof of Lemma 3, we obtain for all where u(t) is the solution of system (3) and Hence, there exists a function By the definition of V t , we have Then periodic solution of system ( 2) is globally attractive.
x t ) be any positive solution of system (2).Define a function as follows It follows from the second equation of system (2) that Hence, there exist constants 0 and , such that x t 0 for all , then from (6) we have For any , we choose an integer such that , then integrating (8) from to t, from (7) we have 0 where In fact, if there exists a such that x t , then there exists a integrating the above inequality from t 2 to t 1 , from (7) we obtain (10).
Obviously, let , then from ( 10) we obtain a contradiction.Hence, This completes the proof.
Then system (2) is permanent.Proof Let (S(t); x(t)) be any solution of system (2) with initial value Lemma 3, the first equation of system (2) becomes is the solution of the following impulsive system Therefore, we finally obtain This shows that S(t) in system (2) is permanent.
In the following, we want to find a constant , such that Consider the following auxiliary impulsive system from Lemma 2, system (12) has a globally uniformly attractive positive periodic solution Further, for above 2 0 a 0 y M and M > 0, where M is given in Lemma 3, there is such that for any and for all t t , then our go ssing on the case of , then above t t , we also have we inequality (16).Particularly, obtain
S t D t n Z A S t S t p t nT n Z
Hence, from the comparison theorem of impulsive differential equations, we have for all , whe the s re y(t) is olution of system (12) with
Further
, we also from (13) have Thus, from system (2), we have From the above discussion, we have 2 lim t x t m , S(t); x(t)) of and is independent of any solution ( syste (2).This completes the proof.
As a consequence of Theorem 1 and Theorem 2, we have the following corollary.Corollary 1 For system (2), the following conclusions hold.
a) The microorganism-extinction solution
4.
paper, we investigate Beddington-DeAngelis type chemostat with nutrient recycling and impulsive input.We prove that the microorganism-free periodic solution of the system (2) is globally attractive.The necessary and sufficient condition for permanence of system (2) are obtained in this paper.
According to Theorem 1, the microorganism-free periodic solution
Discussion and Numerical Analysis
Then Theorem 1-2 can be state as: If and exp exp that conditions for the system coexist or non-coexist are due to the influences of the impulsive perturbations.
In order to illustrate our mathematical results and investigate the effect of impulsive input nutrient we present the following results of a numerical simulation.
Figure 2 .
Figure 2. (a) Time-series of the nutrient S for permanence and periodic os global attractivity; (b) Time-series of the microorganism population x for global ttractivity. | 1,904 | 2013-07-23T00:00:00.000 | [
"Environmental Science",
"Mathematics"
] |
GASKET SOLUTIONS FOR ELECTROLYZERS – AN OVERVIEW
Electrolysis seals play an important role in the chemical industry and are used to seal liquids and gases in electrolysis cells. A good seal is critical to the performance and reliability of the electrolysis process, ensuring that no unwanted substances enter the system and that the correct amount of liquid and gas flows through the cell. This article explains the most important aspects of seal geometries, materials for electrolytic seals and required equipment.
INTRODUCTION
Electrolysis seals play an important role in the chemical industry. They are used to reliably seal liquids and gases in electrolysis cells Pitschak et al. (2017). A good seal is of great importance for the performance of the electrolysis process. It ensures that no unwanted substances enter the system and that the right amount of liquid and gas flows through the cell. Loadman (2012) This article focuses on the key aspects of injection moulding tools, sealing geometries and materials used in electrolysis seals. The correct selection of these components is crucial to achieve an effective seal and ensure optimal functionality of the electrolysis system. A through consideration of these aspects enables manufacturers and engineers to develop and apply the best solutions for electrolysis seals.
SUCCESS FACTORS FOR AN ELECTROLYSIS SEAL
The main task of an electrolytic seal is to seal the contact surfaces of the assembly. For this, it is important that there is no relative movement between the components.
To achieve this goal, it is important to look at the process as a whole. Figure 1 shows that the success factors of an optimally functioning electrolytic seal are an interplay of the factor's geometry, material, injection mould, and the production process.
MATERIAL
When sealing gases and liquids in electrolysis, we face a number of challenges. An effective electrolysis seal must be able to provide gas tightness and pressure resistance. It is important to avoid leaks and prevent gas leakage to ensure efficient and safe electrolysis.
One way to overcome these challenges is to choose the right sealing material. EPDM (ethylene propylene diene (monomer) rubber) is a proven material used in many electrolysis applications. EPDM offers excellent chemical resistance to a wide range of substances and is relatively stable at elevated temperatures Loadman (2012) This material is well suited for general sealing requirements in electrolysis.
For more demanding applications, for example in environments with acids, a fluororubber (FKM) compound can be used as a sealing material. FKM is a high-tech material known for its outstanding chemical resistance. It can withstand even aggressive chemicals and offers very high temperature and pressure resistance. The use of FKM seals can be an effective solution when dealing with extreme conditions and demanding environments in electrolysis. Loadman (2012) In order to minimise gas bubble formation and leaks, not only the right material but also the right shape of the seal is crucial. The gasket geometry should be designed to allow optimal matching and pressure distribution between the components to be joined. Careful design and shaping can achieve an effective seal. Loadman (2012) Furthermore, regular inspection and maintenance of the electrolysis seals is of great importance. Wear or damage can occur over time and lead to leaks. Timely detection and maintenance of leaks or damaged seals is therefore crucial to ensure smooth operation of the electrolysis system.
Overall, the selection of the right sealing material and careful design of the sealing geometry is crucial to overcome the challenges of gas sealing and pressure resistance in electrolysis. By using materials such as EPDM or FKM and paying attention to the correct shaping, we can achieve a reliable and efficient seal that meets the requirements of electrolysis.
CHOICE OF THE APPROPRIATE GEOMETRY
O-ring and flat gaskets are often used as static seals that provide a reliable seal. Gronitzky (2017) However, additional sealing lips can provide a better seal at low forces and thus higher gas tightness and pressure resistance.
O-ring geometries are suitable when only a few plates are stacked on top of each other, because the force required for compression is very high Li (2023) Li (2023) In an electrolyzer, sometimes 100 or more plates are stacked on top of each other, all of which must be sealed together. In this task, O-ring seals are conceivable as a geometry for a sealing task, but are rather unsuitable, as a lot of pressure is required. NOK Foundation Coorperation (2015) A sensible alternative is to profile the seal, accordingly, as can be seen in Figure 3. Figure 3 shows that the contact surface area increases significantly with compression. In this respect, the required forces increase accordingly when not only two plates, but perhaps 100 plates are pressed together. Nielson (2022) A corresponding approach to solving this problem is profiling to increase the contact surface pressing force, as shown here in Figure 3.
INJECTION MOULD AND SUITABLE PROCESS
Choosing the right injection moulding tools for electrolysis seals is crucial to producing a reliable seal. A key factor in tool selection is the tool steel as well as the surface finish of the tool, as these factors influence the shaping of the electrolysis seal and thus its performance.
When manufacturing electrolysis gaskets, it is important to select suitable gating points as they have a critical influence on the moulding. Du et al. (2023) The placement of the gating points allows for optimal distribution of the material during the injection moulding process.
Careful selection of gating points can improve the quality of the electrolysis seals by minimizing unwanted defects such as air pockets or material distortion.
In addition, factors such as the cooling of the mould should also be considered when selecting the injection moulding tool. Combination of material and mould is a guarantee for function. Effective tempering is important to reduce cycle times and ensure uniform vulcanization of the electrolysis seal. In contrast, insufficient tempering can lead to uneven shrinkage and deformation of the seal, which can sometimes severely impair its performance.
SUMMARY AND OUTLOOK
In summary, the careful selection of the material, geometry, process sizing and injection moulding tool is of great importance for the production of reliable electrolysis seals. For the gasket, the appropriate material is necessary accordingly. A suitable geometry is also important. For the tooling, the right tool steel, the surface finish of the tool and the placement of the gating points are the critical factors to ensure optimal shaping and performance of the seals.
Only if the above-mentioned success factors are equally taken into account, then it can be guaranteed that the electrolysis seal will permanently fulfil its task. | 1,531.6 | 2023-06-02T00:00:00.000 | [
"Engineering"
] |
Progress of Optical Fabrication and Surface-Microstructure Modification of SiC
SiC has become the best candidate material for space mirror and optical devices due to a series of favorable physical and chemical properties. Fine surface optical quality with the surface roughness (RMS) less than 1 nm is necessary for fine optical application. However, various defects are present in SiC ceramics, and it is very difficult to polish SiC ceramic matrix with the 1 nm RMS. Surface modification of SiC ceramics must be done on the SiC substrate. Four kinds of surface-modification routes including the hot pressed glass, the C/SiC clapping, SiC clapping, and Si clapping on SiC surface have been reported and reviewed here. The methods of surface modification, the mechanism of preparation, and the disadvantages and advantages are focused on in this paper. In our view, PVD Si is the best choice for surface modification of SiC mirror.
Introduction
At present, mirror systems as the most important device are applied commonly in the optical system with high precision.Until now, three generations of reflector materials have been developed.The first generation is glass-ceramic; the second one is mainly made of metal, such as Beryllium metal and its alloys; the third generation of the reflector material is based on silicon carbide.SiC may be the best material available for mirror optics because of its outstanding combination of thermal and mechanical properties.It has remarkable dimensional stability even under the disturbances of temperature, humidity, and chemicals.Its specific stiffness and elastic modulus are higher than that of beryllium, which has toxicity.The density of SiC is slightly higher than aluminum and its fracture toughness is higher than glass.The remarkable properties of SiC in terms of hardness, stiffness, and thermal stability combined with a reasonable density are indeed of primary importance for all space applications.This combination of material advantages makes SiC an excellent material candidate for space optical instruments [1,2].
Based on microstructure and processing routes, four kinds of SiC ceramics including hot-pressed SiC (HP-SiC), reaction-bonded SiC (RB-SiC), sintered SiC (S-SiC), and chemical vapor deposition SiC (CVD-SiC) were developed.The properties of different SiC materials and the brief description of various SiC component manufacturing techniques are summarized in Table 1 [2,17].Whatever the preparation process, it is difficult to obtain high-quality optical surface due to polishing the bare SiC very difficultly.Moreover, microstructure defects, like pores, steps at different phases, and grain boundary damages, are unavoidable under certain surfacing condition and present further difficulty in polishing this material.The AFM topography images of surfaces polished by 1 um abrasive grains of different SiC ceramics are shown in Figure 1.The steps still exist at the interfaces between two phases both in RB-SiC and S-SiC, and it cannot meet the optical requirements (<1 nm RMS).The rms surface roughness values of SiC ceramics
Optical Fabrication of Silicon Carbide
However, RB-SiC is typically a difficult material to machine.SiC is harder than most other materials except diamond, cubic boron nitride (cBN), and boron carbide (B 4 C), and hence available cutting tool materials for machining RB-SiC are very limited.Recent efforts have focused on the precision machining of RB-SiC by grinding, lapping, polishing, and combinations of these [24,25].Main features and classification of optical fabrication methods of SiC are shown in Table 2 [26][27][28].
Surface Modification
Surface modification of SiC material is to add a thick film coating to the substrate of SiC for obtaining better surface quality and easier polishing.After fine polishing, the surface shows an extremely smooth surface and it can meet the optical requirement.The earliest study method is hot pressing glass, the recent developments of SiC surface modification is CVD-and PVD-coated SiC and Si, and C/SiC clapping is also applied to space optical devices.
SiC Clapping.
SiC clapping has the same mechanical properties and thermal physical properties as the SiC matrix, so that SiC clapping as a surface modification coating of silicon carbide mirrors attracts extensive attention of researchers.SiC coatings exhibit many outstanding properties, like good isotropic, high hardness, high thermal conductivity, and excellent optical performance characteristics.The amorphous SiC coating prepared by PVD technique with the low temperature and short cycle has also been reported.
CVD SiC
Clapping.Chemical vapor deposition has been widely applied in thick-film preparation from the 1960s, and its principle is that the reaction materials resulted from the decomposition of Si-containing precursors are deposited on the substrate surface to form a thin film.The film is of good uniformity and repeatability.Such a method is applied to the preparation of SiC coating.Polished CVD-SiC (crystalline cubic α-SiC) holds also good physical properties for making mirrors such as low density, high melting point, and low expansion coefficient.
The main precursors for CVD SiC is CH 3 SiCl 3 , and the reaction equations are as follows: CH 3 SiCl 3 (g) → SiC(s) + 3HCl(g).The dense SiC coating with excellent optical properties (surface roughness <0.3 nm RMS) can be obtained.It can meet the application requirements of the mirror surface The final surface accuracy of 80 nm RMS was reported by Murahara [10] Ion beam milling High-speed ion beam hits the surface of the sample
Good surface quality, expensive equipment
The final surface accuracy of 1 nm RMS was reported by Johnson et al. [11] Float polishing The sample is floating on the polishing plate by the high-speed rotating fluid dynamic pressure
Good surface quality
In [12], final surface accuracy of 3 nm RMS There are different treating methods for CVD SiC process, including atmospheric pressure chemical vapor deposition SiC (APCVD SiC), low-pressure chemical vapor deposition SiC (LPCVD SiC), and plasma enhanced chemical vapor deposition SiC (PECVD SiC) [32][33][34].Different methods for preparing CVD SiC are shown in Table 3.
CVD SiC coating has been widely applied to surface modification of SiC matrix.For example, CVD SiC with 0.2 nm RMS on the C/SiC substrates had been reported by Trex Advanced Materials [21].The CVD SiC on S-SiC substrates in the polished roughness of less than 0.1 nm has also been reported by BOOSTEC.In China, Zhang [35] applied this method to obtain CVD-SiC with the surface roughness of 1.478 nm RMS.
PVD SiC.
High temperature (typically 1300 • C) in the preparation process of CVD SiC may result in stronger film stress on the SiC mirror.This is due to the unacceptable high residual stress buildup in heavy cross-sections.The cost of producing CVD SiC as a large self-supporting substrate is very high.However, PVD SiC appears to be very attractive due to its relative simplicity, low substrate temperature, and wide accessibility by industry.
Ion-Assisted Deposition SiC (IAD SiC).
The ion implantation of semiconductors rapidly became an accessible technology from the 1970s because of its ability to produce superior electronic devices.Ion beam modification of nonsemiconductor materials for enhancing surface sensitive properties has been actively pursued in the international R&D community.The advantages of this technique include high density, superior adhesion, and the ability to produce arbitrarily thick coatings.Perhaps the most important feature of the IBAD technology is the frequently demonstrated ability to control many coatings properties such as morphology, adhesion, and stress, as well as stoichiometry.The amorphous SiC coating with large area can be prepared by this method on the SiC substrate.The α-SiC coating has been prepared on the RB SiC and polished with surface roughness up to 0.2 nm RMS by U.S. HDOS [3,36] using ion beam deposition method.Hall ion source [37] and high-energy Kaufman ion source [38] can both be used as the ion-beam resources for preparing α-SiC and its system of SiC-modified membrane and the surface roughness can be down to 0.867 nm RMS and 0.743 nm RMS.
Magnetron Sputtering SiC Clapping.Magnetron sputtering is used widely due to its low cost, high deposition rate, low deposition temperature, and good adhesion of the film feature.The SiC coatings were deposited by RF magnetron sputtering from a sintered SiC target onto commercially monopolished RB SiC substrate kept at low temperature.The deposition rate is fast and the desired temperature is low, but the film stability of its system is not sustainable [39].Magnetron sputtering SiC clapping was prepared with the roughness down to 1.394 nm RMS [40] and 3.184 nm RMS as shown in Figure 2 by Tang et al. [22] and the surface roughness of 2 angstrom by Kortright and Windt [41].
Pulsed Laser Deposition SiC (PLD SiC).PLD is a relatively new technique and attracting great attention for its simplicity, reduced investment cost, and flexibility.In the PLD process [42], a high flux pulsed laser beam is focused on the target material leading to the formation of a plasma plume.The high heating rate of the target surface (≈100 K/s) due to pulsed laser irradiation leads to a congruent evaporation of the target irrespective of the evaporating point of its constituent elements or compounds, so that the target stoichiometry can be retained in the deposited films.The major drawback of this technique is that the high energy involved in the process also leads to the formation of microparticulate on the thin film surface.Vendan et al. [43] has prepared α-SiC thin films by fs-PLD and ns-PLD techniques and found that the surface roughness of SiC films by the ns-PLD technique was bigger than that by fs-PLD.Magida et al. [32] has prepared SiC thin films which exhibit high reflectivity in the ultraviolet band (40.7 nm-121.6 nm) and the surface roughness of 1 nm RMS.
Ion Beam Sputtering SiC (IBS SiC).The ion beam engine concept was firstly developed in the United States.By the IBS technique, the SiC films being flat and smooth, a large area of dense internal stress and low defect density can be obtained.For example, the IBS SiC coatings have been prepared and polished to be less than 2 A RMS by Johnson [36] for the optical system requiring ultralow scatter performance.
Si Clapping.
The thermal properties of Si coating match well with that of SiC as shown in Table 4, then Si clapping can be used for the surface modification of SiC.It can be seen that their thermal performance is well matched, and Si can be used as a good reflector material.CVD and PVD are the main preparation methods.
CVD Si Clapping.
The Si clapping is easier to be polished well with better surface quality and low-cost SiCclapping.Polycrystalline Si produced by a scalable CVD process has exhibited a surface finish of 0.2 nm RMS.Polycrystalline Si was fabricated by reacting trichlorosilane (SiHCl 3 ) with excess H 2 in a hot-wall CVD reactor according to the reaction: SiHCl 3 + 2H 2 → Si(s) + 4HCl(g).However, it is not widely applied [14,44] because columnar structure is often present in CVD Si.
Polycrystalline Si [19] was also used to clad on several advanced ceramic materials such as SiC, sapphire, pyrolytic BN, and Si by a chemical vapor deposition (CVD) process.The thickness of Si cladding ranged from 0.025 to 3.0 mm.CVD Si adhered quite well to all the above materials, where the Si cladding was highly stressed and cracked.The surface roughness can reach 0.2 nm RMS after polishing.Amorphous silicon thin films were formed by chemical vapor deposition by Choi et al. [45].The amorphous silicon films without reflector bias voltage exhibit 0.119 nm RMS, but down to 0.171 nm RMS with reflector bias voltage of −120 V, respectively.
PVD Si
Clapping.CVD Si is generally prepared with high temperature (>600 • C), and the PVD Si film is widely used.The application of Si coating can reduce surface wearing resistance of SiC ceramic without changing mechanical properties of the bulk materials.The efficiency of surface finishing for large optical components can be greatly improved as well [23].Therefore, PVD Si appears to be very attractive because of its relative simplicity, low substrate temperature, and wide accessibility by industry.Si clapping is easier to be polished well because it is single-phase material without the different heterogeneous phase in the polishing process.PVD Si coating is now becoming a preferred method of the SiC surface modification.
Vacuum Evaporation Si Clapping.Vacuum evaporation deposition of Si film is processed in a high vacuum by heating the Si gasification or sublimation condensing into Si film and thus being deposited on SiC substrate surface.The method is relatively simple and has high deposition rate.But the columnar structure is present in Si film and the physical property of Si film is not stabile.Zhao [46] has obtained amorphous Si films by thermal evaporation.In recent years, this method has been improved.Fang et al. [47] deposited Si film on steel and alumina substrates by electron beam evaporation of solid silicon.Magnetron Sputtering Si Clapping.Magnetron sputtering (belongs to PVD) appears to be very attractive due to its relative simplicity and low substrate temperature.Many researchers have studied this method, Aoucher et al. [48] deposited amorphous silicon by DC magnetron sputtering on a quartz substrate at a rate around 1.5 nm/s.Tang et al. [22] has used RF magnetron sputtering method to prepare Si film with the surface roughness from 17.992 nm RMS to 1.183 nm RMS as shown in Figure 3.
Ion-Beam-Assisted Si Clapping.Hydrogenated amorphous silicon (a-Si:H) films are generally prepared by glowdischarge decomposition of silane or by sputtering of silicon in an argon-hydrogen mixture.The reaction temperature is low (200 • C) and the preparation parameters can be controlled.So it was used to prepare amorphous silicon films.Photoconductive hydrogenated amorphous silicon films were deposited by ion-beam-assisted evaporation using hydrogen-argon plasma.Surface modification for the RB-SiC substrate [49] is carried out using e-beam evaporation with plasma ion assisted.The surface roughness of the RB-SiC substrate is reduced to 0.0632 nm, the scattering coefficient is reduced to 2.81%, and the average reflectance from 500 nm to 1000 nm is raised to 97.05%; these data indicate that good optical quality similar to that of the fine polished glass ceramics can be obtained by the modification process.
Plasma-Ion-Assisted of Deposition Si Clapping.For the production of thin films of high quality standard, thermal evaporation techniques are applied with the assistance of ion sources which provide additional energy and momentum to influence the growth process.Larger ion current densities on increased substrate areas can be generated by employing plasma sources, and the process is plasma-ion-assisted deposition (PIAD) [50].In PIAD, a growing thin film is bombarded by energetic ions from a plasma ion source and the columnar microstructure of the film is disrupted, resulting in the improved optical and mechanical properties of thin films [51].Liu et al. [23] has prepared Si thin film by this method and obtained continuous, homogenous, wellbonded amorphous Si coatings on SiC ceramics.It means that the SiC substrate can be fully covered up and the effect of substrate surface defects on the surface morphology of the Si coating can be overlooked as shown in Figure 4.
Recently, the PVD Si coatings on SiC substrates have been investigated.SSG has applied PVD Si in optical systems of the GEO telescope [52], designed and constructed for an SBIR program funded by NASA's Marshall Space Flight Center (MSFC).The SiC telescope and "GOES-like" scan mirror are designed to "generic" GEO specifications, and the surface roughness decreased to 0.4 nm RMS after polishing.In China, the PVD Si coatings deposited on the surface of polished RB-SiC and S-SiC were demonstrated to improve the optical surface quality after being polished by Tang et al. [22].Both the surfaces of PVD Si coating on RB-SiC and S-SiC are smoother than that of bare SiC.The RMS can reach to the angstrom grade, and the reflectance improves significantly.
Conclusion
Silicon carbide as the third generation of space mirror material has attracted more and more attention and is widely applied.Silicon carbides prepared by different preparation methods have their advantages, but still cannot meet the optical requirements (<1 nm RMS) after the current optical processing.The surface roughness and reflection rate of the SiC material can be well improved after surface modification.Especially in the CVD SiC coating and PVD-Si coating on the SiC ceramic, surface roughness of Angstrom level can be reached.All kinds of surface-modification methods have been developed and every method has its disadvantages.The hot press glass has some drawbacks for usage of the glass cladding on large surfaces.The C/SiC coating may not be suitable for low surface roughness (<1 nm RMS).The CVD process with high temperature (>1000 • C) would lead to the deformation of SiC matrix.PVD method to prepare SiC-film is slower and the modified film is very difficult to be polished.In our view, PIAD Si has low reaction temperature (<300 • C) and is very easy to be polished.The preparation process is relatively simple, and reproducible preparation of siliconmodified layer gives a dense structure, combined with the base firmly.Therefore, PVD Si is the best choice for surface modification of SiC mirror because of its relative simplicity, low substrate temperature, and wide accessibility by industry, especial PIAD Si.
Figure 4 :
Figure 4: Surface topography of Si coating tested by AFM, reprinted with permission from [23].
Table 1 :
Different preparation methods and properties of silicon carbide.
[19][20][21]inishing for large optical components can be greatly improved as well.Moreover, microstructure defects on polished surface of SiC ceramic mentioned above can be covered up by the coating process.This paper presents the optical surface processes and the recent developments of SiC substrate by hot pressed glass, C/SiC clapping, CVD-and PVD-coated SiC and Si coating method in detail[19][20][21].
Table 2 :
Main features of optical fabrication methods of SiC.
Table 3 :
Different methods for preparing CVD SiC.
and it is one of the most effective methods for surface modification to prepare SiC-based reflection mirror.But the CVD process with high substrate temperature (>1000 • C) would lead to the deformation of SiC-matrix.Another disadvantage of the CVD process is time consumption.
Furthermore, because of the high heating rate of the ablated materials, PLD of crystalline films demands a much lower substrate temperature than other film growing techniques. | 4,085.2 | 2012-01-01T00:00:00.000 | [
"Materials Science",
"Engineering",
"Physics"
] |
Exploring Bioequivalence of Dexketoprofen Trometamol Drug Products with the Gastrointestinal Simulator (GIS) and Precipitation Pathways Analyses
The present work aimed to explain the differences in oral performance in fasted humans who were categorized into groups based on the three different drug product formulations of dexketoprofen trometamol (DKT) salt—Using a combination of in vitro techniques and pharmacokinetic analysis. The non-bioequivalence (non-BE) tablet group achieved higher plasma Cmax and area under the curve (AUC) than the reference and BE tablets groups, with only one difference in tablet composition, which was the presence of calcium monohydrogen phosphate, an alkalinizing excipient, in the tablet core of the non-BE formulation. Concentration profiles determined using a gastrointestinal simulator (GIS) apparatus designed with 0.01 N hydrochloric acid and 34 mM sodium chloride as the gastric medium and fasted state simulated intestinal fluids (FaSSIF-v1) as the intestinal medium showed a faster rate and a higher extent of dissolution of the non-BE product compared to the BE and reference products. These in vitro profiles mirrored the fraction doses absorbed in vivo obtained from deconvoluted plasma concentration–time profiles. However, when sodium chloride was not included in the gastric medium and phosphate buffer without bile salts and phospholipids were used as the intestinal medium, the three products exhibited nearly identical concentration profiles. Microscopic examination of DKT salt dissolution in the gastric medium containing sodium chloride identified that when calcium phosphate was present, the DKT dissolved without conversion to the less soluble free acid, which was consistent with the higher drug exposure of the non-BE formulation. In the absence of calcium phosphate, however, dexketoprofen trometamol salt dissolution began with a nano-phase formation that grew to a liquid–liquid phase separation (LLPS) and formed the less soluble free acid crystals. This phenomenon was dependent on the salt/excipient concentrations and the presence of free acid crystals in the salt phase. This work demonstrated the importance of excipients and purity of salt phase on the evolution and rate of salt disproportionation pathways. Moreover, the presented data clearly showed the usefulness of the GIS apparatus as a discriminating tool that could highlight the differences in formulation behavior when utilizing physiologically-relevant media and experimental conditions in combination with microscopy imaging.
Introduction
The development of generic oral drug products containing dexketoprofen trometamol (DKT, weak acid salt, Biopharmaceutics Classification System (BCS) class 1 drug) is challenging as the reference product does not dissolve rapidly. Since the dissolution of the reference product is not complete (<85%) in 30 min in the paddle apparatus at 50 rotations per minute (rpm) in any of the Biopharmaceutics Classification System's (BCS) buffer media, a biowaiver approach is currently not permitted [1,2].
In Spain, three out of four formulations of DKT tablets failed the first in vivo bioequivalence (BE) study [3]. These products were previously tested with the European Medicines Agency (EMA) dissolution method requested for biowaiver applications, i.e., performing dissolution tests in USP-2 apparatus at 50 rpm with different buffers at pH 1.2, 4.5, and 6.8. Garcia-Arieta and co-workers showed the relevance of the agitation rate (50 rpm versus 75 rpm) on the dissolution profile outcomes [3]. Dissolution profiles of one DKT product using USP apparatus 2 (pH 1.2, 4.5, and 6.8) exhibited profiles (f 2 < 50) that were not similar to in vivo BE. Another product exhibited in vitro BE (f 2 > 50) but failed the in vivo BE study. Therefore, the USP apparatus 2 did not reflect the in vivo BE outcome.
The aim of this work was to determine the reasons for the differences in dissolution behavior between bioequivalent (BE) and non-bioequivalent (non-BE) DKT products. First, a physiologically-relevant, multi-compartmental dissolution apparatus, the gastrointestinal simulator (GIS), was evaluated to ascertain whether it could reflect the in vivo BE outcomes. Both the DKT products as well as the reference product were studied in the GIS. In the second step, salt to free acid precipitation pathways during dissolution of DKT were examined by inverted microscopy to identify the factors that influenced drug precipitation.
Chemicals
Three different formulations were tested in the GIS and USP-2 apparatus: the reference Spanish marketed product (Enantyum ® , Laboratorios Menarini S.A., Barcelona, Spain) and two generic drug products. Acetonitrile was obtained from VWR International (West Chester, PA, USA). Methanol (MeOH), HCl, and trifluoroacetic acid (TFA) were purchased from Fisher Scientific (Pittsburgh, PA, USA). NaOH, NaCl, and NaH 2 PO 4 .H 2 O were received from Sigma-Aldrich (St. Louis, MO, USA). Purified water (i.e., filtrated and deionized) was used in the analysis methods and in dissolution studies to prepare the dissolution media (Millipore, Billerica, MA, USA). Simulated intestinal fluid (SIF) powder was obtained from Biorelevant (Croydon, UK). Table 1 represents the qualitative composition for each formulation in terms of excipients and coating material.
The main difference between both test products is that there is calcium phosphate in the tablet core of the non-BE product. Table 1. Qualitative differential composition of the reference marketed drug product and the test products. The ingredients in bold are the added excipients to the core or the coating of the tablet for both test products, which was not presented in the reference marketed drug product. Table 1. Qualitative differential composition of the reference marketed drug product and the test products. The ingredients in bold are the added excipients to the core or the coating of the tablet for both test products, which was not presented in the reference marketed drug product.
The main difference between both test products is that there is calcium phosphate in the tablet core of the non-BE product.
Design of the In Vitro Dissolution Studies Performed with the GIS
The GIS is a three-compartmental dissolution device, which consists of (i) a gastric chamber (GISstomach), (ii) a duodenal chamber (GISduodenum), and (iii) a jejunal chamber (GISjejunum). The design of the GIS is depicted in Figure 1. The different dissolution protocols that were applied to test the different formulations in the multicompartmental GIS device are shown in Tables 2 and 3. Table 2 represents the dissolution experiments that were performed in the absence of NaCl (i.e., Protocol 1). The gastric chamber contained simulated gastric fluid (SGF) and the duodenal compartment contained phosphate buffer, pH 6.8 (50 mM). We will refer to this test condition as the standard dissolution "Protocol 1" throughout the manuscript. To explore the impact of endogenous constituents present in the stomach (i.e., NaCl) and in the small intestine (i.e., bile salts and phospholipids), Protocol 2 was developed. In The different dissolution protocols that were applied to test the different formulations in the multicompartmental GIS device are shown in Tables 2 and 3. Table 2 represents the dissolution experiments that were performed in the absence of NaCl (i.e., Protocol 1). The gastric chamber contained simulated gastric fluid (SGF) and the duodenal compartment contained phosphate buffer, pH 6.8 (50 mM). We will refer to this test condition as the standard dissolution "Protocol 1" throughout the manuscript. To explore the impact of endogenous constituents present in the stomach (i.e., NaCl) and in the small intestine (i.e., bile salts and phospholipids), Protocol 2 was developed. In that case, the impact of NaCl on the conversion from salt to free acid could be investigated in the gastric compartment (i.e., acidic pH), and how the created solution concentrations will further behave in the duodenal compartment in a more biorelevant setting. Table 3 represents a higher level of biorelevant dissolution testing by using SGF in the gastric compartment in the presence of NaCl. The duodenal compartment contains fasted state simulated intestinal fluid (FaSSIF-v1). We will refer to this test condition as the standard dissolution "Protocol 2" throughout the manuscript. The above-mentioned formulations were introduced into the GIS stomach at the start of the experiment. Gastric emptying was set to a first-order kinetic process with a rate corresponding to a gastric half-life of 13 min, in accordance with the reported half-life in humans for liquids, ranging from 4 to 13 min [5]. Duodenal volume was kept constant at 50 mL by balancing the input (i.e., gastric emptying and duodenal secretion) with the output flow. The jejunal compartment was empty at the beginning of the experiment. Fluid from GIS stomach was transferred to the GIS duodenum and then to the GIS jejunum with the aid of two Ismatec REGLO peristaltic pumps (IDEX Health and Science, Glattbrugg, Switzerland). Same pumps were used for the gastric and duodenal secretion fluids. All peristaltic pumps were calibrated prior to the start of the experiment. The CM-1 overhead paddles (Muscle Corp., Osaka, Japan) stirred at a rate of 20 rpm in the gastric and duodenal chambers. For every 25 s, a high-speed, quick burst (500 rpm) was cyclically repeated to mimic gastrointestinal (GI) contractions and to homogenize the compartment facilitating the solid particle transfer from one chamber to the next one. The jejunal chamber was stirred with a magnetic bar at an approximate rate of 50 rpm. All experiments were performed at 37 • C. After 60 min, pumps were shut down as the gastric content was emptied. Concentrations in the GIS duodenum and GIS jejunum were still measured up to 120 min. Samples were withdrawn from the GIS compartments at predetermined time-points up to 120 min in order to measure the dissolved amount of DKT. The pumps and overhead paddles were controlled by an in-house computer software program. Solution concentrations were determined by centrifuging 300 µL of the withdrawn sample for 1 min at a speed of 17,000 g (AccuSpin Micro 17, Fisher Scientific, Pittsburgh, PA, USA). After centrifugation, 100 µL of the supernatant was diluted 1:1 with MeOH, and the MeOH sample was diluted 1:1 again with 0.1 N HCl and transferred to high performance liquid chromatography (HPLC) capped vials. All obtained samples were analyzed by HPLC (see below Section 2.8).
Design of the In Vitro Dissolution Studies Performed with USP-2
To investigate the impact of each region of the human GI tract separately, single-compartmental dissolution studies were performed. Dissolution studies in the USP-2 (paddle) apparatus were performed at 37 • C and 30 rpm in 500 mL of fluid. Three tablets of each formulation were tested in four different media: (1) FaSSIF-v1 at pH 6.5; (2) 0.01 N HCl (pH 2); (3) 0.01 N HCl + 34 mM NaCl; and (4) 0.01 N HCl + 135 mM. The concentrations of Na + and Cl − measured in human gastric fluids are equal to 68 ± 29 mM and 102 ± 28 mM, respectively [6]. Samples of 500 µL were taken and immediately centrifuged and diluted as described previously.
In Silico Deconvolution to Obtain In Vivo Bioavailability Input Rate
Intravenous pharmacokinetic data were obtained from Valles and co-workers [7]. A twocompartmental pharmacokinetic (PK) open model was fitted to the data to get DKT disposition constants as depicted in Table 4. Table 4. Disposition parameters of DKT for a two-compartmental pharmacokinetic (PK) model: V1 represents the central compartment volume; K 10 represents a first-order elimination rate constant; K 12 and K 21 reflect the two rate constants distributing the drug between the peripheral and central compartment, respectively. PK parameters were used to apply Loo-Riegelman mass balance deconvolution method in order to obtain the plots of bioavailable fractions versus time profile of all the assayed formulations. As oral plasma data were obtained from different BE studies, the plasma concentration-time profiles for all test formulations were normalized using the reference formulations ratios at each time point between both BE studies [8,9]. Similar normalization results were obtained by using the area under the curve (AUC) references ratios (data not shown).
Description of the Two-Step In Vitro-In Vivo Correlation (IVIVC)
Fractions dissolved in jejunal chambers of each formulation were used to develop the two-step IVIVC. To estimate the fractions dissolved, the maximum amount of DKT dissolved among the three formulations was used to transform amounts into fractions. Bioavailable fractions obtained by Loo-Riegelman method of each formulation at each time point versus the fractions dissolved of the corresponding formulation at the same time points were represented. For non-coincident sampling times in vitro versus in vivo, the corresponding dissolved or absorbed fractions were estimated by linear interpolation between the previous and next time point. The obtained IVIVC relationship was internally validated-theoretical fractions absorbed were calculated from the experimental fractions dissolved by using the IVIVC equation. The fractions absorbed were back-transformed toward concentrations by applying Equation (1) [10].
where C T is the plasma concentration at time t; C T−1 is the plasma concentration at the previous time point (T − 1); (X A ) T is the absorbed amount at time t; (X P ) T−1 is the amount in the peripheral compartment at the previous sampling time; ∆t is the time interval between two consecutive sampling times; V c is the central compartment volume; K 12 and K 21 are the distribution constants and K el the elimination rate constant from the central compartment. The peripheral "concentrations" were estimated with Equation (2) [11]: where K 12 and K 21 are the values obtained previously from literature in Table 4. The predicted plasma levels were used to estimate plasma C max and AUC predicted values to be compared with the experimental ones and to estimate the relative prediction error (Equation (3)):
Evaluation of DKT to Free Acid Conversion Pathways/Kinetics During Salt Dissolution
DKT to free acid conversion was studied in situ by optical microscopy. The studies were conducted at room temperature (22-23 • C) using an inverted optical microscope (Leica DMi8, Wetzlar, Germany) and 10×, 20×, or 40× magnification objective lenses. An inverted microscope has the advantage of a long focal length that allows examination of the phases formed during dissolution without having to remove the solution. Two concentration levels of both DKT and excipients were studied by varying the amount of DKT and excipients added to 96-well plates followed by the addition of 300 µL of hydrochloric acid (pH 2 (0.01 M) and 34.2 mM NaCl) with pre-dissolved tablet excipients. The influence of excipients was determined by dissolving formulation excipients in the dissolution media prior to DKT salt addition. The high concentration level (C H ) corresponds to 685 ± 23 µg of salt added to a 300 µL aliquot of a solution of 1 tablet dissolved in 20 mL, whereas the low concentration (C L ) corresponds to 38 ± 1 µg of salt added to a 300 µL aliquot of 1 tablet dissolved in 300 mL. From that point of view, the high concentration (C H ) is 18 times higher than the low concentration (C L ).
Brightfield images were collected with a Leica DMC2900 camera controlled with LAS v4.7 software (Leica Microsystems, Wetzlar, Germany). Solid particles of the free acid were added at two different levels, representing <3% (w/w) and 3% (w/w) relative to the total amount of salt present in the well. In that way, the influence of salt purity on drug precipitation could be determined.
Solubility and pH max Determination
Drug solubility was measured by adding the DKT to solutions at various pH values and stirring at 37 • C for 24 h. The pH was adjusted by adding HCl or NaOH to the solutions. Solubility values were used to calculate the salt solubility product, K sp , according to the following equation where [DK − ] represent the concentration of ionized drug and [TH + ] represents the concentration of counterion. The pH max was calculated from the intersection of the DKT and free acid solubility curves generated according to equations presented in the results section. The pH max refers to the pH where both the DKT and free acid have equal solubilities.
Concentration Analysis of DKT by HPLC
DKT concentrations in the samples were measured by HPLC-UV (Hewlett Packard series 1100 HPLC Pump combined with Agilent Technologies 1200 Series Autosampler). A volume of 75 µL was injected into the HPLC system (Waters 515 HPLC Pump with Waters 717 Autosampler). DKT was detected with an UV lamp at 262 nm (Water 996 Photodiode Array Detector). The mobile phase consisted of 60:40 mixture of acetonitrile and purified water A (both containing 0.1% TFA). Stationary phase was a C-18 Agilent Eclipse XDB (4.6 × 150 mm; 3.5 µm). Elution flow was 1 mL/min and retention time for DKT was 3.95 min. Calibration curves were made in mobile phase based on a stock solution of DKT in methanol. Linearity was observed between 1.5 µg/mL and 300 µg/mL covering all the experimental sample values. The observed peaks were integrated using Millenium software (Agilent Technologies, County of Santa Clara, CA, USA). The developed analytical method met the standards for precision and accuracy.
Data Analysis and Presentation
Dissolution profiles of DKT in all GIS compartments were plotted either as drug concentration or mass of drug versus time (average ± standard deviation; n = 4). Dissolution profiles from USP-2 experiments were represented as the fraction dose dissolved versus time (average n = 3).
Solubilities and Solution Stabilities of DKT and Free Acid Solid Forms as a Function of pH
Dexketoprofen is a lipophilic (LogP 3.61) weak acid with pK a of 4.02 at 37 • C [12,13]. The DKT salt was developed to enhance its solubility over the free acid and improve dissolution in the GI tract. Salt formation is a well-known strategy to increase the solubility of either lipophilic weak acids or bases in order to improve oral absorption. Nevertheless, the expected benefits of forming a salt may not work if the level of supersaturation leads to drug precipitation to the free acid or base, thereby reducing the drug exposure levels for absorption [14][15][16]. The "supersaturation/precipitation interactive process" depends on the characteristics of the weak acid or base and, not unimportant, on the dissolution study design with respect to media composition and hydrodynamics that will determine the bulk and interfacial pH around the dissolving particles.
the experimental sample values. The observed peaks were integrated using Millenium software (Agilent Technologies, County of Santa Clara, CA, USA). The developed analytical method met the standards for precision and accuracy.
Data Analysis and Presentation
Dissolution profiles of DKT in all GIS compartments were plotted either as drug concentration or mass of drug versus time (average ± standard deviation; n = 4). Dissolution profiles from USP-2 experiments were represented as the fraction dose dissolved versus time (average n = 3).
Solubilities and Solution Stabilities of DKT and Free Acid Solid Forms as a Function of pH
Dexketoprofen is a lipophilic (LogP 3.61) weak acid with pKa of 4.02 at 37 °C [12,13]. The DKT salt was developed to enhance its solubility over the free acid and improve dissolution in the GI tract. Salt formation is a well-known strategy to increase the solubility of either lipophilic weak acids or bases in order to improve oral absorption. Nevertheless, the expected benefits of forming a salt may not work if the level of supersaturation leads to drug precipitation to the free acid or base, thereby reducing the drug exposure levels for absorption [14][15][16]. The "supersaturation/precipitation interactive process" depends on the characteristics of the weak acid or base and, not unimportant, on the dissolution study design with respect to media composition and hydrodynamics that will determine the bulk and interfacial pH around the dissolving particles.
The influence of pH on the stability of the DKT salt and DK free acid was determined by examining the solubility-pH profiles presented in Figure 2. These results show that DKT salt has a pHmax at 6.7, where both salt and free acid have equal solubilities; thus, both phases are stable. Below pHmax, the salt is more soluble than the free acid, and generates supersaturation with respect to the acid. Supersaturation is expressed as the ratio of salt to free acid solubility, Ssalt/Sacid/intrinsic. The lower the pH below pHmax, the higher is the supersaturation that the salt may generate, and the higher is the driving force for salt to free acid conversion. On the other hand, the salt is stable at pH ≥ pHmax. Solubility−pH dependence of free acid and DKT salt indicating the stability regions for salt and free acid solid-state forms and the conditions under which dissolution-precipitation microscopy studies were carried out. Salt has a pHmax of 6.7 below which supersaturation with respect to free acid can occur. Two salt concentrations were studied: CL represents the low dose concentration. CH represents a higher concentration of 18x CL, as described in the Materials and Methods section. Arrows represent the pH changes that different salt formulations experienced. Green X represents initial concentration and pH. As the salt dissolves, the bulk pH increased to 2.7 ± 0.2 (orange X) for the bioequivalence (BE) and reference formulation excipients, whereas the pH increased up to 5.3 (orange+) for the non-bioequivalence (non-BE) formulation excipients. Solubility curve for salt (red Figure 2. Solubility−pH dependence of free acid and DKT salt indicating the stability regions for salt and free acid solid-state forms and the conditions under which dissolution-precipitation microscopy studies were carried out. Salt has a pH max of 6.7 below which supersaturation with respect to free acid can occur. Two salt concentrations were studied: C L represents the low dose concentration. C H represents a higher concentration of 18x C L , as described in the Materials and Methods section. Arrows represent the pH changes that different salt formulations experienced. Green X represents initial concentration and pH. As the salt dissolves, the bulk pH increased to 2.7 ± 0.2 (orange X) for the bioequivalence (BE) and reference formulation excipients, whereas the pH increased up to 5.3 (orange+) for the non-bioequivalence (non-BE) formulation excipients. Solubility curve for salt (red line) was calculated from Equation (6), using K sp and pK a,DK reported in the text and pK a,T = 8.1 [17]. The free acid solubility curve (blue line) was calculated according to Equation (7). Open circles represent DK measured solubilities at 37 • C. The dashed red line represents supersaturated conditions with respect to DK if solutions are saturated with salt.
The influence of pH on the stability of the DKT salt and DK free acid was determined by examining the solubility-pH profiles presented in Figure 2. These results show that DKT salt has a pH max at Pharmaceutics 2019, 11, 122 8 of 18 6.7, where both salt and free acid have equal solubilities; thus, both phases are stable. Below pH max , the salt is more soluble than the free acid, and generates supersaturation with respect to the acid. Supersaturation is expressed as the ratio of salt to free acid solubility, S salt /S acid/intrinsic . The lower the pH below pH max , the higher is the supersaturation that the salt may generate, and the higher is the driving force for salt to free acid conversion. On the other hand, the salt is stable at pH ≥ pH max .
Given the salt solubility at pH max and the free acid S 0 values, supersaturation with respect to free acid can be very high (>500) causing drug precipitation and depletion of drug concentration levels. Salt to drug conversions were examined by microscopy at two salt concentrations, one equivalent to the dose and one higher, as indicated in the graph. Although at C L the bulk solution is undersaturated with respect to free acid, the salt particles can exhibit supersaturation at the salt/liquid interface as this region is saturated with respect to salt. The solubility−pH profiles for the salt and the free acid were generated according to equations derived from the solution chemistry equilibria. For the salt, the equilibrium reaction is The equilibrium constant for this reaction is the salt solubility product (K sp ) given by Equation (4). K sp of DKT was determined to be 4.96 × 10 −1 M 2 , from the measured [DK − ] or salt solubility, S salt = 7.04 × 10 −1 M at pH 6.8. While K sp is constant with pH, salt solubility is not, and its dependence on pH is given (assuming no precipitation of protonated dexketoprofen and no solubility-limiting effect by other ions in the medium) by S salt = K sp 1 + 10 pK a,DK −pH 1 + 10 pH−pK a,T where K a,DK and K a,T are the acid and base dissociation constants of the salt constituents. For the free acid, the solubility in terms of pH is expressed by S acid = S 0 1 + 10 pH−pK a,DK where S 0 is the intrinsic solubility of the free acid, determined to be 1.36 × 10 −3 M. The pH max was also calculated applying the following equation: obtained by solving Equations (6) and (7) for pH when S salt = S acid at pK a,DT < pH < pK a,T , under conditions where both drug and counterion are fully ionized. The pH max value of 6.7 obtained by this equation is equal to that obtained graphically because trometamol is still almost completely ionized at this pH.
Formulation Performance of the DKT Formulations in the GIS with Protocols 1 and 2
Since the GIS can incorporate the dynamic shift in fluid pH and composition as the dosage form transits from the stomach to the intestine, it has previously shown utility in predicting the in vivo performance of weak bases [4,[18][19][20][21][22]. In this study, GIS dissolution experiments were performed using two different protocols, representing two different medium compositions in the gastric and intestinal compartments. Whereas Protocol 1 contained SGF in the gastric compartment and pH 6.8 phosphate buffer in the intestinal compartments, in Protocol 2, sodium chloride was added to SGF in the gastric compartment and FaSSIF-v1 was added to the intestinal compartment. Figures 3 and 4 include the observed solution DKT concentrations as a function of time for the three different formulations as tested in the GIS device, applying Protocol 1 and Protocol 2, respectively. Remarkably, differences in dissolution behavior were observed in the gastric compartment of the GIS apparatus in the presence and absence of NaCl. When NaCl was absent from the gastric medium (Protocol 1), the gastric dissolution profiles did not discriminate between the three formulations. However, when NaCl was added to the gastric medium (Protocol 2), differentiation was observed between the formulations, whereby the non-BE formulation dissolved earlier and to a great extent, as was observed in vivo (deconvoluted profiles). The addition of NaCl to SGF resulted in observed differences in the disintegration behavior in the gastric chamber as discussed further in the next section. Remarkably, differences in dissolution behavior were observed in the gastric compartment of the GIS apparatus in the presence and absence of NaCl. When NaCl was absent from the gastric medium (Protocol 1), the gastric dissolution profiles did not discriminate between the three formulations. However, when NaCl was added to the gastric medium (Protocol 2), differentiation was observed between the formulations, whereby the non-BE formulation dissolved earlier and to a great extent, as was observed in vivo (deconvoluted profiles). The addition of NaCl to SGF resulted in observed differences in the disintegration behavior in the gastric chamber as discussed further in the next section. The differences in dissolution rates across the three formulations as observed in the GISstomach with Protocol 2 were maintained after the transfer to the duodenal chamber. Finally, the GISjejunum accumulated the differences and the jejunum cumulative dissolution profiles of the three assayed formulations followed the same trend as the oral fractions absorbed obtained from deconvolution of plasma profiles, as depicted in Figure 5. The differences in dissolution rates across the three formulations as observed in the GIS stomach with Protocol 2 were maintained after the transfer to the duodenal chamber. Finally, the GIS jejunum accumulated the differences and the jejunum cumulative dissolution profiles of the three assayed formulations followed the same trend as the oral fractions absorbed obtained from deconvolution of plasma profiles, as depicted in Figure 5. Measured DKT concentrations in the GISstomach were the result of the balance between the supersaturation factor of DKT promoted by the salt and the precipitation of the free acid. That balance evolved differently in the presence or in the absence of NaCl. Potential reasons for these observations are the differences in solubility of sodium dexketoprofen versus the trometamol salt and the increased solubility of calcium monohydrogen phosphate in the presence of NaCl. After transfer to the duodenal chamber, a reflection of the gastric dissolution profiles was observed in the FaSSIF-v1 media, using Protocol 2. In Protocol 1, no differences between dissolution profiles were observed in the duodenal chamber. It could be due to the fact that the three formulations already behaved similarly in the GISstomach but, on the other hand, the higher buffer strength of the 50 mM phosphate buffer used in Protocol 1 readily promoted DKT dissolution hiding the effect of calcium phosphate on solid surface pH.
As for why did the differences between the studied formulations appear with Protocol 2 but not Protocol 1, the USP paddle dissolution results in FaSSIF-v1, shown in Figure 6, indicate that the incorporation of NaCl into the simulated gastric fluid rather than the use of FaSSIF-v1 is the primary reason. This is rather intriguing due to the low NaCl molarity present in the GISstomach owing to the six-fold dilution with water. Applying the ionic strength and activity coefficient calculations shows that any effect on the DKT and/or calcium phosphate behavior would be marginal at those NaCl concentrations. Our current hypotheses for possible causes have not yet been experimentally tested. Therefore, additional future studies are planned to investigate the possible causes behind this effect.
The Impact of NaCl on Disintegration and Dissolution
Results of the dissolution experiments in the USP-2 apparatus using four different media (FaSSIF-v1 at pH 6.5; 0.01 N HCl (pH 2); 0.01 N HCl + 34 mM NaCl; and 0.01 N HCl + 135 mM), which were designed to explore the impact of NaCl on formulation disintegration/drug dissolution, are depicted in Figure 6. The different concentrations of NaCl cover the observed values as observed in the human stomach [6]. Measured DKT concentrations in the GIS stomach were the result of the balance between the supersaturation factor of DKT promoted by the salt and the precipitation of the free acid. That balance evolved differently in the presence or in the absence of NaCl. Potential reasons for these observations are the differences in solubility of sodium dexketoprofen versus the trometamol salt and the increased solubility of calcium monohydrogen phosphate in the presence of NaCl. After transfer to the duodenal chamber, a reflection of the gastric dissolution profiles was observed in the FaSSIF-v1 media, using Protocol 2. In Protocol 1, no differences between dissolution profiles were observed in the duodenal chamber. It could be due to the fact that the three formulations already behaved similarly in the GIS stomach but, on the other hand, the higher buffer strength of the 50 mM phosphate buffer used in Protocol 1 readily promoted DKT dissolution hiding the effect of calcium phosphate on solid surface pH.
As for why did the differences between the studied formulations appear with Protocol 2 but not Protocol 1, the USP paddle dissolution results in FaSSIF-v1, shown in Figure 6, indicate that the incorporation of NaCl into the simulated gastric fluid rather than the use of FaSSIF-v1 is the primary reason. This is rather intriguing due to the low NaCl molarity present in the GIS stomach owing to the six-fold dilution with water. Applying the ionic strength and activity coefficient calculations shows that any effect on the DKT and/or calcium phosphate behavior would be marginal at those NaCl concentrations. Our current hypotheses for possible causes have not yet been experimentally tested. Therefore, additional future studies are planned to investigate the possible causes behind this effect.
The Impact of NaCl on Disintegration and Dissolution
Results of the dissolution experiments in the USP-2 apparatus using four different media (FaSSIF-v1 at pH 6.5; 0.01 N HCl (pH 2); 0.01 N HCl + 34 mM NaCl; and 0.01 N HCl + 135 mM), which were designed to explore the impact of NaCl on formulation disintegration/drug dissolution, are depicted in Figure 6. The different concentrations of NaCl cover the observed values as observed in the human stomach [6]. The presence of NaCl mainly affected the disintegration and dissolution process of the non-BE formulation resulting in an enhanced dissolution rate in the presence of NaCl, which was not observed for the BE-formulation and the reference drug product. While performing these dissolution experiments, remarkable differences in disintegration behavior could also be observed between the non-BE formulations and the other two formulations.
The faster dissolution of DKT from the non-BE formulation compared to the other formulations under Protocol 2 is most likely related to the high content of calcium phosphate in the tablet core of the non-BE formulation, which is not present in the other formulations. This high level of calcium phosphate in the tablet core of the non-BE formulation can increase the pH at the solid surface accelerating the dissolution of DKT and also facilitating tablet disintegration. Modulation of microenvironmental pH has been shown as an effective strategy to modulate the dissolution rate of GDC-0810, a weak acid of an oral anticancer drug, by using sodium bicarbonate to change solid surface pH [23]. This same strategy of using pH-modifiers has been proposed as a release modulating mechanism in solid dispersions [24] and other immediate-release dosage forms [25]. Solid surface pH data was not obtained in these dissolution experiments and bulk pH values of the media during dissolution experiments were available only in GISstomach at 13 min with Protocol 2. At that time, the non-BE formulation containing calcium phosphate presented a pH of 3.5, 1 unit higher than the pH of the reference and BE formulation that was approximately 2.5. Calcium phosphate can increase the pH at the solid surface of the drug-excipients particles, then increasing DKT solubility, and consequently decreasing the degree of supersaturation, which will, subsequently, prevent or reduce the precipitation gradient [25]. Besides calcium phosphate, FaSSIF-v1 surfactants seem to play a major role in the supersaturation/free acid precipitation balance as it has been reported for other ionizable compounds [26][27][28]. The presence of NaCl mainly affected the disintegration and dissolution process of the non-BE formulation resulting in an enhanced dissolution rate in the presence of NaCl, which was not observed for the BE-formulation and the reference drug product. While performing these dissolution experiments, remarkable differences in disintegration behavior could also be observed between the non-BE formulations and the other two formulations.
FaSSIF-v1
The faster dissolution of DKT from the non-BE formulation compared to the other formulations under Protocol 2 is most likely related to the high content of calcium phosphate in the tablet core of the non-BE formulation, which is not present in the other formulations. This high level of calcium phosphate in the tablet core of the non-BE formulation can increase the pH at the solid surface accelerating the dissolution of DKT and also facilitating tablet disintegration. Modulation of microenvironmental pH has been shown as an effective strategy to modulate the dissolution rate of GDC-0810, a weak acid of an oral anticancer drug, by using sodium bicarbonate to change solid surface pH [23]. This same strategy of using pH-modifiers has been proposed as a release modulating mechanism in solid dispersions [24] and other immediate-release dosage forms [25]. Solid surface pH data was not obtained in these dissolution experiments and bulk pH values of the media during dissolution experiments were available only in GIS stomach at 13 min with Protocol 2. At that time, the non-BE formulation containing calcium phosphate presented a pH of 3.5, 1 unit higher than the pH of the reference and BE formulation that was approximately 2.5. Calcium phosphate can increase the pH at the solid surface of the drug-excipients particles, then increasing DKT solubility, and consequently decreasing the degree of supersaturation, which will, subsequently, prevent or reduce the precipitation gradient [25]. Besides calcium phosphate, FaSSIF-v1 surfactants seem to play a major role in the supersaturation/free acid precipitation balance as it has been reported for other ionizable compounds [26][27][28].
In Vitro-In Vivo Correlations (IVIVC) for the Different Drug Products
When fractions absorbed of the three formulations were plotted against the fractions dissolved in jejunal chamber when Protocol 2 was applied, a single relationship was obtained, indicating dissolution was the limiting factor for DKT systemic input (Figure 7).
In Vitro-In Vivo Correlations (IVIVC) for the Different Drug Products
When fractions absorbed of the three formulations were plotted against the fractions dissolved in jejunal chamber when Protocol 2 was applied, a single relationship was obtained, indicating dissolution was the limiting factor for DKT systemic input (Figure 7). Nevertheless, the obtained relationship is not linear but curved due to a time-scale shift from in vivo to in vitro. In vivo dissolution and, consequently, absorption is faster than what was simulated in vitro. A time-scaling approach was not considered to be necessary as the time shift was less than 30 min and the non-linear equation presented a good predictive performance. The reason for the slight time shift could be the fact that jejunal dissolved amounts were used while in vivo dissolution/ absorption from duodenum can play a relevant/significant role. The internal validation of the obtained IVIVC was done by estimating fractions absorbed from the experimental dissolved ones using the obtained non-linear equation and then back-transforming fractions absorbed in plasma levels. Relative prediction errors of plasma Cmax and AUC were lower than 10% for all the formulations (Table 5). Nevertheless, the obtained relationship is not linear but curved due to a time-scale shift from in vivo to in vitro. In vivo dissolution and, consequently, absorption is faster than what was simulated in vitro. A time-scaling approach was not considered to be necessary as the time shift was less than 30 min and the non-linear equation presented a good predictive performance. The reason for the slight time shift could be the fact that jejunal dissolved amounts were used while in vivo dissolution/ absorption from duodenum can play a relevant/significant role. The internal validation of the obtained IVIVC was done by estimating fractions absorbed from the experimental dissolved ones using the obtained non-linear equation and then back-transforming fractions absorbed in plasma levels. Relative prediction errors of plasma C max and AUC were lower than 10% for all the formulations (Table 5).
Differences between Drug Salt to Free Acid Conversions for BE and non-BE Formulations
Drug exposure levels are influenced by the kinetics of salt dissolution and drug precipitation as well as the evolution of drug phases. Microscopic examination of salt dissolution in pH 2 identified two main pathways depending on the formulation excipients: (1) salt dissolved without conversion in the presence of calcium phosphate as one of the excipients (non-BE formulation), (2) salt dissolution formed a nano-phase that grew to spherical and island morphologies that converted to free acid crystals ( Figure 8). The time course of the second pathway was dependent on the salt/excipient concentration and the presence of free acid crystals in the salt phase.
Differences between Drug Salt to Free Acid Conversions for BE and non-BE Formulations
Drug exposure levels are influenced by the kinetics of salt dissolution and drug precipitation as well as the evolution of drug phases. Microscopic examination of salt dissolution in pH 2 identified two main pathways depending on the formulation excipients: (1) salt dissolved without conversion in the presence of calcium phosphate as one of the excipients (non-BE formulation), (2) salt dissolution formed a nano-phase that grew to spherical and island morphologies that converted to free acid crystals ( Figure 8). The time course of the second pathway was dependent on the salt/excipient concentration and the presence of free acid crystals in the salt phase. Shown in Figure 8 is the precipitated phase that appears as non-coalescing drops suspended in the dissolution media. This phase surrounds the fast dissolving salt particles, within seconds. Conversion of this fine precipitate to drug crystals was observed after 2 min or longer (up to 1 h) depending on initial salt concentration, formulation excipients, and presence of drug impurity in salt.
The massive phase separation appears initially hazy as its size is in the submicron range and below the level of detection of the microscope. This phenomenon has been referred to as spinodal, oiling out, or liquid-liquid phase separation (LLPS), consistent with that observed for other weakly basic drugs under high supersaturations, such as ritonavir [29][30][31][32][33]. The supersaturations with respect to drug, generated at the surface of the dissolving DKT salt particles are very high (>500 at pH 2) based on the solubilities of the salt and drug forms shown in Figure 2. While the interfacial pH was not evaluated, the surface of the dissolving salt is saturated with respect to the salt and generates much higher supersaturations than those in the bulk dissolution media. In fact, the appearance of LLPS occurred even when the bulk solution was below the drug solubility (σ = 0.3) (Figure 2). Table 6 summarizes DKT transformations to drug phases during dissolution. Observed dissolution of all phases initially or after the appearance of LLPS and drug crystals is consistent with undersaturated drug conditions in the bulk solution at dose concentration (CL). Non-BE formulation excipients in the media at higher concentrations (CH) increased pH to 5.3, and no precipitation was observed as the solution concentration is below salt and drug solubility. This is most probably because of the neutralization of HCl by the large level of calcium phosphate present. REF and BE formulation excipients exhibited different conversion behavior at CH; LLPS formed and crystallized after 10-30 min. The presence of drug crystals as impurity (less than or equal to 3%) in the salt phase Figure 8 is the precipitated phase that appears as non-coalescing drops suspended in the dissolution media. This phase surrounds the fast dissolving salt particles, within seconds. Conversion of this fine precipitate to drug crystals was observed after 2 min or longer (up to 1 h) depending on initial salt concentration, formulation excipients, and presence of drug impurity in salt.
Shown in
The massive phase separation appears initially hazy as its size is in the submicron range and below the level of detection of the microscope. This phenomenon has been referred to as spinodal, oiling out, or liquid-liquid phase separation (LLPS), consistent with that observed for other weakly basic drugs under high supersaturations, such as ritonavir [29][30][31][32][33]. The supersaturations with respect to drug, generated at the surface of the dissolving DKT salt particles are very high (>500 at pH 2) based on the solubilities of the salt and drug forms shown in Figure 2. While the interfacial pH was not evaluated, the surface of the dissolving salt is saturated with respect to the salt and generates much higher supersaturations than those in the bulk dissolution media. In fact, the appearance of LLPS occurred even when the bulk solution was below the drug solubility (σ = 0.3) (Figure 2). Table 6 summarizes DKT transformations to drug phases during dissolution. Observed dissolution of all phases initially or after the appearance of LLPS and drug crystals is consistent with undersaturated drug conditions in the bulk solution at dose concentration (C L ). Non-BE formulation excipients in the media at higher concentrations (C H ) increased pH to 5.3, and no precipitation was observed as the solution concentration is below salt and drug solubility. This is most probably because of the neutralization of HCl by the large level of calcium phosphate present. REF and BE formulation excipients exhibited different conversion behavior at C H ; LLPS formed and crystallized after 10-30 min. The presence of drug crystals as impurity (less than or equal to 3%) in the salt phase led to the faster conversion of LLPS to less soluble drug crystals, i.e., faster drug crystallization and LLPS dissolution. Faster conversion rates result in lower drug exposure levels.
It is important to consider that concentration levels varied for both salt and excipients, as the excipient concentrations were varied by diluting the dissolved tablet prior to adding the salt. Therefore, the different behavior of the high concentration of excipients with the non-BE shows the key role of alkalinizing excipients on stabilizing the salt, as the pH approaches the pH max .
Furthermore, the dilution of the gastric fluid by water in the GIS stomach of the GIS setup explains the C H results better matching the trend of the GIS data than the C L results. This is because the lower HCl concentrations caused by dilution were not sufficient to eliminate the pH differences caused by the presence of calcium phosphate in the non-BE formulation. This gave rise to an end effect similar to the high calcium phosphate levels under C H conditions being able to effectively neutralize the 0.01 M HCl. This is supported by the aforementioned observation of higher pH value in the GIS stomach for the non-BE formulation compared to the reference one, which is more in line with the C H than with the C L results. a within seconds; b C L , low concentration, dose in 300 mL solution (total drug concentration = 4.3 × 10 −4 M); c C H , high concentration = 18 × C L (total drug concentration = 7.7 × 10 −3 M); d DK solid, represents less than or equal to 3% free acid drug as impurity in the salt phase; and NA, not applicable as C H at this final pH is above free acid solubility.
Conclusions and Future Directions
Differences in dissolution behavior between the BE and non-BE DKT products are a result of drug salt to free acid phase conversion rates and mechanisms. In this case, the presence of an alkalinizing excipient in a tablet formulation of a salt of a weakly acidic drug suppresses salt disproportionation as pH approaches pH max , leading to a higher extent of drug dissolved and failing BE requirements. This is a case where salt disproportionation appears to modulate the behavior of a highly soluble salt in a favorable way by the formation of a transient phase prior to crystallization of the less soluble free acid. Rates of formation of less-soluble drug phases, LLPS, and crystal forms during DKT salt dissolution are dependent on the excipients, dissolution pH, and presence of DK free acid as an impurity in the salt. Excipients that increase pH (calcium phosphate) decreased free acid precipitation and enhanced dissolved levels of drug in the non-BE formulation. The BE product was associated with a faster conversion to KT crystals, whereas non-BE product experienced less drug precipitation under the same condition. Generic and non-generic DKT formulations were discriminated in vitro in the GIS device by adding NaCl to SGF and using FaSSIF-v1 media in the duodenum compartment. However, the relevant GI variables for the development of "In Vivo Product Predictive Dissolution Methods" need to be adapted to each compound. The selection of particular dissolution conditions as media and secretion fluids composition for the GIS device will depend on (i) the BCS profile of the drug, (ii) its ionization characteristics, and (iii) its formulation characteristics (e.g., presence of calcium monohydrogenphosphate). The ionic strength impact as well as the surfactants effects on the supersaturation/precipitation balance needs to be further investigated. Acknowledgments: The pharmaceutical companies that allowed the anonymous publication of the data included in these generic pharmaceutical developments are acknowledged.
Conflicts of Interest:
The authors declare no conflict of interest. Yasuhiro Tsume and Deanna Mudie are employees of Merck and co. and Lonza Pharma and Biotech, respectively. The company had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, and in the decision to publish the results. | 11,097.8 | 2019-03-01T00:00:00.000 | [
"Medicine",
"Chemistry"
] |
Verification of the stability lobes of Inconel 718 milling by recurrence plot applications and composite multiscale entropy analysis
Correctness verification of the stability lobe diagrams of milling process determined by commercial software CutPro 9 is the aim of this work. The analysis is performed for nickel superalloy Inconel 718 which is widely used in aviation industry. A methodology of stability analysis which bases on advanced nonlinear methods such as recurrence plot, recurrence quantifications analysis and composite multiscale entropy analysis are applied to the experimental data. Additionally, a new criterion for the determination of the unstable areas is proposed.
Introduction
Recently nickel and titanium alloys have become very good for the construction of aircrafts parts, and medical and chemical processing industries. Generally, it is known that nickel-based superalloys are one of the most difficult materials to machine, mainly due to fast tool wear. High-speed machining is economically well founded where machine centers can operate at spindle speeds as high as half a million revolutions per minute [1,2]. Therefore, vibrations occurring during machining can generate a serious problem for engineers. Undesired relative vibrations between the tool and the workpiece may deteriorate the quality of machined surfaces or even damage the machine tool and scrap the workpiece. Then cutting forces which depend on the tool geometry, material properties, feed rate and cutting speed can have large amplitude. When a process is unstable, the amplitude of vibrations may grow exponentially. These unstable vibrations of the tool, known as chatter, can create large cutting forces. This is an undesired phenomenon, because the surface of the workpiece becomes nonsmooth as a result of significant vibrations of the cutter. Moreover, the cutting tool wears out rapidly when chatter exists.
In order to use a full capacity of a new machine and also to achieve potentially high material removal rate together with the desired surface quality, optimum machining parameters are necessary. One of the parameters which improve the efficiency of the cutting process is the cutting depth and rotational speed. Usually, the choice of cutting depth and rotational speed is based on a Stability Lobes Diagram (SLD). Stability diagrams can be applied in a high-speed machining processes to optimize the maximum depth of cut at the highest available spindle speed. Using high-speed machining, an increase of material removal rates is achieved through a combination of large axial depths of cut and high spindle speeds. When the cutting depth exceeds the critical value chatter vibrations can arise at some spindle speed, whereas, if the cut depth is below the critical value, the cutting is stable regardless of the spindle speed. In many practical cases, the choices of the optimal speed and of the depth of cut are difficult because of the limit set by vibrations that arise during the material removal process [3] and experience is needed in modal analysis of a tool-spindle system.
Many papers show the modeling of the milling process on the basis of which the stability diagrams are determined (for example [4][5][6][7]). Then SLD are obtained by simulation in the time domain equation of motion. The alternative solution is to calculate the stability lobes directly in the frequency domain or to resolve the delay-differential equations (DDE [8]). However, only a few papers present complete experimental verification of these stability lobes [9]. This work shows the experimental results of milling of Inconel 718 with continuously growing depth of cut for a fixed spindle speed. This way of the machining should demonstrate pass from stable milling to unstable. Next, the verification of process stability based on two new nonlinear methods: recurrence plot, recurrence quantifications [10,11] and entropy is shown [12]. Stability of the milling process is practically hardly distinguished looking only at the force time histories and its magnitude. Therefore, this analysis is extended to more advanced tools. Additionally, the recurrence plot quantifications to process stability classification is proposed.
Description of the experimental setup
The measurements are conducted on the laboratory equipment, presented schematically in fig. 1, which is composed of the numerical controlled milling machine FV580A, the piezoelectric dynamometer Kistler 9257B, the charge amplifier type 5017B, the "sample and hold" component (SC2040) and the analog-digital converter NI 6071E. Additionally, the LMS module controlled by TestLab software with the two piezoelectric accelerometers PCB 352B10 in order to measure vibrations acceleration in x and y directions and modal hammer PCB model 086C03 is used to perform modal analysis.
At this stage, an instrumented modal hammer is engaged to excite the tool at its free end (i.e., the tool point) and the resulting vibrations are measured using a low mass accelerometer mounted at the tool point.
In the second stage of the experiment the cutting forces (F x , F y and F z ) generated on the workpiece during the machining were measured by the dynamometer mounted on the milling machine. Next, the force signals are transmitted to charge amplifier next to the module sample&hold and finally to the analog-digital converter, which is connected to the computer. The sampling rate of data reordered during the test equals 5 kHz. The material used in the experiment is Inconel 718. In the tests the solid carbide end mill F4BT1200AWX45R100 (Kennametal) with a diameter of 12 mm and 4 cutting edges is used during down milling. The radial depth of cut is two thirds of the end mill diameter (8 mm).
Frequency response function
A modal analysis is the process of determining the dynamic characteristics of a system in forms of natural frequencies, damping factors and mode shapes as frequency response function results. Then the modal parameters can used to formulate a mathematical model. This technique describes the relationship between the system response at one location and excitation at the same or another location as a function of the excitation frequency. This relationship, which is often a complex mathematical function, is known as frequency response function (FRF). The FRF consists of two parts: -the real part of the FRF versus frequency (the black line in fig. 2), -the imaginary part of the FRF versus frequency (the grey line in fig. 2). The FRF is obtained by impact tests, where the instrumented modal hammer is used to excite the tool and the resulting vibrations are measured by the low-mass accelerometers mounted at the tool point. Using the LMS TestLab software, the structural dynamic parameters of a tool spindle system from FRF are calculated.
The frequency response function for the end mill mounted in the holder is determined in the x-direction ( fig. 2a) and y-direction ( fig. 2b). The natural frequency of the spindle system given by FRF is about 1 kHz.
Stability lobe diagram
The damping and stiffness can be calculated from FRF with the help of the commercial software CutPro9 in order to generate the corresponding stability lobe diagram ( fig. 3a). For comparison the SLD was also obtained by the package "machinist online" via the Internet [13], (fig. 3b). This application enables the user to produce stability lobe diagrams of the tool and holder system. Comparing the two diagrams we observe significant differences in the critical depth of cut (a p ) at which chatter exists. In experimental modal analysis the critical axial depth of cut equals 0.26 mm, in the package "machinist online" 0.1 mm. This is because, the modal analysis is more precise than the package "machinist online" which is based only on the geometry of a tool holder, milling parameters and the material data.
Stable cuts should occur in the region below the stability boundary (below the black line, fig. 3a), while unstable cuts should occur above the stability boundary. As shown, it is possible to increase the axial depth of cut without chatter by choosing the proper spindle speed which omits unstable lobes. This simple method of parameter selection can improve productivity. Our aim is to check the correctness of the stability lobe diagram by analyzing the cutting forces with the help of advanced nonlinear methods as the recurrence plot and the recurrence quantification analysis.
Milling with increasing depth of cut
Analyzing SLD result ( fig. 3a), the cutting depth below 0.26 mm should give stable cutting all the times regardless of the spindle speed. Therefore the experiment is performed, for continuously increasing depth of cut (from 0.05 to 0.505 mm) for fixed spindle speed n = 1000 rpm and is presented in fig. 4. The time series of the three components of the cutting forces F x , F y and F z are presented.
The forces F x (force in the feed direction, fig. 4a) and F y ( fig. 4b) have similar trends, but the values of the force F x are smaller in comparison to F y .
Force F z (axial end mill direction) is not analyzed further due to the fact that its value is almost constant in all the analyzed ranges of depth of cut. That is caused by strong tool stiffness in the z-direction compared to that in x, and y directions. Therefore, for further research only the force F x or F y is considered. To investigate the cutting force signal and to find symptoms of the stability loss, the recurrence plot technique (RP) is applied for representative F x force.
Recurrence plot analysis (RP)
The RP approach provides a qualitative interpretation of the hidden patterns of the dynamical systems, based on phase space reconstruction which begins with a time delay embedding of the data. This technique was originally introduced by Eckmann [11]. A recurrence plot is a visualization of a diagram (square matrix), in which the points correspond to those times at which a state of a dynamical system recurs. Technically, the RP reveals all the times (both axes i and j are time axes) when the phase space trajectory of the dynamical system visits roughly the same area in the phase space. A recurrence plot can be described by computing the matrix x i is the delay vector, calculated from The parameters m, d and ε denote an embedding dimension, a time delay and a threshold, respectively. The detailed description of recurrence plot and methods for selecting embedding parameters m, d and ε can be found in the papers [14,15]. In fig. 5a the recurrence plot of the force F x for the period of time 16-28 s, where the system passes through a stability boundary, at spindle speed n = 1000 rpm is presented. While fig. 5b presents RP diagram obtained for spindle speed n = 2000 rpm. The region of existence of stable milling has much more points (shaded area). The dashed line denotes the boundary between stable and unstable milling. In our case the stability boundary corresponds to depth of cut equal to 0.41 mm. The experimental critical value of stable cutting obtained by the RP method is greater than the value obtained by modal analysis and by the software CutPro (0.3 mm). In fig. 6a, zoom of the recurrence plot for stable milling in the time range from 16 to 18 s and the unstable one in the time range from 28 to 30 s (10000 data points) is presented. The recurrence diagram for unstable milling (range time from 28 to 30 s) is much more complex and consists of separate points, different length lines and empty spaces (fig. 6b).
The recurrence plot method can be extended to the recurrence quantifications analysis (RQA) based on points and lines in the RP.
Recurrence quantifications analysis (RQA)
The recurrence quantification analysis (RQA) is a method of nonlinear data analysis which quantifies the number and duration of recurrences of a dynamical system presented by its state space trajectory. Zbilut and Webber [16] defined several measures of complexity to quantify the small scale structures of RP. These measures are based on the recurrence points density and the diagonal and the vertical line structures of the recurrence plot. The most important RQA measures are Marwan et al. [15]: -Recurrence rate (RR), which describes the density of recurrence points. The physical meaning of RR is the probability that the system will recur -Laminarity (LAM), characterizes of recurrence points forming vertical lines. Vertical lines are typical for intermittency -Trapping time (TT), measures the mean time during which the system is trapped in one state or changes only very slowly -The length of the longest diagonal and vertical lines can be estimated In the above frame RQA provides us with the probability P (l) or P (v) of line distribution according to their lengths in diagonal (l) or vertical (ν) lines, N is number of points on phase space trajectory. The inverse of L max (Divergence DIV) is related to the positive Lyapunov exponents Marwan et al. [15]. More detailed description of RP and RQA techniques can be found in [10,17]. All quantifications have been tested in this study and next six of them were selected to better identify milling stability. The first is the RR which measures the ratio between recurrence points to all possible points on the RP diagram. In the unstable region the value of the RR tends to zero ( fig. 7a), while the DET/RR ratio is three times higher ( fig. 7b). The significantly lower values of L max (fig. 7c), LAM (fig. 7e), and V max (fig. 7f) show change in dynamics and can be used as stability criterion. However, DIV ( fig. 7d) shows a similar behavior as the DET/RR ratio. According to RQA the unstable region begins after 26 s of the milling test, that corresponds to a p = 0.42 mm.
Multiscale entropy analysis
A composite multiscale entropy (CMSE) method is another approach to estimate the behavior of the system [18]. In order to perform this analysis, the original time series undergo a coarse-grained procedure, according the formula The coarse-grained time series {y} is averaged on the origin data points within non-overlapping windows by increasing the scale factor τ . For each τ , the composite multiscale entropy calculation is based on the averaged time series {y} and reads as where μ = 2 is the pattern length and r is the similarity criterion and 10% of the standard deviation of the original time series {x} is adopted. In the above formula the SampEn is the logarithm of the conditional probability that two sequences with a tolerance r are similar to each other at the next points of the coarse-grained series. The calculated CMSE for cutting force F x at spindle speed n = 1000 rpm is presented in fig. 8. The measured signal was divided on four windows for the sake of the cut depth a p and the analysis was developed in four separated CMSE results. Figure 8 shows the signal behavior of the force acting along the feed direction x. It appears that the most disordered signal behavior seems to be at the lowest depth of cut a p ≈ (0.05-0.16) mm (red line) up to τ = 10. For larger scale factor τ , the tendency switches and the signal behaves the most unforeseen at the highest depth of cut a p ≈ (0.39-0.5) mm. The other time series windows corresponding to a p ≈ (0.16-0.39) mm seem to be more stable. Interesting phenomena can be observed in fig. 8, the entropy increases initially at scale factors τ up to τ ≈ 4 and next falls down. It can result from the irregularities found for the initially coarse-grained time series which transiently increases the entropy value.
Final conclusions and remarks
This study focused on the verification of the milling process stability in the case of nickel-chromium superalloy Inconel 718. The stability lobe diagram is determined by the commercial software CutPro (with modal analysis) and the "machinist online" package. Analysis of correctness of the SLD, done by the RP and the RQA methods showed, that the save and real depth of cut a p for n = 1000 rpm is 0.41 mm (by CutPro9 0.3 mm, by package "machinist online" 0.1 mm).
Among all recurrence quantifications, six have been chosen to identify the unstable process and detect the entry in unstable cutting. The possibility of short time series analysis is a very important advantage of RP and RQA methods. These methods can easily be adopted in the future for monitoring the cutting processes. The independent verification of both the RP and the RQA analysis has been developed by composite multiscale entropy analysis, which confirmed the range of cutting depth at which signal reveals an ordered behavior. In general it is clearly noticed that measured signals of milling forces are the most disordered during the process at relatively small depth of cuts a p .
In the near future, a new experiment is being planned for different materials including iron alloys and composites reinforced by glass or carbon fibers. In the future work the authors are going to apply RP and RQA as well as CMSE for results of numerical simulation and to control or monitor the milling process stability. Finally after positive numerical tests, the implementation to a real milling machine is planned.
Financial support of Structural Funds in the Operational Programme -Innovative Economy (IE OP) financed from the European Regional Development Fund -Project No POIG.0101.02-00-015/08 is gratefully acknowledged.
Open Access
This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. | 4,058.6 | 2016-01-15T00:00:00.000 | [
"Engineering",
"Materials Science",
"Physics"
] |
An integrated network visualization framework towards metabolic engineering applications
Background Over the last years, several methods for the phenotype simulation of microorganisms, under specified genetic and environmental conditions have been proposed, in the context of Metabolic Engineering (ME). These methods provided insight on the functioning of microbial metabolism and played a key role in the design of genetic modifications that can lead to strains of industrial interest. On the other hand, in the context of Systems Biology research, biological network visualization has reinforced its role as a core tool in understanding biological processes. However, it has been scarcely used to foster ME related methods, in spite of the acknowledged potential. Results In this work, an open-source software that aims to fill the gap between ME and metabolic network visualization is proposed, in the form of a plugin to the OptFlux ME platform. The framework is based on an abstract layer, where the network is represented as a bipartite graph containing minimal information about the underlying entities and their desired relative placement. The framework provides input/output support for networks specified in standard formats, such as XGMML, SBGN or SBML, providing a connection to genome-scale metabolic models. An user-interface makes it possible to edit, manipulate and query nodes in the network, providing tools to visualize diverse effects, including visual filters and aspect changing (e.g. colors, shapes and sizes). These tools are particularly interesting for ME, since they allow overlaying phenotype simulation results or elementary flux modes over the networks. Conclusions The framework and its source code are freely available, together with documentation and other resources, being illustrated with well documented case studies. Electronic supplementary material The online version of this article (doi:10.1186/s12859-014-0420-0) contains supplementary material, which is available to authorized users.
Background
Within the field of Systems Biology, the analysis of different types of biological networks is an important task in understanding the underlying biological processes. For this endeavour, a mathematical framework is provided by graph theory, which has allowed to verify that a multitude of organisms share relevant properties when analysing the topology of their networks [1]. Additionally, being able to capture the network in a visual form can provide useful insights. While, in the pregenomic era, the analysis and visualization of these networks were approached as independent computational problems, it is desirable that these two levels are well integrated [2].
Together with their regulatory [3] and signalling counterparts [4,5], metabolic networks represent a vastly studied class of biological networks. These are typically composed of two entities: metabolites and reactions. Metabolites can be converted by the cell into building blocks or decomposed to generate energy or other compounds.
Metabolic Engineering (ME) aims to rationally pinpoint genetic changes in selected host microbes that can optimize the production of compounds of industrial interest and, thus, it heavily makes use of computational analyses of metabolic networks. However, in many cases, the static view of metabolic systems provided by these networks is insufficient, and there is the need to reconstruct genome scale metabolic models (GSMMs) [6] with simulation capabilities, which are increasingly being created given the availability of genome sequences, annotation tools and omics data.
Given the lack of kinetic information to provide for large-scale dynamical models, stoichiometric models are the most common. The information about the metabolites and reactions from the metabolic network, together with stoichiometry, are the starting points for their reconstruction [7], being mathematically represented by a set of equations that describe the chemical transformations [8].
GSMMs are often used to simulate the metabolism of the cell using constraint-based approaches, where typically a pseudo steady state is assumed [9]. Using these models and specifying environmental conditions (e.g. media), it is possible to perform the calculation of flux distributions. The most used method is Flux Balance Analysis (FBA), where a flux (e.g. a biomass flux) to maximize (or minimize) is chosen to obtain an optimal flux distribution [10].
In this work, an integration of network visualization with ME methods is proposed, which demands that many issues related with the visualization of metabolic networks must be addressed. While the scalability of the networks is successfully addressed by generic visualization packages, usually the generic layouts available produce unsatisfactory results for metabolic networks. This is mostly due to the fact that the majority of layout algorithms do not take into consideration any biological knowledge, such as cell localization or molecular functions. Another problem comes with the filtering of the networks. It is necessary to have an easy way to query the network and visually filtering it to specific sets of nodes of interest in a given context. To address these problems, a visualization tool should offer some basic features: layout algorithms, a graphical notation, integration with analysis tools by providing information about the network and an user interface to allow interaction [7].
There are a myriad of software tools for ME, able to use metabolic models to perform phenotype simulations and implement strain optimization methods, being some of the best known examples: OptFlux [11], the COBRA Toolbox [12,13], CellNetAnalyzer [14] and FASIMU [15]. There are also several tools that perform visualization of metabolic (and other types) of biological networks. There are not, however, many examples of successful integration of these two distinct types of applications. CellDesigner [16,17], for instance, is one of the most popular tools for editing and visualizing biochemical networks, but it lacks specific methods for constraintbased approaches and does not deal well with large-scale GSMMs.
Cytoscape [18] became a standard tool for the integrated analysis and visualization of biological networks. One of its many plugins, FluxViz [19], provides features for the visualization of flux distributions in networks. FluxViz was primarily developed for FASIMU, a software for flux-balance computation, and it uses the generated result files as input for visualization.
Another tool worth mentioning is VANTED [20], an application for the visualization and analysis of networks with related experimental data. The usefulness of this tool for ME purposes is provided by two of the available plugins, FluxMap, that allows the visualization of measured or simulated fluxes in the network, and FBASim-Vis for constraint-based analysis of metabolic models, with a special focus on the dynamics and visual exploration of metabolic flux data resulting from model analysis. It supports wild type and knock-out FBA simulations.
Both the COBRA toolbox and CellNetAnalyzer are based on the commercial software Matlab, and therefore are not freely available for the community. COBRA already includes some network visualization tools in the original release, but the generated maps are static built-in maps, that can be exported in a single format (SVG), optionally including overlaps with simulation results. An extension, Paint4Net [21], that allows some editing features (mainly node dragging), has been recently proposed. However, editing is limited to the models and layouts from the related BiGG database, editing options are quite limited and layouts can only be exported as images and not reused. On the other hand, CellNetAnalyzer only enables the visualization of small or medium scale biological networks. More recently, MetDraw [22], that is capable of generating layouts for large metabolic models, was published providing the means to visualize "omics" data overlaid in the network. However, it does not support editing layouts, requiring the use of an external tool for that aim, and is not integrated with any ME tool.
In the majority of these tools, biological entities/ interactions are represented as shapes/ lines, with different colours/formats standing for their classes. Although this seems a reasonable solution, the complexity of the integrated information and the range of possible interactions motivated the development of standard notations. The most successful was the Systems Biology Graphical Notation (SBGN) [23], where networks are modelled in a state-transition way. Another successful standard format is the Systems Biology Markup Language (SBML) [24], which aims at storing and exchanging biological models. Combined with the development of the SMBL Layout package, this makes up a very promising effort in network visualization as well. The full support to these standards is not guaranteed by most of the tools and this would be an advantage in the interoperability of these tools with other relevant software.
Overall, and in spite of the aforementioned tools, network visualization has been traditionally apart from ME-related methods. Some notable exceptions were already mentioned, but an effective framework, which would facilitate the agile integration of simulation results with dynamic layouts of metabolic networks, is, in the authors' point of view, still lacking. Indeed, this work focuses on the development of a visualization framework based on a well-defined abstract representation of metabolic networks, which will provide researchers with visualization tools, to be used in the context of ME projects. This has been developed as a plugin for the OptFlux platform, allowing its integration with other tools, building a useful tool to assist ME researchers. To highlight the main features of the tool, described in detail next, as compared to the other tools, Table 1 provides a comparative analysis of their features.
Implementation
Metabolism can be represented as a series of transformations of metabolites, being easy to represent as a graph. There are two main entities that will be addressed by the visualization platform: reactions and metabolites. A reaction is a chemical transformation that uses a set of metabolites as reactants and produces another set of metabolites to be used by other reactions.
For the representation, a reaction-compound network ( Figure 1A) was chosen, represented by a bipartite graph, which can be divided into two distinct sets of vertices (nodes), such that every element of a set only connects with vertices of the other set. This provides a descriptive and visually attractive representation.
A metabolic layout can then be defined by a core list of reactions, each represented by a reaction node, a set of reactants, a set of products and a set of information nodes. The reactants and products are represented as sets of metabolite nodes, representing the compounds that are part of that specific chemical reaction. The reaction node, and the respective metabolite nodes, will be connected by edges, represented as lines with a shape defined according to the reversibility of the reaction. If the reaction is irreversible, the edges that connect reactions to the metabolites will have arrows only pointing to the products ( Figure 1B), while in reversible reactions they will have arrow shapes pointing to both metabolite ends ( Figure 1C). The metabolite nodes can have two distinct types: regular and currency metabolites. The ones in the latter group will be differentiated since they typically represent highly connected hubs (e.g. water, co-factors) with reduced interest in most analyses.
Other features
Layout generation A metabolic layout is based on the reactions contained in the metabolic model. Since one of the goals of this work is to provide a link between the visualization and the metabolic model, a strategy must be defined to map the entities of the visualization framework with the entities of the model. It is also desirable that a layout can represent just a part of the metabolism of an organism (allowing different layouts for the same model), as well as the possibility to use the same layout on different models (e.g. different strains or model versions). Another important aspect is to make networks visually more understandable, replicating some of the nodes, a feature typically applied to currency metabolites. To comply with these features, each reaction and metabolite node will have a list of model identifiers that will provide the link between the model and the layout.
The two main tasks of the software are to build these layouts from external sources (being able to export them as well), and to visually represent them. This leads to a twolayer architecture (Figure 2), where the first implements the capabilities to read and write metabolic layouts, while the other, the visualization layer, handles the visualization and edition of the metabolic layout. The main features of each are given below: Visualization layer: provides all the functionalities related with the visualization and edition of a layout, including automatic layouts, creation and edition of layouts, visual filters and operations to change the aspect of the network (colours, shapes, etc.). Input and output layer: implements several tools that provide network creation and exportation capabilities for a multitude of file formats. It has the objective of reading networks in specific file formats and building the metabolic layout, used by the visualization layer. At the same time, it also provides the possibility to export those layouts into some of those formats.
The strategy adopted in the development of the framework had the goal of creating a tool that can be used Figure 1 Network representation used in the visualization framework. A reaction-compound network; B adopted irreversible reaction representation; C adopted reversible reaction representation.
independently, but at the same time build it in a way such that the integration with an ME tool (OptFlux in this case) was facilitated. This brings to light the importance of the MVC (model-view-controller) design pattern, not only in the development of the framework, but also in the integration with OptFlux, that also follows this principle. Indeed, OptFlux is built on top of AIBench (http:// www.aibench.org/), a software development framework developed by researchers from the University of Vigo in Spain. Building applications over AIBench facilitates the creation of applications composed of units of work with high coherence that can easily be combined and reused, by incorporating three main object types: operations, datatypes and datatype views.
The basis of the implementation of the operations in OptFlux, is a core library implementing relevant ME methods and algorithms, including phenotype simulation methods (e.g. FBA, parsimonious FBA, MOMA and ROOM), strain optimization algorithms and with many other features. It also contains all data structures and methods used to represent metabolic models, and reading/ writing files in different formats. OptFlux's plugin-based architecture facilitates the development of additional features. The visualization plugin is such an example, which provides a direct connection between the metabolic model, simulation and optimization methods used in OptFlux, and the metabolic layout defined in the visualization core framework.
Visualization layer features
As stated previously, the visualization layer provides all the functionalities related with the visualization and editing of the metabolic layout. One of these features allows to change the default colours and shapes of the nodes. The graphical user interface (GUI) is composed of two major elements (Figure 3): the network view, where it is possible to edit the network and click/drag the nodes ( Figure 3A), and the side panel where filters, overlaps and node information are available ( Figure 3B). In this way, it is possible for the user to easily interact with the network, using all the features the interface has to offer.
This layer provides all the features that allow navigating and obtaining additional information about the network by clicking the nodes. Some of them are the basic features that are typical in any visualization framework, Figure 2 Visualization framework architecture overview. The Input/Output layer provides the reading and writing capabilities for several file formats. The visualization layer contains the layout representation, and provides the visualization capabilities. External sources can also provide filters and overlaps, being OptFlux one such example (through the visualization plugin).
such as highlighting, zooming and dragging, which allow navigating through the network. By clicking a node, the node information panel will display information on that node. It is possible for advanced users to implement an information panel and add it to the interface, to visualize information of specific interest.
One of the main features allows loading layouts from different sources. Some layouts may not specify the coordinates of (some of ) the nodes. The layout used by the visualizer, by default, is the Force Directed Layout (FDL) [27] with some modifications to support fixed nodes. This was coupled with the possibility to fix/unfix nodes, allowing the user to fix a node to the specific position it is in, or drag it to a desired position; unfixing a node will remove the position information of the node, making it susceptible to the FDL algorithm that can adjust its position according to its surroundings. It is also possible to unfix and fix nodes by type, allowing a user to fix/unfix all reaction or metabolite nodes at the same time.
Another crucial aspect is the ability to edit the metabolic layout. This feature, when combined with the import and export capabilities, provides users with the means to create and edit their layouts, being able to export them for later use.
As stated above, the same metabolite can be represented several times in a layout by different nodes. If a metabolite node is connected by several reactions the user can replicate it, resulting in a metabolite node for each reaction. Also, metabolite nodes representing the same compound can be merged. It is also possible to replicate a reaction node, resulting in two, or more, reactions connected to the same set of metabolite nodes. Merging two reactions is only possible if they are exactly the same, i.e. are connected to the same nodes and have the same reversibility. The type of a metabolite can be changed, e.g. to a currency metabolite.
Filtering and overlaying capabilities are also provided. It is possible to filter the network, by hiding parts of it, based in the node type (e.g. hide all currency metabolites) or by reaction identifier. To overlay information over the network, the visualizer allows altering its visual aspect, supporting the change of the direction, thickness and colours of the edges, while for nodes it is possible to change the colour and shape. This feature will allow, for instance, overlaying flux distributions in the metabolic layouts.
Input and output features
The input/output layer provides support for different input/output formats: CellDesigner SBML (CD-SBML): a graphical notation system proposed by Kitano [28], where layouts are stored using a specific extension of the Systems Biology Markup Language (SBML). (AF). For the purpose of this work, which focuses on metabolism, support was only developed for the PD language, based on Kitano's proposal used in CellDesigner's graphical representation, using bipartite graphs. COBRA Layouts: maps developed for the COBRA Toolbox. There are, currently, several maps on this format for many of the models hosted in the BiGG knowledgebase (http://bigg.ucsd.edu). These can be used on different models that have similar pathways, with a correct mapping of the identifiers between the layout and the BiGG model. Pathway generation: It is possible to generate a layout by using a list of reactions from a GSMM. This can be done following two strategies: choosing a list of reactions or, in the case where the model has pathway information, building layouts with the reactions from a set of pathways.
OptFlux plugin
The visualization plugin for OptFlux has the main goal of providing a connection between the GSMMs loaded into OptFlux, their phenotype simulation and optimization results, and the layouts from the visualization framework.
Through the plugin's operations, it is possible for the user to map the identifiers of the metabolic model with the identifiers of the reaction and metabolite nodes of the layout. There are two different mapping methods available: loading a two-column file with the explicit mapping or applying regular expressions to the identifiers in the model and/or the layout. Another available operation allows the importation of KGML layouts, which can be automatically downloaded from the KEGG site.
The third operation allows the creation of layouts from reactions of a metabolic model, using the pathway layout generation feature described above. The generation of this type of layouts can be made by selecting a pathway from the model or by selecting a list of reactions manually. It is also possible to select an existing layout as a basis for the new layout. This will allow creating new layouts or adding new reactions to existing ones.
Each model can have a list of layouts associated, being possible to navigate from one layout to another by clicking the elements of that list. If the user clicks a metabolite that is present in another layout from the list, the information panel will display access buttons for those layouts.
The most desired functionality of the connection between a ME and a visualization tool, is the ability to visualize phenotype simulation results (mainly flux distributions) overlaid in the network. This allows using the visualization tool to better understand the organism's metabolism and design changes that can improve it towards some defined aim.
To allow this operation, there is a conversion from a simulation result in OptFlux, to an overlap object that is used in the visualization. In OptFlux, simulation results have two major elements of interest for the visualization: flux distributions and genetic conditions. A flux distribution contains the flux values for each reaction. To represent it, a conversion of identifiers is needed. It can happen that two or more fluxes are mapped to the same reaction node, and the methodology chosen was to sum all those values (although alternative options can be easily implemented). In the end, all these flux values, now mapped by reaction node, are normalized and used to determine the thickness of the edges. Additionally, the labels of the reaction nodes are also changed, adding the numerical value of the flux after the reaction name.
The genetic conditions of a simulation are defined as all genetic changes made to the organism for that specific simulation. It contains all knock-outs (reaction deletions), and under/over expressed reactions. For the visual representation, some node shapes and colours were adopted to highlight these affected reactions. As seen in Figure 4A, a knocked-out reaction will be indicated by a red cross, with reaction edges also coloured red. An upward arrow will indicate an over-expressed reaction, where both the arrow and the edge are green ( Figure 4B). Finally, an underexpressed reaction is coloured orange and accompanied by a downward orange arrow ( Figure 4C).
Another type of overlap was also developed to visualize the comparison of two phenotype simulation results. The methodology followed was similar to the simulation overlaps. The genetic conditions are represented using the same symbols, but the colours of the edges follow a different strategy. Each simulation will have a colour by default, for instance, simulation 1 will have the colour red, and simulation 2 will have the colour green. Then, according to the flux for each reaction in each simulation, the colours will vary. If the amount of flux is larger in simulation 1, the colours will vary in a gradient that spans from red to black (where black means that there is no difference in fluxes), and if the flux value in simulation 2 is greater, the colours will vary from green to black. This will allow the user to identify where flux paths differ in the simulations (pure colours) and where both share fluxes (darker colours). At the same time, for reversible reactions, the fluxes of the compared simulations can take different directions. In this case, edges will have the colour of the simulation that follows the direction they are pointing, also giving the user an easy way to understand where the simulations differ. The thickness of the edges is calculated using the mean of the flux values in both simulations. On top of this, some filters are also generated, where it is possible to hide zero value fluxes in a simulation.
OptFlux also provides a plugin that calculates the set of Elementary Flux Modes (EFMs) of a model. EFMs are the set of all routes through the network that cannot be decomposed to simpler routes [30], while maintaining steady-state, so they provide a way to analyse the set of pathways in the metabolic network. This plugin provides an interface that allows filtering these results, including the selection of EFMs based on presence/absence of external metabolites or sorting by yield. It is possible to select sets of EFMs browsing these results, to visualize the EFMs in a column-wise table, and to obtain the flux values for each reaction within the EFM.
The visualization plugin can convert these flux distributions into an overlap, in a way similar to the one used for the phenotype simulation results. Considering that, in this case, the only information available is the set of flux distributions, only the thickness and labels of the edges are changed. A visual filter can be applied hiding the reactions with zero value fluxes, thus allowing the visualization of the reactions that are part of the EFM.
Regarding customization, the plug-in allows for the configuration of the style of the visualization through the Preferences option in the Help menu. This allows to personalize the layouts, defining parameters such as the colour and shape of the nodes or the font, size and colour of the labels. Perhaps more importantly, it is also possible to define the content of the labels of reaction and metabolite nodes, choosing which attributes to include from the ones available in the model. An example and more details of this process are given in the OptFlux documentation and in the case study description available in the Additional file 1.
Usage example: succinate production with E. coli
To best illustrate the main features of the proposed tool, two case studies will be used, focusing on succinic acid (this section) and glycine production (next section) with E. coli. The full description of the workflow and required materials for both case studies are provided as Additional file 1.
Succinic acid is an important compound to industry that has been produced mainly by chemical processes. Recently, there has been an effort to use microbial fermentation processes with anaerobic bacteria [31] and optimizing micro-organisms to over-produce succinic acid is one goal of interest for ME researchers. For this case study, the E. coli metabolic model iJR904 [32] was used. The model is available for download directly from OptFlux's internal repository, and it is composed of 1075 For visualization purposes, a COBRA layout was loaded for this particular model from the BiGG Database (http://bigg.ucsd.edu/). After some manual and automatic curation, using the tools offered by the visualization framework, a second version of the layout was created and exported in the XGMML format, fully mapped to the model. A set of knockouts was selected from optimization results obtained using OptFlux [33] (Table 2). This set was chosen because it is not composed of a large number of knockouts while, at the same time, it is somewhat complex.
In this case, the layouts used are of large dimensions so the full layout representation as an image is not practical. Figure 5 represents parts of the layout and important genetic modifications in the network.
By analysing the solution, and with the support from the visualization, it was possible to infer that NADPH balance was the key factor for the increase in succinate production. Figure 5A represents a part of the central metabolism of the E. coli iJR904 model. There, it is possible to visualize the deletion of Transketolase I (R_TKT1 - Figure 5C) that causes a decrease of flux in the pentose phosphate pathway leading to a decrease in NADH production. The inactivation of the pyridine nucleotide transhydrogenase (R_THD2 - Figure 5B) will also contribute to the shortage of NADH. The Serine hydroxyl methyltransferase reaction (R_GHMT2 - Figure 5E) is knocked-out to prevent the formation of NADPH in the glycine production pathway. Finally, to prevent the consumption of succinate, there is the inactivation of the succinate dehydrogenase, in the TCA cycle ( Figure 5D), which leads to the excretion of succinate.
Usage example: glycine production with E. coli The second case study was performed with the iAF1260 E. coli metabolic model [34]. This model is larger than the one used in the previous example. It is composed of 2389 reactions (304 drains and 2085 internal), 1668 metabolites (304 external and 1364 internal) and 1260 genes. The goal of the optimization for this example was the production of glycine, and the results obtained are described in Table 3, chosen following the same strategy used in the previous case.
For the visualization, several layouts were loaded containing most pathways of the metabolic model. Indeed, although it would be possible to load a layout for the entire model, this would not be practical to conduct the type of analyses shown below. In these cases, the user should seek to work with partial layouts with about 30 to 50 nodes (or up to 15 reactions) to improve the quality of the generated results.
After loading the pathway layouts, a simulation comparison of the wild type and knock-out mutant was performed. Visualizing the overlap of that comparison, it is possible to understand how both flux distributions differ by analysing the colours of the edges. The reference flux distribution is the wild type, represented by the green colour, while the knock out solution will be coloured red. As explained above, the more similar both flux distributions are, the darker the resulting colours will be, ranging from black to the colour of the simulation with the higher value. Figure 6 shows parts of the layout and the relevant genetic modifications that lead to the optimized results.
The inactivation of Isocitrate lyase (R_ICL) and Phosphoenolpyruvate carboxylase (R_PPC), both present in the central metabolism (partially represented by Figure 6A and B), lead to the necessity of another route for the production of oxaloacetate. It is possible to see that the TCA cycle has mostly a prevalence of green colour, which means that the reference simulation, the wild-type, has more flux in those reactions. The other route taken due to these knockouts can be seen in two different pathways, being clearly visible as a chain of red reactions, both in the Alternate Carbon Sources layout ( Figure 6C) and in the Nucelotide Metabolism ( Figure 6E and F) culminating in the production of glyoxylate, which can be converted to L-malate by malate synthase (R_MALS -Central Metabolism Figure 6A) and then transformed into oxaloacetate. The inactivation of the glycine cleavage complex (R_GLYCL - Figure 6D) and phosphoribosylglycinamide formyltransferase 2 (R_GART - Figure 6E) are necessary to the accumulation of glycine, being produced as a by-product of fprica synthesis. Both reactions can be used to recycle glycine, which means that both deletions are essential to the solution.
Conclusions
In this work, a metabolic network visualization framework was presented, which has the ability to load networks from a variety of formats and display them using a dynamic layout. It provides features for the straightforward creation and editing of these layouts, as well as exportation capabilities. On top of this, it is possible to overlay the network with visual changes, a functionality that allows, for instance, visualizing fluxes in a phenotype simulation, identifying the genetic conditions imposed in a simulation, addressing the comparison of two simulation results, analysing results from strain optimization methods or visualizing the set of elementary modes in a model.
The framework was integrated with OptFlux, a ME framework, by the development of a plugin. This allows ME researchers to use the visualization directly from within OptFlux, and use a series of operations that will allow loading and exporting layouts with a user-friendly interface.
This framework presents itself as a useful tool that can help researchers involved in ME projects to have a way of easily addressing the visualization of the metabolic networks they are studying. The ability to dynamically visualize phenotype simulations is an important asset. The combination of visualization with simulation and optimization processes will help researchers to achieve Phosphoribosyglycinamide formyltrasferase 2 R_GART 5-phospho-ribosyl-glycineamide + Formate + ATP <= > 5'-phosphoribosyl-N-formylglycineamide + ADP + Phosphate + H + knowledge about the structure and functioning of organisms of interest that was not available before.
While a number of features are planned, an interesting line of future work is the development of tools that allow importing other types of omics data (e.g. gene expression or metabolomics), providing its integrated visualization with GSMMs. The general-purpose nature of the core layer of the visualization framework allows the easy development of such tools, providing a good basis for the extension of the proposed software also in other directions.
Availability and requirements
The described plugin is included in the base distribution of OptFlux that can be downloaded and installed from the homepage given below. The site also includes documentation for the plugin in the form of a wiki.
Additional file
Additional file 1: Full description of the workflow followed in the case studies, including the full set of instructions to conduct them using OptFlux. All materials needed for the tutorial are available on the URL: http://darwin.di.uminho.pt/optflux/suppmaterial/visualization/ materials.zip. | 7,568.4 | 2014-12-01T00:00:00.000 | [
"Engineering",
"Computer Science",
"Environmental Science",
"Biology"
] |
Use of zinc phosphate cement as a luting agent for Denzir™ copings: an in vitro study
Background The clinical success rate with zinc phosphate cemented Procera crowns is high. The objective with this study was to determine whether CADCAM processed and zinc phosphate cemented Denzir copings would perform as well as zinc phosphate cemented Procera copings when tested in vitro in tension. Methods Twelve Procera copings and twenty-four Denzir copings were made. After the copings had been made, twelve of the Denzir copings were sandblasted on their internal surfaces. All copings were then cemented with zinc phosphate cement to carbon steel dies and transferred to water or artificial saliva. Two weeks after cementation, half of the samples were tested. The remaining samples were tested after one year in the storage medium. All tests were done in tension and evaluated with an ANOVA. Results Sandblasted and un-sandblasted Denzir copings performed as well as Procera copings. Storage in water or artificial saliva up to one year did not decrease the force needed to dislodge any of the coping groups. Three copings fractured during testing and one coping developed a crack during testing. The three complete fractures occurred in Procera copings, while the partly cracked coping was a Denzir coping. Conclusion No significant differences existed between the different material groups, and the retentive force increased rather than decreased with time. Fewer fractures occurred in Denzir copings, explained by the higher fracture toughness of the Denzir material. Based on good clinical results with zinc phosphate cemented Procera crowns, we foresee that zinc phosphate cement luted Denzir copings are likely to perform well clinically.
Background
CADCAM technologies have found increased use in dentistry during the past 15 years. Cerec, a system invented by Mörmann and Brandistini [1,2], was the first commercially available CADCAM system. Cerec was designed for making ceramic inlays and veneers, and these should be etched and bonded to the tooth with resin based luting agents [3,4]. Resin bonding was promoted because it im-proved retention and sealed gaps around Cerec restorations. Such gaps were often wider around the early Cerec restorations than they were around cast restorations. In addition, clinical experience evolving at that time suggested that the fracture rate of ceramic restorations decreased if they were resin bonded rather than cemented with traditional zinc phosphate or glass ionomer cements [5]. However, because of high equipment cost and a not yet optimised technology, the Cerec system did not capture a big market share. Instead, it was Procera, a system originally developed for industrial production of titanium crowns that become the CADCAM system of choice during the late 80 th and the early 90 th [6,7]. Procera did not become popular because of its titanium crowns but rather for its all-ceramic crowns [8]. These crowns consisted of Al 2 O 3 copings [8] with good fit and high strength on which dental ceramics were fired to produce strong and aesthetically appealing all-ceramic crowns. In contrast to Cerec, Procera did not rely on an intraoral camera to make an "electronic impression." Instead, Procera relied on traditional impressions and gypsum dies. The x, y, z-coordinates of the dies were recorded at a dental laboratory by use of an electronic stylus [9] and transferred electronically to the Procera laboratory where the Al 2 O 3 coping was made. As a result, very little extra investment cost was needed for the dentist. The lower cost probably explains why Procera rather than Cerec was the CADCAM that took off among dentists.
At the time the first ceramic Procera crowns were introduced, ceramic restorations were often cemented with zinc phosphate or glass ionomer cements, despite the fact that research had started to show the advantages with resin bonded ceramic restorations [10]. Resin bonding was achieved by first etching the ceramic surface with hydrofluoric acid and treating the ceramic surface with a silane [10]. However, acid etching did not work on the hydrofluoric acid resistant Al 2 O 3 copings. Because of the acid resistance of Al 2 O 3 , and the knowledge that existed when the first Procera crowns were introduced at the end of the 80 th and the early 90 th , the first Procera crowns were cemented with zinc phosphate and glass ionomer cements [11]. These cements were used because it was believed that the high fracture toughness of Al 2 O 3 copings, a property superior to that of traditional dental ceramics, would result in strong ceramic crowns. Several years earlier McLean [12] had showed that after seven years of clinical service, only 2.1% of anterior aluminous core crowns cemented with zinc phosphate cement failed. His explanation was that the higher fracture toughness of Al 2 O 3 decreased the risk of fracturing the all-ceramic crown. In addition, by using zinc phosphate and glass ionomer cements rather then resins, Procera profited from other advantages too. For example, at the time of the introduction of Procera crowns, dentists were better-trained and more used to zinc phosphate and glass ionomer cements than they were with bonding resins. In addition, removal of set phosphate cement excess was perceived as being easier to do with zinc phosphate cement than with resin cements. As a consequence, dentists felt more comfortable with using zinc phosphate and glass ionomer cements, something that facilitated the introduction of Procera crowns. Today, we know the outcome of cementing Procera crowns with zinc phosphate and glass ionomer cements [11,13]. Of the placed 87 crowns, 79 had been cemented with zinc phosphate cement and the remaining 8 crowns with glass ionomer. After 5 and 10 years of clinical service, the cumulative survival rate showed to be 97.7% and 93.5%, respectively [11,13]. The failure rate after 10 years due to coping/porcelain fractures was 5%, while the remaining 1.5% failure rate was due to poor marginal fit that had resulted in caries [13]. In addition to these failures, minor fractures occurred in 5% of the remaining crowns [13]. These chipped crowns were polished and continued to function normally. A total of 14% of the crowns came loose during the observation period and were recemented [13]. It is important to realize that these crowns were not included in the failure frequency [13]. However, the published results [13] suggest that the use of zinc phosphate and/or glass ionomer cement is not a major factor considered contributing to permanent failures of Procera crowns.
During the past few years, ZrO 2 has been introduced to dentistry [14,15]. The partially stabilized ZrO 2 has a fracture toughness twice that of A 2 O 3 [16] suggesting that ZrO 2 based copings could become a major competitor to Procera in the future. One such ZrO 2 based system is the Decim system that makes ZrO 2 copings (Denzir™) by milling zirconium dioxide rods. These ZrO 2 copings, like the Procera copings, cannot be etched because of the acid resistance of ZrO 2 . Although a resin-based cement such as Panavia is the recommended luting agent for Denzir at the present time, there is an interest in determining whether zinc phosphate and glass ionomer cements are acceptable alternatives. That interest relates primarily to properties such as simplicity of use, easiness of removing excess from marginal regions after cementation, and last, but not least important, easiness of removing a previously cemented crown if so needed. As we know from the previously quoted Procera study [13], 6.5 % of the crowns were remade because of coping/dental porcelain fractures and caries (6.5%). Another 5% suffered from acceptable chipping [13]. These findings are important, because they suggest that fractures and caries may require removal of the ceramic restoration. If the coping is well bonded to the tooth surface, the old unit must be cut away. Such a removal is not easy to do with strong ceramics, and there is a potential risk that incomplete cooling during cutting could cause pulp irritations. Because of the latter aspects, a clinically important question to address is whether zinc phosphate cemented Denzir crowns compare as well clinically regarding low ceramic fractures as resin based cemented Denzir crowns do. However, before such clinical trials can be justified ethically, in vitro tests must prove that the retention of Denzir crowns is as good as that of Procera crowns. If the retention of Denzir is as good as that of Procera, one would expect that Denzir crowns will provide as good or even better clinical results than those reported with zinc phosphate or glass ionomer cemented Procera crowns [11,13].
Because of the above considerations, the objectives of this study was to determine in vitro whether Denzir copings cemented with zinc phosphate cement to metal dies could provide as good retention strength as Procera crowns ce-mented to similar metal dies. We also wanted to determine if sandblasting would improve the retention of the Denzir copings, or if retention over time would behave differently if the cemented crowns were stored in water or artificial saliva.
Metal dies
Thirty-six metal dies were machined out of carbon steel to dimensions shown in Figure 1. During the machining procedure all the surfaces to which the zinc phosphate cement would be attached were finished to roughness values around 6.3 µm. The reason we used the 6.3 µm surface roughness was that a preliminary evaluation of dies with surface roughness values of 3.2, 6.3, 8.0 and 12.5 µm had revealed that a surface roughness value of 6.3 µm was ideal for our study. With such surface texture, the cement did not separate from the cement-model surface. Instead it fractured within the cement or at the crown-cement interface.
To verify the surface roughness values, the finished die surfaces were recorded with a profilometer (Federal Surfanalyzer System 5000, Federal Products Co, Providence, RI). The surface roughness value, R a , represents the arithmetic average of the absolute values of the measured roughness profile height deviations taken within the scanned length and measured from the mean line. These scanned recordings were made in a cervical to occlusal direction over a length of 3 mm on each metal die. The surface roughness value for that distance was then used to determine the average value for all the dies.
Impressions and gypsum dies
Before impressions were made of the metal dies, a 1.6 mm thick ring was inserted and located to the marginal part of the simulated crown preparation ( Figure 2). Impressions were then made in a polyvinylsiloxane impression material (Light Body, President, Coltène AG, Altstätten, Switzerland) supported by an individual tray. The tray had been covered with an adhesive to secure a reliable tray-impression attachment, and the space between the tray and the die was 2 mm. One hour after the impression had been made it was poured with a Type IV gypsum (Silky-Rock, Whip-Mix Corporation, Louisville, KY) and allowed to set during the night. After impression removal and die inspection, the 36 gypsum dies were sent to laboratories making Denzir and Procera copings.
Ceramic copings
Twenty-four Denzir and twelve Procera copings were ordered from Denzir and Procera certified laboratories. All ceramic copings were made 0.6 mm thick and with a cement space corresponding to 60 µm. That spacing started 0.8 mm from the cervical margin and reached its maximal
Sandblasting
Twelve of the Denzir copings were sandblasted on the inner surfaces with A1 2 O 3 (particle size = 50 µm) using an air pressure of 2 bars (200 kPa). The sandblasting process was done with the sandblasting tip located at a distance of 10 mm from the ceramic surface. The centre of the sandblasting stream targeted the transition from the occlusal to the proximal inner surfaces. The entire inner surface was then sandblasted by rotating the coping four times, each time 90 degrees. Each of these locations was sandblasted for 5 s.
Inner surface roughness
The inside of each coping was scanned with the profilometer. The scans were collected within the 0.5 to 1.5 mm interval from the cervical margin. From these scans the R a values were calculated.
Cementation
The 1.6 mm ring, located in the cervical region of the preparation when the silicone impression was made was removed and replaced with a 1.55 mm thick washer with an outer diameter of 18 mm (Figure 3). A zinc phosphate cement (Phosphate Cement, Heraeus Kulzer, Dormagen, Germany) was mixed on a room tempered glass plate. For each portion, 1.2 g powder was mixed with 0.5 mL liquid. The powder was divided into six portions (two 1/16 th , one 1/8 th , and three 1/4 th portions). First, one 1/16 th portion was mixed for 10 s, then the second 1/16 th portion for 10 s, followed by the 1/8 th portion for another 10 s. A 1/4 th portion was then added and mixed for 15 s followed by another 1/4 th portion, also mixed for 15 s. The final 1/ 4 th was then added and mixed for 30 s. Thus, a total mixing time of 1 min and 30 s was used. The mixed cement was then placed inside the ceramic coping, which was rotated 90 degrees as the coping was seated on the metal die. Thirty seconds after completed mixing, a load of 2 N was placed on the crown, and the load acted on the crown for 5 min. Excess material was removed after 7.5 min counted from the time the coping was loaded with the 2 N load.
Retention force
Fourteen days after cementation, half of the specimens (3 Denzir as received, 3 Denzir being sandblasted, and 3 Procera, all stored in distilled water; and 3 Denzir as received, 3 Denzir being sandblasted, and 3 Procera, all stored in artificial saliva) were tested in tension (Figure 4) until failure using a specially designed testing device in an Instron Universal Testing machine at a load rate of 0.5 mm/min. After one year (3 Denzir as received, 3 Denzir being sandblasted, and 3 Procera, all stored in distilled Before the copings were cemented, the metal ring shown in Figure 2 was removed and replaced with a machined washer, shown to the left (side and top view). The placement of that washer is shown as the grey field on the die to the right. water; and 3 Denzir as received, 3 Denzir being sandblasted, and 3 Procera, all stored in artificial saliva), the remaining 18 specimens were also tested as described earlier.
Fifteen minutes after the initiation of the cementation process the cemented copings with the steel dies and washers were transferred to distilled water or artificial saliva and then stored in an oven at 37°C. The artificial saliva [17] was of the following composition: 0.1 L each of 25 mM K 2 HPO 4 , 24 mM Na 2 HPO 4 , 150 mM KHCO 3 , 100 mM NaCl, and 1.5 mM MgCl 2 . To this were added 0.006 L of 25 mM citric acid and 0.1 L of 15 mM CaCl 2 . The pH was then adjusted to 6.7 with NaOH or HCl and the volume made up to 1 L. To avoid bacterial growth, we added 0.05% by weight thymol to the artificial saliva. All chemicals were ACS-grade (American Chemical Society).
Statistical evaluation
The force values, needed to dislodge the copings, were used for the statistical evaluation. One-way and two-way ANOVA:s were used to determine significant differences between materials, storage medium and storage time as well as interactions of these (ANOVA, SAS Institute, Cary, NC, USA). Comparisons between the individual groups were also conducted using Duncan's multiple range tests. All test were conducted on the 95% significance level.
Metal dies
The profilometer readings of the metal die surfaces gave an average surface roughness value (R a ) of 5.49 ± 0.98 µm. These roughness values were primarily based on wave shaped surface where the distance between the peaks was around 200 µm and where the main peak-to-main valley distance was around 20 µm ( Figure 5).
Ceramic copings and effect of sandblasting
The surface roughness values of the insides of the different coping groups are shown in Table 1. No significant difference existed between the three combinations (p = 0.2239).
Retention force
The statistical analysis revealed that the most important factor affecting the retention force was storage time (Tables 2 and 5). There was no difference between the two main materials or whether the Denzir copings had been sandblasted or not (Table 3).
Comparing the storage media could not prove whether such a difference existed (p = 0.082) ( Table 4). In this comparison, no consideration was taken for the different material groups and storage times. When storage time only was compared there was a significant increase in retention force with time ( Table 5).
As seen in the Tables 6, 7 and 8, there are large differences among the different test groups (standard deviation ~30% of the mean value). From Table 2, we can also see that there are no significant interactions, although the material/storage and storage/time interactions are pretty close.
Figure 4
The die with the cemented coping was inserted into a specially designed testing device (left drawing). The washer was located under the horizontal top bar shown above and the die protruded through that bar. A metal pin (P) was inserted through a metal band and a hole drilled through the die. That attachment is shown in the central drawing. The 300.0 mm long metal band was attached to the universal testing machine that generated a recordable force (in the direction of the arrow shown in the figure). The metal band and the attachments are shown in the reduced drawing to the right. Of the Procera copings, two copings fractured during testing after 14 days of storage and one coping fractured after one year. Of all tested Denzir copings, not a single coping fractured. However, careful inspection with transilluminating light revealed that one of the Denzir copings tested after 1 year had a crack that extended from the cervical region to the occlusal region. Table 1 shows that there was no significant difference in surface roughness between the three evaluated ceramic coping groups. The low value of the sandblasted Denzir copings suggest that the machining process generated a surface roughness that was at least as rough as a machined and sandblasted Denzir surface. Because of these findings, sandblasting conducted under the conditions evaluated in this study is not recommended for Denzir copings.
Effect of storage
The statistical evaluation of variables such as material, storage and time as well as interactions of these variables revealed that storage time was significant regarding retention force ( Table 2). Storage medium and interaction between time and storage medium were almost significant on the 95% significance level.
There was no difference between the three material groups regarding retentive force (Table 3). That finding most likely relates to the similarities in surface roughness values among the three groups ( Table 1). The similarities in retentiveness among the three material groups are important. Because of the published success rate of Procera after 10 years in clinical service [13], our in vitro results suggest that Denzir copings, sandblasted or not, and cemented with zinc phosphate cement are likely to perform equally well as Procera crowns, at least regarding retention.
Based on the lower fracture frequency identified among the Denzir copings, our findings suggest that Denzir copings might perform even better than Procera crowns. Of the twelve tested Procera copings, three complete fractures occurred in the Procera copings, while of the twenty-four tested Denzir copings only one had a detectable crack that did not even result in a clear fracture during testing. At the present time, though, one cannot exclude that these differences are coincidental. However, the higher fracture toughness of Denzir, almost twice as high as that of Procera, probably explains the lower fracture tendency of Denzir identified in this study. Future studies regarding CADCAM technologies need to focus on flaw formation that might be induced during manufacturing. One may suspect that a milling process like the one used to make the Denzir copings, induce more flaws than a pressing and sintering technique as the one used to make Procera copings. However, there is no proof available supporting that assumption at the present time. In the case of Procera, one cannot exclude the possibility that flaws are induced when copings are pressed and that these flaws may not heal completely during sintering. Besides, during sintering and cooling, thermal stresses may be induced that trigger crack formation in the future. From the above argumentation, flaws may very well be induced during manufacturing of both Denzir and Procera copings. Thus, differences in either fracture toughness or flaw sizes/densities, or a combination of the two, would explain why the Procera copings had higher fracture tendency. The higher fracture toughness of zirconia favours Denzir and would explain the lower fracture frequency seen in these copings. However, whether the flaws introduced in Denzir copings are smaller or bigger than those present in Procera copings is not known and needs to be investigated further. Flaw formation during manufacturing becomes very important when we compare different zirconia crowns that now are available on the market. Some of them are made by milling industrially sintered and processed zirconia, while other are made by milling presintered zirconia that is then sintered.
During our evaluation, we used the force levels generated by the copings that fractured during testing. One could argue that such values should be excluded because the samples fractured. However, we did not exclude those samples of the following reasons: First. the force levels on the copings that fractured were not lower than those of those that did not fracture. Second, we were not able to determine whether the fracture occurred before or after debonding had occurred because of the speed of the dislodgement/ fracturing process.
Storage of the copings in artificial saliva resulted in force values almost significantly higher than those stored in water (Table 4). A possible explanation is that some of the ions, for example phosphate ions, diffused into the cement and pushed the setting reaction toward an increased precipitation reaction. Such an explanation can be related to the setting reaction of zinc phosphate cements. As the storage time increased, the required force needed to dislodge the copings also increased ( Table 5). A likely explanation is that as time passed the setting reaction became more complete. There is also a possibility that corrosion of the steel dies and release of iron ions from the dies affected the setting reaction of the zinc phosphate cement. Such a corrosion process might also have increased the surface roughness at the cement-dye interface and thereby also increased the mechanical retention.
Even though time improved the retention of the cemented copings, one should not extrapolate that value to the clinical situation. Clinically, the coping would be exposed to different loads during the entire observation time. In our study, no such forces acted on the cemented coping from time of cementation to time of testing. However, the improved results with time shows that storage media such as water and artificial saliva by themselves do not decrease the retention force. This finding is important, because it implies that other factors are more important when we try to explain why the retention of zinc phosphate cemented Our results suggest that a clinical evaluation of Denzir crowns cemented with zinc phosphate cement are likely to perform as well as Procera crowns cemented with zinc phosphate cement. However, based on Burke et al.'s [5] review, which supported the use of resin cements, one can question the rational of even considering using zinc phosphate cement as a luting agent in a clinical study. There are at least two reasons justifying such a clinical study. First, by assuming that the high success rate of zinc phosphate cemented Procera crowns is likely to be equally high with Denzir copings, justifies such a study ethically. Second, the simplicity of using zinc phosphate cements, their ease of removal from marginal regions after setting, and the ease with which a zinc phosphate cemented crown can be removed if remake is needed, are beneficial clinical advantages that cannot be neglected.
Having justified the use of zinc phosphate cement in a clinical study, it is also important to emphasize that such an evaluation should consider retention and solubility of the luting agent too. In the Procera study conducted by Ödman and Andersson [13], retention failures requiring recementation were not included in their impressive success rate. Present evidences suggest that resin bonding improves the results with ceramic restorations [5], even though these claims are not conclusive [18,19]. There is no doubt that retention is an important factor to consider, but one must also accept that strong bonding can also be a drawback if the crown needs to be removed. In the latter case, a well-bonded ceramic restoration can be a bigger clinical challenge than the need for recementing a less well-bonded restoration.
One often hears the claim that resin cements decrease the fracture frequency of ceramics. Such a claim is justified for some ceramic systems, but is may not be valid when we are dealing with high strength ceramic copings like the ones used in both Procera and Decim. Instead, some clinical studies dealing zinc phosphate cemented alumina copings are so good that one can question whether resin bonded copings will outperform these results. It is first when comparative studies take all these pros and cons into consideration, as we know whether resin bonded alumina or zirconia copings outperform zinc phosphate cemented alumina or zirconia copings.
Conclusions
Denzir copings, cemented with zinc phosphate cement to steel dies, perform at least as well as Procera copings cemented with the same zinc phosphate after storage in water and artificial saliva for one year when tested in vitro. The use of sandblasting under the conditions given in this study does not enhance the internal surface roughness or the retentiveness of the Denzir copings. During a one-year storage time in water or artificial saliva, the retentiveness did not decrease. Instead, the retentiveness of the samples increased. | 6,073.2 | 2003-02-07T00:00:00.000 | [
"Medicine",
"Materials Science"
] |
INFINITELY MANY POSITIVE AND SIGN-CHANGING SOLUTIONS FOR NONLINEAR FRACTIONAL SCALAR FIELD EQUATIONS
. We consider the following nonlinear fractional scalar field equation where K ( | x | ) is a positive radial function, N ≥ 2, 0 < s < 1, and 1 < p < N +2 s N − 2 s . Under various asymptotic assumptions on K ( x ) at infinity, we show that this problem has infinitely many non-radial positive solutions and sign-changing solutions, whose energy can be made arbitrarily large.
Here, the fractional Laplacian of a function f : R N → R is expressed by the formula where C N,s is some normalization constant (See Sect.2).
Problem (1) arises from looking for standing waves Ψ(t, x) = exp(iEt)u(x) for the following nonlinear equations where i is the imaginary unit, E ∈ R. This equation is of particular interest in fractional quantum mechanics for the study of particles on stochastic fields modelled by Lévy processes. A path integral over the Lévy flights paths and a fractional Schrödinger equation of fractional quantum mechanics are formulated by Laskin [24] from the idea of Feynman and Hibbs's path integrals (see also [25]). The Lévy processes occur widely in physics, chemistry and biology. The stable Lévy processes that give rise to equations with the fractional Laplacians have recently attracted much research interest, and there are a lot of results in the literature on the existence of such solutions, e.g., [4,7,18,35,5,1,40,26,30,10,11] and the references therein.
A partner problem of (1) is the following Schrödinger equation In the sequel, we will assume that V, K is bounded, and V (x) ≥ V 0 > 0, K(x) ≥ K 0 > 0 in R N . It is well known, but not completely trivial, that (−∆) s reduces to the standard Laplacian −∆ as s → 1. When s = 1, the classical nonlinear Schrödinger equation and scalar field equation have been extensively studied in the last thirty years. Moreover, if then, using the concentration compactness principle [31,32], one can show that (1) (resp. (4)) has a least energy solution (See for example [17,31,32,15,40]). But if (5) does not hold, (1) or (4) then it is easy to see that problem (4) has no least energy solutions. So, in this case, one needs to find solutions with higher energy. Recently, Cerami et al. [8] showed that problem (4) with s = 1 has infinitely many sign-changing solutions if V (x) goes to its limit at infinity from below at a suitable rate. In [38], Wei and Yan gave a surprising result which says that (1) or (4) with s = 1 and V (x) or K(x) being radial has solutions with large number of bumps near infinity and the energy of this solutions can be very large. This kind of results was generalized by Ao and Wei in [2] very recently to the case in which V (x) or K(x) does not have any symmetry assumption. We should also mention another interesting paper [9], where infinitely many positive solutions to (4) with s = 1 were verified to exist by Cerami and Passaseo by using mini-max argument and no symmetry assumption was imposed on V (x). For more results on (1) and (4) with s = 1, we can refer to [16,17] and the references therein. For results on this aspect when 0 < s < 1, the readers can refer [18,27,34] and the references therein. In recent years, the singularly perturbed problem of (4) with s = 1 has been widely researched, such as [3,6,13,22,28,29,12,14]. When 0 < s < 1, Chen and Zheng [12] studied the following singularly perturbed problem They showed that when N = 1, 2, 3, ε is sufficiently small, max{ 1 2 , n 4 } < s < 1 and V satisfies some smoothness and boundedness assumptions, equation (6) has a nontrivial solution u ε concentrated to some single point as ε → 0. Very recently, in [14], Dávila, del Pino and Wei generalized various existence results known for (6) with s = 1 to the case of fractional Laplacian. For results which are not for singularly perturbed type of (1) and (4) with 0 < s < 1, the readers can refer to [4,18,36] and the references therein.
As far as we know, it seems that there is no result on the existence of multiple solutions of equation (1) which is not a singularly perturbed problem. The aim of this paper is to obtain infinitely many non-radial positive solutions and sing-changing solutions for (1) whose functional energy are very large, under some assumptions for K(x) = K(|x|) > 0 near the infinity. For convenience, in this paper, we consider the following nonlinear fractional scalar field equation Let N + 2s N + 2s + 1 < m < N + 2s.
We assume that 0 < K(|x|) ∈ C(R N ) satisfies the following conditions at infinity for some a > 0, θ > 0. Without loss of generality, we may assume that K 0 = 1.
Remark 1. The radial symmetry can be replaced by the following weaker symmetry assumption: after suitably rotating the coordinated system, where a > 0, θ > 0 and K 0 > 0 are some constants.
Remark 2. Using the same argument, we can prove that if where V 0 > 0, θ > 0, then problem (4) has infinitely many positive non-radial solutions if a > 0 and has infinitely many non-radial sign-changing solutions if a < 0. Now let us outline the main idea to prove our main results. We will use the unique ground state U of to build up the approximate solutions for (7). It is well known that when s = 1, the ground state solution of (9) decays exponentially at infinity. But from [19,20], we see that when s ∈ (0, 1), the unique ground solution of (9) decays like 1 |x| N +2s when |x| → ∞.
WEI LONG, SHUANGJIE PENG AND JING YANG
For any integer k > 0, define To prove Theorem 1.1, it suffices to verify the following result: Under the assumption of Theorem (1.1), there is an integer k 0 > 0, such that for any integer k ≥ k 0 , (7) has a solution u k of the form N +2s−m ] for some constants r 1 > r 0 > 0 and as k → +∞, To consider the sign-changing solutions, for any integer k > 0, we definē N +2s−m ] for somer 1 >r 0 > 0. WriteŪ To prove Theorem 1.2, we only need to show that: Then there is an integer k > 0, such that for any integer k ≥k, (7) has a solutionū k of the form N +2s−m ] for some constantsr 1 >r 0 > 0 and as k → +∞, The idea of our proof is inspired by that of [38] where infinitely many positive non-radial solutions to a nonlinear Schrödinger equations (3) with s = 1 are obtained when the potential approaches to a positive constant algebraically at infinity. We will use the well-known Lyapunov-Schmidt reduction scheme to transfer our problem to a maximization problem of a one-dimensional function in a suitable range. Compared with the operator −∆, which is local, the operator (−∆) s with 0 < s < 1 on R N is nonlocal. So it is expected that the standard techniques for −∆ do not work directly. In particular, when we try to find spike solutions for (7) with 0 < s < 1, (−∆) s may kill bumps by averaging on the whole R N . For example, the ground state for (7) with 0 < s < 1 decays algebraically at infinity, which is a contrast to the fact that the ground state for −∆ decays exponentially at infinity. This kind of property requires us to establish some new basic estimates and give a precise estimate on the energy of the approximate solutions.
This paper is organized as follows. In Sect.2, we will give some preliminary properties related to the fractional Laplacian operator. In Sect.3, we will establish some preliminary estimate. We will carry out a reduction procedure and study the reduced one dimensional problem to prove Theorems 1.3 and 1.4 in Sect.4. In Appendix, some basic estimates and an energy expansion for the functional corresponding to problem (7) will be established.
2. Basic theory on fractional Laplacian operator. In this section, we recall some properties of the fractional order Sobolev space and the ground state solution U of the limit equation (9).
Let 0 < s < 1. Various definitions of the fractional Laplacian (−∆) s f (x) of a function f defined in R N are available, depending on its regularity and growth properties.
It is well known that it can be defined as a pseudo-differential operator where is Fourier transform. When f have some sufficiently regular, the fractional Laplacian of a function f : R N → R is expressed by the formular . This integral makes sense directly when s < 1 In the remarkable work of Caffarelli and Silvestre [5], this nonlocal operator was expressed as a generalized Dirichlet-Neumann map for a certain elliptic boundary value problem with nonlocal differential operator defined on the upper half-space R N +1 and The equation (11) can also be written as div(y 1−2s ∇u) = 0, which is clearly the Euler-Lagarnge equation for the functional Then, it follows from (10)-(12) that and the norm is Here the term is the so-called Gagliardo (semi) norm of u. The following identity yields the relation between the fractional operator (−∆) s and the fractional Laplacian Sobolev space for a suitable positive constant C depending only s and N . Clearly, · H s (R N ) is a Hilbertian norm induced by the inner product On the Sobolev inequality and the compactness of embedding, one has Theorem 2.1. [33] The following imbeddings are continuous: Now, we recall some known results for the limit equation (9). If s = 1, the uniqueness and non-degeneracy of the ground state U for (9) is due to [23]. In the celebrated paper [19], Frank and Lenzemann proved the uniqueness of ground state solution U (x) = U (|x|) ≥ 0 for N = 1, 0 < s < 1, 1 < p < 2 * (s) − 1. Very recently, Frank, Lenzemann and Silvestre [20] obtained the non-degeneracy of ground state solutions for (9) in arbitrary dimension N ≥ 1 and any admissible exponent 1 < p < 2 * (s) − 1.
For convenience, we summarize the properties of the ground state U of (9) which can be found in [19,20].
Then the following hold.
(i) (Uniqueness) The ground state solution U ∈ H s (R N ) for equation (9) is unique.
(ii)(Symmetry, regularity and decay) U (x) is radial, positive and strictly decreasing in |x|. Moreover, the function U belongs to H 2s+1(R N ) ∩ C ∞ (R N ) and satisfies , its kernel is given by By Lemma C.2 of [20], it holds that, for j = 1, · · · , N, ∂ xj U has the following decay estimate, From Theorem 2.2, the ground bound state solution like 1 |x| N +2s when |x| → +∞. Fortunately, this polynomial decay is enough for us in the estimates of our proof.
3. Some preliminaries. In the sequel, we mainly concentrate on the existence of positive solutions. For the existence of sign-changing solutions, we will only give a sketch at the end of Section 4. Let for some r 1 > r 0 > 0. Define Note that the variational functional corresponding to (7) is
WEI LONG, SHUANGJIE PENG AND JING YANG
Letting We can expand J(ϕ) as follows: where and In order to find a critical point for J(ϕ), we need to discuss each term in the expansion (13). Now we come to the main result in this section.
Proof. By direct calculation, we know that Firstly, we deal with the case p > 2. Since On the other hand, it follows from Lemma A.2 that U r is bounded. We obtain So, R (ϕ) ≤ C ϕ s . Using the same argument, for the case 1 < p ≤ 2, we also can obtain that 4. The finite-dimensional reduction and proof of the main results. In this section, we intend to prove the main theorem by the Lyapunov-Schmidt reduction. Associated to the quadratic form L(ϕ), we define L to be a bounded linear map from E to E, such that Now, we assume that r ∈ S k and define where α > 0 is a small constant, B 0 and B 1 are defined in Proposition A.1. Next, we show the invertibility of L in E.
Proposition 4.1. There exists an integer k 0 > 0, such that for k ≥ k 0 , there is a constant ρ > 0 independent of k, satisfying that for any r ∈ S k , Lu ≥ ρ u s , u ∈ E.
Proof. We argue it by contradiction. Suppose that there are n → +∞, r k ∈ S k , and u n ∈ E, such that By symmetry, we have In particular, where o n (1) → 0 as n → +∞. Set u n (x) = u n (x − x 1 ). Then for any R > 0, since One can obtain So we suppose that there is a u ∈ H s (R N ), such that as n → +∞, and u n → u, in L 2 loc (R N ). Since u n is even in x j , j = 2, · · · , N, it is easy to see that u is even in x j , j = 2, · · · , N. On the other hand, from Now, we claim that u satisfies Indeed, we set For any R > 0, let ϕ ∈ C ∞ 0 (B R (0)) ∩ E be any function, satisfying that ϕ is even in x j , j = 2, · · · , N. Then ϕ 1 (x) = ϕ(x − x 1 ) ∈ C ∞ 0 (B R (x 1 )). We may identify ϕ 1 (x) as elements in E by redefining the values outside Ω 1 with the symmetry. By using (15) and Lemma A.2, we find On the other hand, since u is even in x j , j = 2, · · · , N, (18) holds for any ϕ ∈ C ∞ 0 (B R (0)) ∩ E. By the density of C ∞ 0 (R N ) in H s (R N ), it is easy to show that We know ϕ = ∂U ∂x1 is a solution of (19), thus (19) is true for any ϕ ∈ H s (R N ). One see that u = 0 because u is even in x j , j = 2, · · · , N and (17). As a result, Now, using the Lemma A.2, we obtained that for any 1 < η ≤ N + 2s, there is a constant C > 0, such that which is impossible for large R.
As a result, we get a contradiction.
Proposition 4.2.
There is an integer k 0 > 0, such that for each k ≥ k 0 , there is a C 1 map from S k to H : ω = ω(r), r = |x 1 |, satisfying ω ∈ E, and J (ω) Moreover, there exists a constant C > 0 independent of k such that ω s ≤ Ck where τ > 0 is a small constant.
Proof. We will use the contraction theorem to prove it. By the following Lemma 4.1, l(ω) is a bounded linear functional in E. We know by Reisz representation theorem that there is an l k ∈ E, such that l(ω) = l k , ω .
WEI LONG, SHUANGJIE PENG AND JING YANG
We shall verify that A is a contraction mapping fromS k to itself. In fact, on one hand, for any ω ∈S k , by Lemmas 4.1 and 3.1, we obtain On the other hand, for any ω 1 , ω 2 ∈S k , Then the result follows from the contraction mapping theorem. The estimate (20) follows Lemma 4.1.
On the other hand, we have for some small constant τ > 0, where the last inequality is due to the assumption N +2s N +2s+1 < m < N + 2s. Inserting (24)- (25) into (22), we can complete the proof. Now, we are ready to prove our main theorem. Let ω = ω(r) be the map obtained in Proposition 4.2. Define It follows from Lemma 6.1 in [14] that if r is a critical point of F (r), then U r + ω is a solution of (7).
Proof of Theorem 1.3. It follows from Propositions 4.2 and A.1 that
WEI LONG, SHUANGJIE PENG AND JING YANG
Define We consider the following maximization problem Suppose thatr is a maximizer, we will prove thatr is an interior point of S k .
We can check that the function By direct computation, we deduce that On the other hand, we find and similarly where we have used the fact that the function f (t) = t − N +2s N +2s−m (t − 1) attains its maximum at t 0 = N +2s The above estimates implies thatr is indeed an interior point of S k . Thus ur = Ur + ωr is a solution of (7).
At last, we claim that ur > 0. Indeed, since ωr s → 0 as k → ∞, noticing the fact (see [14] for example) that whereũr(x, y) is the s-harmonic extension of ur satisfyingũr(x, 0) = ur(x), we can use the standard argument to verify that (ur) − = 0 and hence ur ≥ 0. Since ur solves (12), we conclude by using the strong maximum principle that ur > 0.
The sketch of proof of Theorem 1.4. Set where B 0 , B 1 are given in Proposition A.2, α > 0 is a small constant. For r ∈S k , letŪ We will find a solution for equation (7) of the formŪ r (x) +ω with To this end, we should also perform the same procedure as the proof of Theorem 1.3. Proceeding as we prove Proposition 4.2, we conclude that for any r ∈S k , there is uniqueω r ∈ C 1 (S k ,Ē), such thatŪ r (x)+ω r is a critical point of J onĒ. Moreover, ω s ≤ Ck Now to prove thatŪ r (x) +ω is a critical point of J on H s (R N ) can be reduced to finding a minimum point of functionF (r) inS k , which can be realized exactly as we have done in the proof of Theorem 1.3.
Then, we have the following basic estimate: For any x ∈ Ω 1 , and η ∈ (1, N + 2s], there is a constant C > 0, such that Proof. The proof of this lemma is similar to Lemma A.1 in [38], we sketch the proof below for the sake of completeness.
For any x ∈ Ω 1 , we have for i = 1, So, we find Since So, there is a constant B > 0, such that There is a small constant τ > 0, such that Proof. Using the symmetry, we have It follows from Lemma A.1 that Hence, there exists B 0 (which may depend on k) in [C 2 , C 1 ], where C 1 and C 2 are independent of k, such that Now, by symmetry, we see where κ > 0 satisfies min{ p+1 2 (N + 2s − κ), 2(N + 2s − κ)} > N + 2s. Hence, we get .
So, we have proved Now, inserting (27)- (29) into I(U r ), we complete the proof. | 4,706.8 | 2015-08-01T00:00:00.000 | [
"Mathematics"
] |
Exact Solutions for Equations of Bose-Fermi Mixtures in One-Dimensional Optical Lattice
We present two new families of stationary solutions for equations of Bose-Fermi mixtures with an elliptic function potential with modulus $k$. We also discuss particular cases when the quasiperiodic solutions become periodic ones. In the limit of a sinusoidal potential ($k\to 0$) our solutions model a quasi-one dimensional quantum degenerate Bose-Fermi mixture trapped in optical lattice. In the limit $k\to 1$ the solutions are expressed by hyperbolic function solutions (vector solitons). Thus we are able to obtain in an unified way quasi-periodic and periodic waves, and solitons. The precise conditions for existence of every class of solutions are derived. There are indications that such waves and localized objects may be observed in experiments with cold quantum degenerate gases.
Introduction
Over the last decade, the field of cold degenerate gases has been one of the most active areas in physics. The discovery of Bose-Einstein Condensates (BEC) in 1995 (see e.g. [1,2]) greatly stimulated research of ultracold dilute Boson-Fermion mixtures. This interest is driven by the desire to understand strongly interacting and strongly correlated systems, with applications in solid-state physics, nuclear physics, astrophysics, quantum computing, and nanotechnologies.
An important property of Bose-Fermi mixtures wherein the fermion component is dominant is that the mixture tends to exhibit essentially three-dimensional character even in a strongly elongated trap. During the last decade, great progress has been achieved in the experimental realization of Bose-Fermi mixtures [3,4], in particular Bose-Fermi mixtures in one-dimensional lattices. Optical lattices provide a powerful tool to manipulate matter waves, in particular solitons. The Pauli exclusion principle results in the extension of the fermion cloud in the transverse direction over distances comparable to the longitudinal dimension of the excitations. It has been shown recently, however, that the quasi-one-dimensional situation can nevertheless be realized in a Bose-Fermi mixture due to strong localization of the bosonic component [5,6]. With account of the effectiveness of the optical lattice in managing systems of cold atoms, their effect on the dynamics of Bose-Fermi mixtures is of obvious interest. Some of the aspects of this problem have already been explored within the framework of the mean-field approximation. In particular, the dynamics of the Bose-Fermi mixtures were explored from the point of view of designing quantum dots [8]. The localized states of Bose-Fermi mixtures with attractive (repulsive) Bose-Fermi interactions are viewed as a matter-wave realization of quantum dots and antidots. The case of Bose-Fermi mixtures in optical lattices is investigated in detail and the existence of gap solitons is shown. In particular, in [8] it is obtained that the gap solitons can trap a number of fermionic bound-state levels inside both for repulsive and attractive boson-boson interactions. The time-dependent dynamical mean-field-hydrodynamic model to study the formation of fermionic bright solitons in a trapped degenerate Fermi gas mixed with a Bose-Einstein condensate in a quasi-one-dimensional cigar-shaped geometry is proposed in [9]. Similar model is used to study mixing-demixing in a degenerate fermion-fermion mixture in [10]. Modulational instability, solitons and periodic waves in a model of quantum degenerate bosonfermion mixtures are obtained in [11].
Our aim is to derive two new classes of quasi-periodic exact solutions of the time dependent mean field equations of Bose-Fermi mixture in one-dimensional lattice. We also study some limiting cases of these solutions. The paper is organized as follows. In Section 2 we give the basic equations. Section 3 is devoted to derivation of the first class quasi-periodic solutions with non-trivial phases. A system of N f + 1 equations, which reduce quasi-periodic solutions to periodic are derived. In Section 4 we present second class (type B) nontrivial phase solutions. In Section 5 we obtain 14 classes of elliptic solutions. Section 6 is devoted to two special limits, to hyperbolic and trigonometric functions. In Section 7 preliminary results about the linear stability of solutions are given. Section 8 summarizes the main conclusions of the paper.
Basic equations
At mean field approximation we consider the following N f + 1 coupled equations [7,8,12,11] a BB and a BF are the scattering lengths for s-wave collisions for boson-boson and boson-fermion interactions, respectively. In recent experiments [13,14] the quantum degenerate mixtures of 40 K and 87 Rb are studied where m B = 87m p , m B = 40m p and ω ⊥ = 215 Hz. Equations (2.1), (2.2) have been studied numerically in [7]. The formation of localized structures containing bosons and fermions has been reported in the particular case in which the interspecies scattering length a BF is negative, which is the case of the 40 K-87 Rb mixture. An appropriate class of periodic potentials to model the quasi-1D confinement produced by a standing light wave is given by [15] where sn (αx, k) denotes the Jacobian elliptic sine function with elliptic modulus 0 ≤ k ≤ 1. Experimental realization of two-component Bose-Einstein condensates have stimulated considerable attention in general [16] and in particular in the quasi-1D regime [17,18] when the Gross-Pitaevskii equations for two interacting Bose-Einstein condensates reduce to coupled nonlinear Schrödinger (CNLS) equations with an external potential. In specific cases the two component CNLS equations can be reduced to the Manakov system [19] with an external potential. Important role in analyzing these effects was played by the elliptic and periodic solutions of the above-mentioned equations. Such solutions for the one-component nonlinear Schrödinger equation are well known, see [20] and the numerous references therein. Elliptic solutions for the CNLS and Manakov system were derived in [21,22,23].
In the presence of external elliptic potential explicit stationary solutions for NLS were derived in [15,24,25]. These results were generalized to the n-component CNLS in [18]. For 2-component CNLS explicit stationary solutions are derived in [26].
Stationary solutions with non-trivial phases
We restrict our attention to stationary solutions of these CNLS where j = 1, . . . , N f , κ 0 , κ 0,j , are constant phases, q j and Θ 0 , Θ j (x) are real-valued functions connected by the relation . . , N f being constants of integration. Substituting the ansatz (3.1), (3.2) in equations (2.1) and separating the real and imaginary part we get We seek solutions for q 2 0 and q 2 j , j = 1, . . . , N f as a quadratic function of sn (αx, k): Inserting (3.5) in (3.4) and equating the coefficients of equal powers of sn (αx, k) results in the following relations among the solution parameters ω j , C j , A j and B j and the characteristic of the optical lattice V 0 , α and k: where j = 1, . . . , N f . Next for convenience we introduce Table 1.
In order for our results (3.5) to be consistent with the parametrization (3.1)-(3.3) we must ensure that both q 0 (x) and Θ 0 (x) are real-valued, and also q j (x) and Θ j (x) are real-valued; this means that C 2 0 ≥ 0 and q 2 0 (x) ≥ 0 and also C 2 j ≥ 0 and q 2 j (x) ≥ 0 (see Table 1, ). An elementary analysis shows that with l = 0, . . . , N f one of the following conditions must hold Although our main interest is to analyze periodic solutions, note that the solutions Ψ b , Ψ f j in (2.1), (2.2) are not always periodic in x. Indeed, let us first calculate explicitly Θ 0 (x) and Θ j (x) by using the well known formula, see e.g. [27]: where ℘, ζ, σ are standard Weierstrass functions.
In the case a) we replace v by iv 0 and v by iv j , set sn 2 (iαv 0 ; k) = β 0 < 0, sn 2 (iαv j ; k) = β j < 0 and and rewrite the l.h.s in terms of Jacobi elliptic functions: Skipping the details we find the explicit form of These formulae provide an explicit expression for the solutions Ψ b , Ψ f j with nontrivial phases; note that for real values of v 0 Θ 0 (x), v j Θ j (x) are also real. Now we can find the conditions under which Q j (x, t) are periodic. Indeed, from (3.9) we can calculate the quantities T 0 , T j satisfying: Then Ψ b , Ψ f j will be periodic in x with periods T 0 = 2m 0 ω/α, T j = 2m j ω/α if there exist pairs of integers m 0 , p 0 , and m j , p j , such that: where ω (and ω ′ ) are the half-periods of the Weierstrass functions.
Type B nontrivial phase solutions
For the first time solutions of this type were derived in [15,24,25] for the case of nonlinear Schrödinger equation and in [18] for the n-component CNLSE. For Bose-Fermi mixtures solutions of this type are possible • when we have two lattices V B and V F , We seek the solutions in one of the following forms: In the first case (4.1) we have We remark that due to relations B 1 we have that all q j of the fermion fields are proportional to q 1 .
Examples of elliptic solutions
Using the general solution equations (3.6)-(3.8) we have the following special cases: (these solutions are possible only when we have some restrictions on g BB , g BF , and V 0 see the Table 1) For the frequencies ω 0 and ω j we have as well as C 0 = C j = 0.
The coefficients A 0 and A j have the same form as (5.2). The frequencies ω 0 and ω j now look as follows The constants C 0 and C j are equal to zero again.
Example 3. B 0 = −A 0 /k 2 and B j = −A j /k 2 . In this case we obtain As before C 0 = C j = 0.
By analogy with the previous examples the constants A 0 , A j , C 0 and C j are given by formulae (5.2) and C 0 , C j are all zero.
Example 5. B 0 = 0 and B j = −A j /k 2 . Thus one gets All these cases when V 0 = 0 and j = 2 are derived for the first time in [11].
Mixed trivial phase solution
. . , N f the solutions obtain the form q 0 = A 0 sn (αx, k), q 1 = A 1 sn (αx, k), Using equations (3.6)-(3.8) we have . . , N f . Therefore the solutions read Then we obtain for frequencies the following results The frequencies are , Certainly these examples do not exhaust all possible combinations of solutions and it is easy to extend this list.
.1 Vector bright-bright soliton solutions
When k → 1, sn (αx, 1) = tanh(αx) and B 0 = −A 0 , B j = −A j we obtain that the solutions read where A 0 ≤ 0 as well as A j ≤ 0. Using equations (3.6)-(3.8) we have As a consequence of the restrictions on A 0 and A j one can get the following unequalities Vector bright soliton solution when V 0 = 0 is derived for the first time in [11].
Vector dark-dark soliton solutions
When k → 1 and B 0 = B j = 0 are satisfied the solutions read The natural restrictions A 0 ≥ 0 and A j ≥ 0 lead to For the frequencies ω 0 and ω j and the constants C 0 and C j we have
Vector bright-dark soliton solutions
When k → 1, B 0 = −A 0 and B j = 0, we have The parameters A 0 and A j are given by (6.1). In this case we have the following restrictions
Vector dark-bright soliton solutions
When k → 1 and provided that B 0 = 0 and B j = −A j the result is , By analogy with the previous examples the constants A 0 , A j , C 0 and C j are given by formulae (6.1) and (6.2) respectively. The restrictions now are
Vector dark-dark-bright soliton solutions
Let B 0 = B 1 = 0 and B j = −A j where j = 2, . . . , N f . Therefore the solutions read Then we obtain for frequencies the following results These examples are by no means exhaustive.
Nontrivial phase, trigonometric limit
In this section we consider a trap potential of the form V trap = V 0 cos(2αx), as a model for an optical lattice. Our potential V is similar and differs only with additive constant. When k → 0, sn (αx, 0) = sin(αx) 3)
4)
Using equations (3.6)-(3.8) again we obtain the following result when (see Table 3) This solution is the most important from the physical point of view [8].
Linear stability, preliminary results
To analyze linear stability of our initial system of equations we seek solutions in the form and obtain the following linearized equations The analysis of the latter matrix system is a difficult problem and only numerical simulations are possible. Recently a great progress was achieved for analysis of linear stability of periodic solutions of type (3.1), (3.2) (see e.g. [15,24,25,18,26] and references therein). Nevertheless the stability analysis is known only for solutions of type (5.1)-(5.6) and solutions with nontrivial phase of type (6.3) and (6.4). Linear analysis of soliton solutions is well developed, but it is out scope of the present paper. Finally we discuss three special cases: Case I. Let B 0 = B j = 0 then for j = 1, . . . , N f and q 0 = √ A 0 sn (αx, k), q j = A j sn (αx, k) we have the following linearized equations: Case II. Let B 0 = −A 0 , B j = −A j then for q 0 = √ −A 0 cn (αx, k), q j = −A j cn (αx, k) we obtain the following linearized equations: Case III. Let B 0 = −A 0 /k 2 , B j = −A j /k 2 therefore the solutions are q 0 = −A 0 dn (αx, k)/k, q j = −A j dn (αx, k)/k, and we obtain the following linearized equations These cases are by no means exhaustive.
Conclusions
In conclusion, we have considered the mean field model for boson-fermion mixtures in optical lattice. Classes of quasi-periodic, periodic, elliptic solutions, and solitons have been analyzed in detail. These solutions can be used as initial states which can generate localized matter waves (solitons) through the modulational instability mechanism. This important problem is under consideration. | 3,473.4 | 2007-03-28T00:00:00.000 | [
"Physics"
] |
Predicting drug-target interactions by dual-network integrated logistic matrix factorization
In this work, we propose a dual-network integrated logistic matrix factorization (DNILMF) algorithm to predict potential drug-target interactions (DTI). The prediction procedure consists of four steps: (1) inferring new drug/target profiles and constructing profile kernel matrix; (2) diffusing drug profile kernel matrix with drug structure kernel matrix; (3) diffusing target profile kernel matrix with target sequence kernel matrix; and (4) building DNILMF model and smoothing new drug/target predictions based on their neighbors. We compare our algorithm with the state-of-the-art method based on the benchmark dataset. Results indicate that the DNILMF algorithm outperforms the previously reported approaches in terms of AUPR (area under precision-recall curve) and AUC (area under curve of receiver operating characteristic) based on the 5 trials of 10-fold cross-validation. We conclude that the performance improvement depends on not only the proposed objective function, but also the used nonlinear diffusion technique which is important but under studied in the DTI prediction field. In addition, we also compile a new DTI dataset for increasing the diversity of currently available benchmark datasets. The top prediction results for the new dataset are confirmed by experimental studies or supported by other computational research.
interactions from the benchmark dataset 11 . Results from their kernel-based support vector machine model presented high performance in terms of AUC (area under curve of receiver operating characteristic) and AUPR (area under precision-recall curve). van Laarhoven et al. 13 used a kernel regularized least squares (KRLS) algorithm to predict DTI by solely using the topological information from the adjacency matrix of drug-target network. They defined a Gaussian interaction profile kernel based on the topology profiles. Using this kernel, their model exhibited the significant improvement for AUPR over the state-of-the-art methods at that time. They also found that by combining the topological information with the chemical and genomic information, model performance could be further improved. However, it should be pointed out that the above-mentioned methods were focusing on the setting where both drugs and targets were known, which means that at the stage of building models, each drug or target has at least one known interaction with the corresponding targets or drugs, respectively. In order to extend the methods to the prediction of drugs and targets without any known interaction in the dataset, Mei et al. 14 introduced a neighbor-based interaction-profile inferring method and integrated it into the bipartite local model. As a result, their model performance presented a large improvement. However, the previous kernel-based methods 13,14 only used a simple linear combination technique to form the final kernel matrix from several individual kernels.
In fact, such a simple linear setting may not be appropriate when the linear relationship is not evident among kernels. Thus, Hao et al. 15 employed a nonlinear kernel diffusion technique, motivated by the work from Wang et al. 16 , to combine different kernels and then using the diffused kernel they adopted KRLS to perform DTI predictions. As a result, the model with the diffused kernel showed better performance than that with the linearly combined kernel. However, when testing with more rigorous validations such as 10-fold cross-validation for the whole dataset, the KRLS algorithm failed to yield satisfied results though it has already adopted an advanced kernel diffusion technique. Recently, Liu et al. 17 proposed a neighborhood regularized logistic matrix factorization (NRLMF) for DTI predictions. The NRLMF model showed an encouraging result based on the 5 trials of 10-fold cross-validation and became the state-of-the-art algorithm in the field. The good performance can be attributed to the following reasons: (1) they took advantage of the merit of logistic matrix factorization, which is especially suitable for binary variables; (2) they proposed an augmented known interaction pairs technique attempting to balance the imbalanced characteristics between known and unknown pairs to some extent; (3) they adopted a neighborhood regularized manner in the objective function; and (4) they used a neighborhood smoothing method to generate new drug/target prediction scores. However, they did not consider the drug-target profile information at all when building the model, which is actually very important for DTI predictions [13][14][15] . Thus, to integrate the profile information into the model, we propose a four-step procedure for DTI predictions: (1) inferring new drug or target profiles and calculating the profile kernels; (2) diffusing drug kernels; (3) diffusing target kernels; and (4) predicting interaction scores based on the diffused kernels using the proposed algorithm by adding the "trust ensemble" idea into the model. We compare our method to prior arts based on two groups of benchmark datasets. Moreover, we also compile a new DTI dataset on the basis of the latest DrugBank records to enrich the diversity of existing benchmark datasets.
Material and Methods
Dataset. Two benchmark datasets were used to validate the proposed algorithm for DTI predictions. One was obtained from the study of Yamanishi et al. 11 , which contains the DTI interaction information as retrieved from the KEGG BRITE 18 , BRENDA 19 , SuperTarget 20 and DrugBank 21 databases. Protein sequences of targets were obtained from the KEGG GENES database 18 . Chemical compounds were obtained from the KEGG DRUG and COMPOUND databases 18 . The dataset was classified into four groups: enzymes (445 drugs, 664 targets); ion channels (210 drugs, 204 targets); G-protein coupled receptors (223 drugs, 95 targets); and nuclear receptors (54 drugs and 26 targets) as listed in Table 1. Another dataset used in this work was retrieved from the work of Kuang and co-workers 22 . The dataset consists of 3,681 known interaction pairs including 786 drugs and 809 targets (Table 1), whereas: (1) drugs were approved by FDA; (2) drugs included at least one ATC code; and (3) drug structure information was deposited in the KEGG database. Herein, target sequence similarity matrix is denoted by S ts (similarity scores among proteins for both datasets were computed using a normalized version of Smith-Waterman score 23 ). Chemical structure similarity matrix is denoted by S cs (similarity scores among compounds for both datasets were computed using the SIMCOMP tool 24 ). The interaction adjacency matrix is denoted by Y, where Y ij = 1 if drug i interacts with target j, and Y ij = 0 otherwise. The used datasets here are the same as those used in the previous studies 11,22 . Problem description. Given three matrices, S ts , S cs and Y, the task is how to make use of them to predict interactions between drug compounds and target proteins, which includes four scenarios (Fig. 1) in Hao et al. 15 . These scenarios are illustrated by four matrices of 5 drugs (i.e., D1 through D5) and 4 targets (i.e., T1 through T4). Thus, the D1-T1 interaction pair surrounded by circle consists of four cases: (1) known drug -known target (Scenario 1 in Fig. 1D). Herein, a "known drug" refers to a drug that has at least one interaction with targets (e.g., D1 in Fig. 1A,B, respectively) while a "new drug" refers to a drug that does not have any interaction with targets (e.g., D1 in Fig. 1C,D, respectively) in the dataset. Similar definitions are applied when referring to a "known target" (e.g., T1 in Fig. 1A and C, respectively) and a "new target" (e.g., T1 in Fig. 1B and D, respectively). The goal of this work is to develop a novel algorithm to improve the prediction performance of drug-target interactions. Specifically, the algorithm assigns a score to a drug-target pair estimating the likelihood of an interaction between them, whereas the higher the score is, the more likely the drug and target interact with each other.
Profile inferring and kernel construction. Though at least one interaction exists for each drug and target in the original benchmark dataset, the scenario of new drug and new target (i.e., Scenario 4) would occur when the dataset is split in the cross-validation process. Thus, the new drug/target interaction profiles were first inferred by its nearest neighbors (number of neighbors, K, was set to 5 empirically). The original similarity matrices were converted to kernel matrices (denoted by K c and K p for the compound (drug) kernel matrix and protein (target) kernel matrix, respectively, see Fig. 2) according to our previous method 15 . Specifically, for a new drug, the inferred drug-target interaction profile was calculated by the multiplication of the chemical similarity of its nearest neighbors with the corresponding drug-target interaction profile. Inferred profiles were normalized at the end by the sum of similarity values between the current drug and its neighbors. Target-drug interaction profile for a new target was calculated in a similar way. Once drugs/targets profiles were inferred for all new drugs and targets (denoted by Yi in Fig. 2), the Gaussian kernel matrices were computed, which are denoted by K d and K t based on the drug profiles and target profiles, respectively.
Similarity diffusion. Given four kernel matrices, K d , K c for drugs and K t , K p for targets, the goal of the similarity diffusion technique 15 is to diffuse K d and K c into one final kernel matrix, S d , and diffuse K t and K p into one final kernel matrix, S t (see steps 2 and 3 in Fig. 2). The important steps for similarity diffusion are summarized as follows: (1) constructing the "local" similarity matrix for each of the four kernel matrices, which means that given the number of nearest neighbors for the current drug/target (number of nearest neighbors was empirically set to 3), the nearest neighbors were kept while others were set to zeros; (2) diffusing the "local" similarity matrices and the "global" similarity matrices iteratively with a given iteration step number (number of iteration was empirically set to 2) for drugs and targets, respectively. After finishing the iteration process, the status matrices were averaged and normalized to be used as the final diffused matrices (i.e., S d for drugs and S t for targets). For details of the diffusion procedure, one can refer to the previous studies 15, 16 . Dual-network integrated logistic matrix factorization algorithm. Having obtained the diffused drug similarity matrix S d and target similarity matrix S t , together with the interaction profile matrix Y, a dual-network integrated logistic matrix factorization (DNILMF) algorithm was developed for DTI predictions.
Herein, a logistic function (i.e., , was used to yield the interaction probabilities between drugs and targets. In NRLMF, x = UV T , where U and V are two latent matrices for drugs and targets, respectively, and V T denotes the transpose of V. It can be noted that the used logistic function in NRLMF just considered the information from the drug-target interaction network (Y) itself. In fact, besides this, the probabilities for predicted interactions may also be influenced by the similarity network information between drugs (S d ) and between targets (S t ). For example, to check if drug D1 interacts with target T1, one intuitive idea is to see if the neighbors of drug D1 interact with target T1, if so, then drug D1 has a higher probability of interacting with target T1. Mathematically, the process can be expressed by x = S d UV T . Similarly, if drug D1 interacts with the neighbors of target T1, then there is a higher probability that drug D1 interacts with target T1. Mathematically, it can be expressed by x = UV T S t . Actually, a similar idea ("Social Trust Ensemble") has been proposed in the recommender systems field 25 , which gives the detailed explanation how the similarity network plays a role in the model prediction. Thus, in the current work, the interaction probability scores (ranging from 0 to 1) for drug-target pairs were calculated in the following equation: where α, β, γ are the corresponding smoothing coefficients with the summation of them as 1 (they were empirically set to α = 0.5, and β = γ = 0.25). Note that equation (1) simultaneously considers both the interaction profile network information (Y) and the two similarity network information between drugs (S d ) and targets (S t ).
According to the study 17 , by augmenting each known interaction pair to c (c ≥ 1) folds and by assuming all samples are independent, the probabilities of drug-target interactions were given as follows: where c is the augmented folds for known DTI pairs (c was set to 5 empirically). P ij refers to the interaction probability between drug i and target j. The zero-mean spherical Gaussian priors were placed on the drug and target latent vectors as shown in the following equation: where σ d 2 and σ t 2 are parameters controlling the variances of Gaussian distributions, U i denotes the latent variable for drug i, V j denotes the latent variable for target j and I denotes the identity matrix. Through a Bayesian inference, the following equation was obtained: Thus, from the above equations, the log of the posterior distributions for DNILMF were yielded as follows: where C is a constant which does not depend on the parameters. Thus, two latent variable matrices, U and V, were generated by maximizing the following objective (log-likelihood, denoted by LL) function: , λ u and λ v are regularized coefficients for U and V, respectively (they were empirically set to 5 and 1, respectively), ⋅ F 2 denotes the Frobenius norm, and ° denotes the Hadamard product (element-wise product). Herein, the gradient ascend algorithm was used to solve for U and V from the above objective function. As a result, the gradient variables for both U and V were obtained as follows: and Q T denotes the transpose of Q. In this work, the AdaGrad algorithm 26 was used to update U and V. The detailed procedure can be referred to ref. 17. Smoothing new drug/target predictions by incorporating neighbor information. As reported in the work 17 , for new drugs/targets, when the drug latent matrix (U) and target latent matrix (V) were obtained, they were replaced with new ones inferred by using their neighbor information (number of neighbors was empirically set to 5) according to the following equations: where S iu d denotes the similarity between a new drug i and a known drug u; U u denotes the latent variable of a known drug u. Similar definitions were applied to a new target. Thus, after inferring the latent matrices for new drugs and targets, the predicted interaction probability scores were calculated according to equation (1).
Results
Prediction procedure. With the given problem formulation for DTI predictions as described in the method section, we develop a complete algorithm flowchart as shown in Fig. 2. It can be noticed that the prediction procedure includes four steps. The first step is for profile inferring and kernel construction. Given the interaction adjacency matrix Y, we first infer the new drug/target profiles (all zeros for the entire row or column in Y, which may occur in the cross-validation stage), based on the respective neighbors. The inferred matrix is denoted by Yi. At the end of step 1 (see Fig. 2), all the new drug/target profiles are inferred. Based on the complete adjacency profiles (Yi), we then calculate the kernels from the drug profiles and target profiles, respectively. Herein, we adopt the Gaussian kernel in the same way as used in our previous work 15 , which results in two kernel matrices, K d and K t for drug profiles and target profiles, respectively. In the second step, we employ the kernel diffusion method 15,16 , an effective but less explored technique in the DTI prediction field, to diffuse two classes of similarity matrices for drugs, K d and K c (converted from the original compound similarity matrix in the benchmark dataset to the kernel matrix) into one final similarity matrix, denoted by S d . A similar process is performed for generating the target kernel matrices, K t and K p (converted from the original protein similarity matrix in the benchmark dataset to the kernel matrix). As a result, a final diffused matrix, S t , is generated from step 3 as shown in Fig. 2. In step 4, we finally employ our proposed DNILMF (dual-network integrated logistic matrix factorization) algorithm to perform DTI predictions. It should be pointed out that, in this last step, new drug/target interaction scores are re-computed based on their neighbor prediction values instead of their own values generated directly by the model. Our source code is available at: https://github.com/minghao2016/DNILMF.
Comparison with the state-of-the-art algorithms.
To validate DNILMF, we compare our results with those from the state-of-the-art algorithms. Firstly, we compare the DNILMF algorithm with NRLMF which previously achieved the best performance based on the benchmark dataset proposed by Yamanishi and co-workers 11 . Using the same dataset and similar cross-validation methods (i.e., 5 trials of 10-fold cross-validation under three settings: (1) CVP, cross-validation based on the drug-target pairs (see Fig. 1A); (2) CVR, cross-validation based on the rows (see Fig. 1C); and (3) CVC, cross-validation based on the columns (see Fig. 1B)), and for all of the four sub-groups, our proposed DNILMF algorithm outperforms NRLMF in terms of both AUC and AUPR, especially for AUPR as shown in Tables 2-4. In fact, all of the four sub-groups in the benchmark dataset possess the imbalanced characteristics, which means that the number of drug-target pairs with known interactions is far less than the number of pairs with no interaction evidence. Therefore, a more sensitive AUPR metric is generally preferred for assessing the prediction results for those imbalanced datasets. It can also be noted that DNILMF outperforms NRLMF with larger ratios of AUPR (i.e., AUPR1/AUPR2) than AUC (i.e., AUC1/AUC2), indicating DNILMF exhibits a stronger power for handling highly imbalanced datasets. In particular, it is interesting to note that for the GPCR class in Table 2, DNILMF outperforms NRLMF by over 6% in terms of AUPR under the setting CVP, indicating that DNILMF is more powerful to predict interactions between ligands and the target class of membrane proteins using the ligand-based method. Thus, DNILMF provides a complementary technology for the receptor-based methods (such as docking), which experience more challenges when applied to the GPCR class since the 3D crystal structures for membrane proteins are difficult to obtain. Under the setting CVR (i.e., new drugs, see Fig. 1C), DNILMF largely outperforms NRLMF indicating it can handle the new drug scenario better than NRLMF (see Table 3). Under the setting CVC (i.e., new targets, see Fig. 1B), DNILMF also consistently outperforms NRLMF (see Table 4). By comparing various settings for DTI predictions, it is evident that CVP is the easiest case for DNILMF, since more known information is available to train a model compared to the settings of CVR and CVC. It can also be noted that for the datasets with more samples (e.g., Enzymes and IC), the AUPR and AUC metrics from CVC in DNILMF are better than those from CVR. By contrast, for the datasets with less samples (e.g., GPCR and NR), the AUPR and AUC values from CVR are better than those from CVC. The phenomenon can basically be confirmed by NRLMF except that for the GPCR dataset, NRLMF presents better AUPR and AUC values from CVC than those from CVR. Under the settings of CVR and CVC, the decreased performance is due to the fact that there exists less known information in the training phase and the obtained latent variables for new drugs/targets may not be accurate 17 . Among the four types of scenarios, the most difficult case for DTI predictions is Scenario 4 (i.e., new drug -new target, see Fig. 1D), which may be generated during cross-validation. Taking the setting CVP as an example, in the course of cross-validation whereas Table 3. The comparison of DNILMF with NRLMF using 5 trials of 10-fold cross-validation based on the setting CVR.
datasets of training and testing are re-generated by a randomized procedure, samples of new drugs and targets may be left in the testing dataset so that the drug-target pairs fall into the new drug -new target category (see the D1-T1 pair in Fig. 1D). We compare DNILMF with NRLMF (it is derived from our implementation using the R software 27 , which is slightly different from the original one) in such a difficult case. We take the GPCR data under the setting CVP as an example and run 5 times of "5 trials of 10-fold cross-validation". As a result, DNILMF gives AUPR of 0.633 ± 0.025 and AUC of 0.897 ± 0.004, while NRLMF exhibits AUPR of 0.385 ± 0.006 and AUC of 0.706 ± 0.008 indicating that DNILMF has an advantage in making DTI predictions for new drug -new target pairs over NRLMF. The results are obtained based on the default parameters for both algorithms. Our source code shows the detailed process. We argue that the better performance may benefit from the diffused kernels.
To validate this, we plug the diffused kernels into another popular DTI prediction algorithm, KBMF 28 . We run KBMF with the default parameters except that the number of latent variables is set to 20. We take the NR data as an example for computational consideration and run the algorithm, under the setting CVP with 5 trials of 10-fold cross-validation, KBMF gives AUPR of 0.514 ± 0.026 and AUC of 0.883 ± 0.012 when using similarity matrices just from the structure information (i.e., K c and K p in steps 2 and 3 shown in Fig. 2). When plugging the diffused kernels (i.e., S d and S t in steps 2 and 3 shown in Fig. 2), KBMF gives AUPR of 0.643 ± 0.017 and AUC of 0.919 ± 0.012. Undoubtedly, the diffused kernels play a critical role in the performance improvement for KBMF. The detailed comparison is given in our source code. Besides testing with the commonly used benchmark dataset, we also validate our algorithm with an additional benchmark dataset compiled by Kuang et al. 22 , which is a larger dataset with 3,681 known interactions including 786 drugs and 809 targets used together in an eigenvalues transformation technique (denoted by EigenTrans) to boost the prediction accuracy of DTI. As shown in Table 5, our algorithm outperforms EigenTrans by around 2% in terms of AUC, and more significantly by 10% in terms of AUPR based on the setting CVP as used in EigenTrans. In summary, the proposed DNILMF algorithm shows better performance in comparison to the state-of-the-art approaches based on the benchmark datasets under the all four types of scenarios.
Influence of parameters.
It should be pointed out that all obtained DNILMF results described above are based on the empirical setting of parameters. However, the optimal performance of most algorithms depends on the parameter settings. Thus, we vary six parameters and investigate their influence on the performance of DNILMF. The number of latent variables (numLatent) is changed from 30 to 100 incremented by 10 at a step. The augmented number for known interaction pairs (c) is changed from 3 to 10 incremented by 1 at a step. The coefficient of latent matrix product, α, is changed from 0 to 1 incremented by 0.1 at a step. The λ u and λ v , regularized coefficients of latent variables for drugs and targets, are changed from 1 to 10 incremented by 1 at a step, respectively. The number of neighbors (K) for inferring new drug/target profiles and smoothing new drug/target predictions is changed from 1 to 10 incremented by 1 at a step. Herein, we only change one parameter at a time while fixing others at the default parameters (i.e., numLatent = 50, c = 5, α = 0.5, λ u = 5, λ v = 1, K = 5). Thus, under the setting CVP and taking the GPCR data as an example, we finally obtain AUPR of 0.853 and AUC of 0.979 based on the optimal parameters (i.e., numLatent = 90, c = 6, α = 0.4, λ u = 2, λ v = 2, K = 2). Evidently, the tuned parameters boost the performance of DNILMF comparing to the results from the default parameters, i.e., AUPR of 0.812 and AUC of 0.975. It should also be emphasized that if one explores the parameter space largely using techniques such as genetic algorithm, the model performance and efficiency of hyper-parameter optimization may be further improved. However, it is worthwhile to point out that, even without parameter optimization, the obtained results have already exhibited better performance than those from the state-of-the-art algorithms, which is the reason that we take the quicker path for parameter tuning rather than taking the approach for an exhaustive search to explore the entire parameter space and the utmost optimal combination. Prediction and validation of new compiled DTI dataset. To enhance the diversity of benchmark datasets and facilitate more rigorous assessment for DTI prediction algorithms, we have compiled a new DTI dataset with PubChem CID identifier for drugs and UniProt identifier for targets. First, we obtain the mapping (denoted by CID-DBID) for CID (PubChem Compound ID) and DBID (DrugBank drug ID) from PubChem (https://pubchem.ncbi.nlm.nih.gov/), publicly available biological and chemical information database, and manually inspect the obtained file to make sure that all the CID-DBID mappings are on the one to one basis. We then extract the approved drug-target interaction information from DrugBank 21 (released on April 20 2016) and we only keep the small molecule drugs which are mapped to CID. For the protein sequence file, we use the FASTA format of sequences provided by DrugBank, which are approved target polypeptide sequences (released on April 20 2016). We keep the sequences of Homo sapiens only and have the obsoleted ones removed. At this point, a total of 5,249 known drug-target interactions annotated by DrugBank are obtained. A few filters are applied subsequently to the drug molecules for the consideration of data consistency, including removing mixture drugs and drug molecules with molecule weight falling out of the range of 150 to 500 Dalton. For the target sequences, we keep those with the number of amino acids in the range of 100 to 900. Several duplicated interactions (e.g., interactions from the same CID and target sequence pair) are also removed. Finally, a new dataset is compiled with 3,688 known interactions consisting of 829 unique drugs and 733 unique targets, which is summarized together with the other two benchmark datasets in Table 1, and the detailed information for the new compiled dataset is provided in the Supplementary Dataset. A sparsity value (known interactions divided by all possible interaction pairs) is calculated for each dataset. From Table 1, one can notice that the dataset from Yamanishi et al. 11 has higher sparsity values due to that the targets are classified into four sub-groups. On the contrary, the datasets from Kuang et al. 22 and ours use the interaction information from DrugBank as a whole without sub-setting, which leads to a lower sparsity value (0.006 for both, see Table 1). In fact, a sparser dataset (with lower sparsity values) will make the prediction more challenging. Based on the new compiled dataset, we apply our proposed algorithm (all parameters are fixed to the default values) to perform DTI predictions. Herein, we calculate the similarity matrix for drugs using the Tanimoto coefficient based on two classes of fingerprints (PubChem fingerprint (denoted by pcfp) and a path-based fingerprint (denoted by fp2)) using the R software 27,29,30 . For targets, we also obtain two kinds of similarity matrices based on the clustal omega software (denoted by clusto) 31 and the spectrum kernel (denoted by kmer3, one parameter kmers is set to 3) 32 . It should be pointed out that clusto generates distant matrix (denoted by distM), and the result of (1 -distM) is calculated to obtain the corresponding similarity matrix. Thus, four classes of combinations (i.e., fp2-clusto, fp2-kmer3, pcfp-clusto and pcfp-kmer3) are formed for testing the algorithm based on the setting CVP, and the respective results obtained are shown in Table 6. It can be noted that despite of the extreme sparsity of the dataset, the model performance is still satisfactory with AUC and AUPR of more than 0.970 and 0.772, respectively. Among them, the pcfp-kermer3 combination gives the best result with AUC of 0.972 and AUPR of 0.775. Our previous study also showed that the pcfp-kmer3 combination generated the better results 15 .
Since the DNILMF algorithm combined with pcfp-kmer3 gives the best results for the new compiled dataset, we in the following take this test as an example to further analyze the results in a greater detail by looking into the novel predictions. Table 7 lists the top 5 predicted interactions (i.e., interactions not indicated in the new compiled dataset) sorted in descending order of the prediction scores. The top one predicted interaction occurs between DB00370 (Mirtazapine) and P08908 (5-hydroxytryptamine receptor 1A, 5HR1A), a membrane protein, with a prediction score of 0.921. Mirtazapine, with a tetracyclic chemical structure, is an antidepressant used for the treatment of moderate to severe depression. Originally, Mirtazapine interacts with 22 targets as reported in the DrugBank database (see Supplementary Dataset). Here, the DNILMF algorithm predicts that it may also interact with 5-hydroxytryptamine receptor 1A (5HR1A). To validate the predicted interaction between Mirtazapine and 5HR1A, we search PubChem using this drug (CID 4205) and notice that PubChem BioAssay ID (AID) 438555 derived from the in-silico work of Langham et al. 33 reports a positive result regarding Mirtazapine's binding with 5HR1A. The second top prediction with a score of 0.906 is formed between Flunitrazepam (DB01544) and Gamma-aminobutyric acid receptor subunit alpha-1 (GARSA1, ion channel). Flunitrazepam consists of a benzodiazepine with pharmacologic actions similar to diazepam that can cause anterograde amnesia. Due to the fact that it may precipitate violent behavior, the US government has banned the importation of this drug. Having six known interactions in the compiled dataset, it is predicted to form interaction with another target, GARSA1. In fact, the prediction can be supported by the experimental result from Collins and co-workers 34 with data reported in PubChem BioAssay AID 72640. DB0036 (Clozapine) interacts with 26 targets as reported in the DrugBank database. Herein, it is predicted to interact with Dopamine D5 receptor (DD5R), a member of the GPCR 1 family, with a score of 0.903. The experimental study 35 and the data in PubChem AID 392466 confirm our prediction. The fourth predicted interaction occurs between Methysergide (DB00247) and 5-hydroxytryptamine receptor 1D (5HR1D), a GPCR 1 family. Methysergide is used prophylactically in migraine and other vascular headaches and used to antagonize serotonin in the carcinoid syndrome, which forms 8 interactions with targets in the compiled dataset. Our prediction is supported by the result reported in T3DB 36 (T3D2726). Loxapine, an antipsychotic agent used in schizophrenia, forms 32 interactions in the compiled dataset. It is predicted to interact with 5-hydroxytryptamine receptor 2B (5HR2B), a GPCR 1 family. The prediction result is consistent with the study by Alaimo and co-workers 37 that ranked the prediction score between Loxapine and 5HR2B at the seventh position out of all 117 pairs.
Discussion
Various methods have been proposed to perform DTI predictions such as similarity-based methods, conventional machine learning methods as well as matrix factorization-based methods. Among them, MF (matrix factorization)-based ones have shown the best prediction accuracy according to the recently reported work by Liu and co-workers 17 . They used the neighborhood regularized logistic matrix factorization (NRLMF) approach to perform DTI predictions based on the benchmark dataset 11 . The strength of NRLMF is contributed by: (1) the logistic function used; (2) the augmented known DTI pairs; (3) neighbor-based regularization; and (4) neighbor-based inference at the prediction step. In this work, we first take advantage of some of the strength in NRLMF. We also re-formulate the objective function by adding the network regularization into the logistic function to determine the predicted scores. More importantly, we employ the nonlinear diffusion technique among similarity matrices, which is less exploited in the past except in our recent work 15 . As a result, predictions are significantly improved. The underlying idea in our proposed objective function lies in the fact that similar drugs (or targets) may contribute to the accuracy of the predictions for their neighbors.
In fact, the recommender systems based on the social networks have proposed the idea called "Social Trust Ensemble" 25 . Indeed, progress for one field may be accelerated by "borrowing" ideas, concepts or theories from a different discipline. To the best of our knowledge, it is the first time to incorporate the "Trust Ensemble" idea to the drug-target prediction subject in this work. It is worthy of mentioning that strategy development for constructing metrics to boost the model performance is an important research subject to be studied in different fields such as neural image 38 . Two categories of combination methods are often used to obtain ultimately learned metrics with better prediction ability. One is derived from supervised multiple kernel learning, and the other is unsupervised learning. The latter one is easier and flexible to combine with other algorithms since it can be obtained before the model building step, while the former should be integrated with the model learning process. Thus, unsupervised algorithms are often adopted by researchers in the medicinal and computational chemistry fields due to the simplicity and easy implementation 15 . In the previous studies of DTI predictions, a simple linear combination of multiple similarity matrices was often used. Although the combination improved the prediction accuracy compared to those models derived from a single similarity matrix just based on the structure/sequence information, we argue that such a simple linear combination may not always be appropriate due to the possible nonlinear relationship among the similarity metrics. Thus, nonlinear combination technologies should be employed to extract the proper information from different metrics. Kernel diffusion is one of the nonlinear techniques to effectively extract and combine the rich information in different similarity metrics, which augments the usage of the most important information while suppressing the signal from the least useful information through a complementary diffusion process 16 . The technique has been successfully applied to various fields such as genomic research 16 . However, little attention is paid to this advanced technique in the DTI prediction area except in a previous work from our group 15 . Thus, to further explore the technique, we employ the nonlinear diffusion procedures in this work to combine similarity matrices for drugs and targets leading to the final and optimized matrices which contain the most powerful information. Such a nonlinear diffusion technique has proven to play a critical role in improving the model performance. It should also be pointed out that the neighbor information based post-processing of prediction scores for the new drugs/targets, which also takes advantage of the diffused similarity matrices, is also important for model performance enhancement. In summary, a new algorithm, DNILMF, is developed with improved DTI predictions in comparison to the previous studies. The gained performance in the current work is contributed not only by the proposed dual-network integrated logistic matrix factorization function, but also, and even more importantly, by the advanced nonlinear diffusion technique. Therefore, we hope that the nonlinear combination technique can be extensively explored in the DTI prediction field, and we plan to explore other diffusion algorithms in future work with the adapted weight for different similarity metrics. We also compile a new dataset to increase the diversity of benchmark datasets in the field. We believe the current work will increase research productivity toward drug repositioning and polypharmacology. | 8,279.6 | 2017-01-12T00:00:00.000 | [
"Computer Science",
"Medicine"
] |
Algebraic Integers as Chromatic and Domination Roots
Let G be a simple graph of order n and λ ∈ N. A mapping f : V G → {1, 2, . . . , λ} is called a λ-colouring of G if f u / f v whenever the vertices u and v are adjacent in G. The number of distinct λ-colourings of G, denoted by P G, λ , is called the chromatic polynomial of G. The domination polynomial of G is the polynomial D G, λ ∑n i 1 d G, i λ , where d G, i is the number of dominating sets of G of size i. Every root of P G, λ and D G, λ is called the chromatic root and the domination root of G, respectively. Since chromatic polynomial and domination polynomial are monic polynomial with integer coefficients, its zeros are algebraic integers. This naturally raises the question: which algebraic integers can occur as zeros of chromatic and domination polynomials? In this paper, we state some properties of this kind of algebraic integers.
Introduction
Let G be a simple graph and λ ∈ N. A mapping f : V G → {1, 2, . . ., λ} is called a λ-colouring of G if f u / f v whenever the vertices u and v are adjacent in G.The number of distinct λ-colourings of G, denoted by P G, λ , is called the chromatic polynomial of G.A zero of P G, λ is called a chromatic zero of G.For a complete survey on chromatic polynomial and chromatic root, see 1 .
For any vertex v ∈ V , the open neighborhood of v is the set N v {u ∈ V | uv ∈ E}, and the closed neighborhood is the set N v N v ∪ {v}.For a set S ⊆ V , the open neighborhood of S is N S v∈S N v , and the closed neighborhood of S is N S N S ∪ S. A set S ⊆ V is a dominating set if N S V , or, equivalently, every vertex in V \ S is adjacent to at least one vertex in S.An i-subset of V G is a subset of V G of cardinality i.Let D G, i be the family of dominating sets of G which are i-subsets and let d G, i |D G, i |.The polynomial D G, x We denote the set of all roots of D G, x by Z D G, x .For more information and motivation of domination polynomial and domination roots, refer to 2-6 .
We recall that a complex number ζ is called an algebraic number respectively, algebraic integer if it is a zero of some monic polynomial with rational resp., integer coefficients see 7 .Corresponding to any algebraic number ζ, there is a unique monic polynomial p with rational coefficients, called the minimal polynomial of ζ over the rationals , with the property that p divides every polynomial with rational coefficients having ζ as a zero.The minimal polynomial of ζ has integer coefficients if and only if ζ is an algebraic integer.
Since the chromatic polynomial and domination polynomial are monic polynomial with integer coefficients, its zeros are algebraic integers.This naturally raises the question: which algebraic integers can occur as zeros of chromatic and domination polynomials?
In Sections 2 and 3, we study algebraic integers as chromatic roots and domination roots, respectively.
As usual, we denote the complete graph of order n and the complement of G, by K n and G, respectively.
Algebraic Integers as Chromatic Roots
Since chromatic polynomial is monic polynomial with integer coefficients, its zeros are algebraic integers.An interval is called a zero-free interval for a chromatic domination polynomial, if G has no chromatic domination zero in this interval.It is well known that −∞, 0 and 0, 1 are two maximal zero-free intervals for chromatic polynomials of the family of all graphs see 8 .Jackson 8 showed that 1, 32/27 is another maximal zerofree interval for chromatic polynomials of the family of all graphs and the value 32/27 is best possible.
For chromatic polynomials clearly those roots lying in −∞, 0 ∪ 0, 1 ∪ 1, 32/27 are forbidden.Tutte Also they extended this result to show that φ 2n and all their natural powers cannot be chromatic zeros, where φ n is called n-annaci constant 12 .
For some times it was thought that chromatic roots must have nonnegative real part.This is true for graphs with fewer than ten vertices.But Sokal showed the following.Theorem 2.2 see 13 .Complex chromatic roots are dense in the complex plane.
Theorem 2.3. The set of chromatic roots of a graph G is not a semiring.
Proof.The set of chromatic roots is not closed under either addition or multiplication, because it suffices to consider α α * and αα * , where α is nonreal and close to the origin.Theorem 2.4.Suppose that a, b are rational numbers, r ≥ 2 is an integer that is not a perfect square, and a − |b| √ r < 32/27.Then a b √ r is not the root of any chromatic polynomial.
Proof.If λ a b √ r is a root of some polynomial with integer coefficients e.g., a chromatic polynomial , then so is λ * a − b √ r.But λ or λ * cannot belong to −1, 0 ∪ 0, 1 ∪ 1, 32/27 , a contradiction.We know that for every graph G with edge e xy, P G, λ P G e, λ P G • e, λ , where G • e is the graph obtained from G by contracting x and y and removing any loop.By applying this recursive formula repeatedly, we arrive at where b i 's are some constants and Let us recall the definition of join of two graphs.The join of two graphs G 1 and G 2 , denoted by Theorem 2.6 see 14 .Let G 1 and G 2 be any two graphs with P G i , λ expressed in factorial form, i 1, 2. Then where ⊗ is called umbral product, and acts as powers (i.e., λ i ⊗ λ j λ i j .
Here we state and prove the following theorem.
Theorem 2.7.For any graph H, Proof.It suffices to prove it for n 1. Assume that P H, λ i≥1 b i λ i .By Theorem 2.6,
2.5
Here we state and prove the following theorem.
Theorem 2.8.If α is a chromatic root, then for any natural number n, α n is a chromatic root. Proof.Since we have the result.
By Theorem 2.1, τ and τ 1 τ 2 are not chromatic roots.However τ 3 is a chromatic root see Theorem 2.14 .Therefore by Theorem 2.8 we have the following corollary.
Corollary 2.9.For every natural number n ≥ 3, τ n is a chromatic root.
There are the following conjectures.
Conjecture 2.10 see 15 .Let α be an algebraic integer.Then there exists a natural number n such that α n is a chromatic root.Conjecture 2.11 see 15 .Let α be a chromatic root.Then nα is a chromatic root for any natural number n. Definition 2.12 see 15 .A ring of cliques is the graph R a 1 , . . ., a n whose vertex set is the union of n 1 complete subgraphs of sizes 1, a 1 , . . ., a n , where the vertices of each clique are joined to those of the cliques immediately preceding or following it mod n 1. Theorem 2.13 see 15 .The chromatic polynomial of R a 1 , . . ., a n is a product of linear factors and the polynomial
2.7
We call the polynomial in Theorem 2.13 the interesting factor.
Remark 2.15.We observed that τ n is a chromatic root for every n ≥ 3. Also we saw that τ 1 is not a chromatic root, but we do not know whether τ 2 is a chromatic root or not.Therefore this remains as an open problem.
Algebraic Integers as Domination Roots
For domination polynomial of a graph, it is clear that 0, ∞ is zero-free interval.Brouwer 16 has shown that −1 cannot be domination root of any graph G.For more details of the domination polynomial of a graph at −1 refer to 17 .We also have shown that every integer domination root is even 18 .
Let us recall the corona of two graphs.The corona of two graphs G 1 and G 2 , as defined by Frucht and Harary in 19 , is the graph G G 1 • G 2 formed from one copy of G 1 and |V G 1 | copies of G 2 , where the ith vertex of G 1 is adjacent to every vertex in the ith copy of G 2 .The corona G • K 1 , in particular, is the graph constructed from a copy of G, where for each vertex v ∈ V G , a new vertex v and a pendant edge vv are added.
Here we state the following theorem.
Theorem 3.1 see 2 .Let G be a graph.Then D G, x x n x 2 n if and only if G H • K 1 for some graph H of order n.
By above theorem there are infinite classes of graphs which have −2 as domination roots.Since −1 is not domination root of any graph, so we do not have result for domination roots similar to Theorem 2.8.Also we think that the following conjecture is correct.Conjecture 3.2 see 18 .If r is an integer domination root of a graph, then r 0 or r −2.Now we recall the following theorem.
The following corollary is an immediate consequence of above theorem.The following theorem state that −τ cannot be a domination root.Here we will prove that −τ n for odd n, cannot be a domination root.We need some theorems.
Theorem 3.8 see 20 .For every natural number n,
3.1
Corollary 3.9.For every natural number n Proof.This follows from Theorem 3.8.Now, we recall the Cassini's formula.
Using this formula, we prove another property of golden ratio and Fibonacci numbers which is needed for the proof of Theorem 3.13.Theorem 3.11.
3.4
Proof.Suppose that n is even, therefore n − 1 is odd, and by Corollary 3.9, we have Hence, and by multiplying F n in this inequality, we have 3.8 By Theorem 3.10, we have 3.9 Hence, for even n,
3.10
Similarly, the result holds when n is odd.
Corollary 2 . 5 .
Let b be a rational number, and let r be a positive rational number such that √ r is irrational.Then b √ r cannot ba a root of any chromatic polynomial.
Theorem 3 . 5 . 5 / 2 >Theorem 3 . 6 .Corollary 3 . 7 .
−τ cannot be a domination root.Proof.Let G be any graph.Since D G, −τ is a polynomial with integral coefficients, we have D G, 0, a contradiction.The following theorem is similar to Theorem 3.6 for domination roots.Suppose that a, b are rational numbers, r ≥ 2 is an integer that is not a perfect square, and a − |b| √ r < 0. Then −a − b √ r is not the root of any domination polynomial.Proof.If λ −a−b √ r is a root of some polynomial with integer coefficients e.g., a domination polynomial , then so is λ * −a b √ r.But λ * ∈ 0, ∞ , a contradiction.Let b be a rational number, and let r be a positive rational number such that √ r is irrational.Then −|b| √ r cannot ba a root of any domination polynomial.
Theorem 3.12 see 20, page 78 .For every n ≥ 2,τ n F n τ F n−1 n ≥ 2 .Now we are ready to prove the following theorem.Let n be an odd natural number.Then −τ n cannot be domination roots. | 2,790.2 | 2012-05-14T00:00:00.000 | [
"Mathematics"
] |
Haze Pollution Levels, Spatial Spillover Influence, and Impacts of the Digital Economy: Empirical Evidence from China
: With the development of digital technologies such as the Internet and digital industries such as e-commerce, the digital economy has become a new form of economic and social development, which has brought forth a new perspective for environmental governance, energy conservation, and emission reduction. Based on data from 30 Chinese provinces from 2011 to 2018, this study applies the space and threshold models to empirically examine the digital economy’s influence on haze pollution and its spatial spillover. Furthermore, it investigates the spatial diffusion effect of regional digital economic development and haze pollution by constructing a spatial weight matrix. Subsequently, an instrumental variable robustness test is performed. Results indicate the following: (1) Haze pollution has spatial spillover effects and high emission aggregation characteristics, with haze pollution in neighbouring provinces significantly aggravating pollution levels in the focal province. (2) China’s digital economy has positively impacted haze pollution, with digital economic development having a significant effect (i.e., most prominent in eastern China) on reducing haze pollution. (3) Changing the energy structure and supporting innovation can restrain haze pollution, and the digital economy can reduce the path mechanism of haze pollution through the mediating effect of an advanced industrial structure. It shows a non-linear characteristic that the influence of haze reduction continues to weaken. Thus, policymakers should include the digital economy as a mechanism for ecologically sustainable development in haze pollution control.
Introduction
Since China's reform and opening up, factor cost advantages have enabled the nation to achieve rapid economic development. However, this long-term and extensive economic development model has caused severe environmental pollution. As haze effects are wideranging, long-lasting, and difficult to treat, this form of air pollution has attracted extensive attention from many researchers. Many studies show that severe haze pollution greatly harms people's physical and mental health and reduces life expectancy, and the resulting welfare cost hinders sustainable economic development [1][2][3][4]. Thus, haze pollution detracts from improvements to health, living standards, and quality of economic development, making its effective control a priority.
Scholars have studied the influence of haze on different aspects, such as the economy [5,6], population [7][8][9][10][11], and energy [12][13][14][15]. The existing research has comprehensively explored the mechanism of haze pollution. However, technological and industrial revolutions, global warming, water pollution, air pollution [16,17], and other environmental problems have occurred frequently. Thus, cloud computing, 5G, artificial intelligence, big data, and other digital technologies attempt to break the information asymmetry, and they are expected to play an important role in global environmental governance [18][19][20]. Moreover, the low-cost, high-efficiency digital economy industry has witnessed constant development; as a consequence, many new industries have appeared. The transformation and upgrading of traditional industries have been accelerated, particularly as the Chinese government has been making efforts to coordinate environmental protection and economic development. At the national level, the digital economy is becoming increasingly important for societal development. According to China's Digital Economy Development White Paper [21], the digital economy grew by 15.6% annually to 35.8 trillion yuan in 2019 or 36.2% of the gross domestic product (GDP). Societies worldwide are moving toward rapid optimal allocation and regeneration of resources through the digital industry. This is reflected, for example, in the 'Made in China 2025' strategy and the 'Industrial Internet' in the United States. The influence of emerging industries on environmental governance can be analysed through the identification, selection, filtering, storage, and use of big data.
Whether digital technology can improve environmental pollution is related to whether digitalisation can help reduce both energy consumption and the cost of environmental governance. The previous literature has studied the overall association between economywide energy consumption and information and communication technologies (ICTs). Some scholars argue that ICT has reduced the demand for energy through energy efficiency and sectoral changes. Schulte et al. [22] found that in the Organisation for Economic Cooperation and Development (OECD) countries, 'a 1% increase in ICT capital results in a 0.235% reduction in energy demand'. This is not due to a decrease in electricity consumption but a decline in other non-electric energy sources, possibly arising from the direct impact of ICTs and services on electricity and the indirect impact on non-electric energy carriers in other parts of the economy. ICTs can enrich environmental quality through dematerialisation of production, thereby supporting a less resource-intensive and lightweight economy [23,24]. Ren et al. [25] used the provincial data, systematic GMM method, and intermediate effect model of China from 2006 to 2017 to demonstrate that the relationship between Internet development and energy consumption structure has a negative impact. However, some scholars believe that ICT application will increase energy consumption due to the 'rebound effect' [26]; Zhou et al. [27] analysed the carbon emissions at the industry level in China by using the input-output method; the ICT sector can induce a large amount of emissions by requiring carbon-intensive intermediate inputs from non-ICT sectors. In other words, the application of ICT does not significantly improve the environment and may even worsen environmental problems. Some scholars believe that this influence is not good or bad. Noussan and Tagliapietra [28] forecasted the future European scenario and analysed the potential impact of digital technologies such as the Internet of Things on energy consumption and carbon dioxide emissions in the transportation field. The impact on green sustainability depends on user behaviour, economic conditions, transport, and environmental policies.
Information asymmetry is another challenge in environmental governance. It not only increases environmental governance costs and weakens the effectiveness of environmental policies, but it also leads to a lack of regulatory bodies in environmental governance and reduces the public's enthusiasm for environmental governance. In 2016, China launched an ecological and environmental protection big data service platform as part of the Belt and Road Initiative. 'Internet +', big data, remote sensing satellites, and other information technologies provide environmental information support to China and other countries along the initiative. The Internet's openness, interactivity, and real-time nature make public participation in environmental governance both possible and convenient [29]. Moreover, the Internet promotes environmental supervision, management, intelligence, accurate services, and rectifies previous environmental governance deficiencies [30,31]. Zuo et al. [32] made recommendations to adopt IOT technology to dynamically collect real-time product data related to energy consumption to improve energy efficiency and large-scale utilisation of clean energy. Li et al. [33] empirically concluded that digital technology promotes environmental sustainability in Chinese manufacturing.
Simultaneously, the digital economy is reshaping the global value chain. According to the 'smiling curve' theory, high added value is located at both tails of the curve, representing the upstream (pre-production research and development) and downstream (post-production services) of the value chain. Processing and assembly activities are located at the midpoint of the curve, indicating little added value [34]. In the past, China's manufacturing sector embraced economies of scale for profitability with high volume, low-value production that also created severe air pollution. As the energy factor shifts from the industrial to the service sector, growth in the more energy-efficient sectors will reduce emissions; consequently, the overall economy will be more energy efficient [35][36][37]. Original elements and resources are transferred from industries with low distribution efficiency to technology-intensive industries with high distribution efficiency [38]. Thus, upgrades to the industrial structure would have a substantial impact on pollution.
Additionally, a characteristic of the digital economy is the physical sharing of information. Spatial changes have completely overhauled logistics links, resulting in the emergence of new industries, such as e-commerce, which is witnessing rapid growth due to the high penetration of the Internet and the large numbers of mobile users [39]. E-commerce can improve environmental pollution, as it significantly reduces information search costs and product prices and does a better job of matching. Thus, these supply-demand resources significantly reduce transportation and distribution costs, require less energy consumption, and reduce carbon dioxide emissions compared to in-person shopping [40]. E-commerce can also significantly optimise the corporate structure and management, thereby improving production efficiency [41]. The digital economy changes the smile curve, reconstructs the industrial value chain, and realises green development under the value chain sharing economy.
By reviewing the previous literature, we find that, first, the existing literature discusses the impact of digitisation on carbon emissions, SO 2 emissions and energy consumption through the use of the Internet, output value proportion of the tertiary industry, and investment in the ICT industry as proxy indicators. It is worth noting that the digital economy has received more and more attention, while little empirical research has been conducted to explore whether the development of the digital economy can improve air pollution in China. Second, previous studies have always carried out regression analysis on ordinary panels or dynamic panels, ignoring the spatial correlation and spatial spillover effect of haze pollution. In reality, the diffusion of haze between different regions will lead to spatial correlation and spatial dependence. In spatial econometrics, neglecting spatial effects may lead to errors in estimation and analysis. In a digital environment, search costs are lower, which increases the potential scope and quality of the search. Digital products are often not competitors; that is, they can be replicated at zero cost. As the cost of transporting digital goods and information approaches zero, the role of geographical distance is also expected to change. Digital technology makes it easier to track behaviour [32], and the digital economy containing the above characteristics undoubtedly brings a new perspective for environmental governance. Therefore, we must ask, what impact does the digital economy have on haze pollution? Moreover, what are the channels through which this influence is generated? To answer the above questions, we empirically test the effects of the digital economy on haze pollution and its spatial spillover using data from 30 provinces in China. This study aims to provide insights into the potential impact of the digital economy on future environmental governance. It argues that to take full advantage of the digital economy in environmental sustainability, it is necessary to adopt appropriate policies, support efficient deployment, and shape the digital process politically and socially [42].
This study's main contributions are as follows: First, we construct the second-level indicators of digital infrastructure (representing digital technology) and digital industry (representing emerging industries) and evaluate the development of the digital economy using the entropy weight method. Second, from the perspective of spatiotemporal evolution characteristics of the digital economy and haze pollution, the relationship between them is discussed using the spatial model, filling the gap between the digital economy and ecological geography. Third, accurately solving the two-way causality between haze pollution and the digital economy leads to endogeneity problems. Two methods were used to test the robustness: the replacement of spatial matrix, and the construction of instrumental variables. The number of telephones per 10,000 people per city in 1984 further confirms the robust results of our quantitative research. Finally, this study discusses the mechanism of digital economy influencing haze pollution through industrial structure change using the threshold model.
Construction of the Spatial Weight Matrix
The spatial weight matrix reflects the spatial interaction between different regional research samples. Spatial statistical analysis begins with the establishment of a spatial weight matrix. In this study, we set up a spatial weight matrix with sample size n. All elements of W ij are i, j = 1, ···, n, and the 0-1 adjacency weight matrix (W1) is expressed as where W ij = W ji ; at W ij = 0, location j is not a neighbour of location I; and at W ij = 1, location j is the neighbour of location i, where i = 1, 2, . . . , n; j = 1, 2, . . . , n. At W ij = 1, province i has a common boundary with province j; otherwise, W ij = 0.
Then, we constructed the weight matrix of geographical distance (W2), which is expressed as follows: Let the distance between the geographic centres of province i and province j be d ij ; the latitude and longitude of geographic centre point A of province i be β 1 and α 1 , respectively; and the latitude and longitude of geographic centre point B of province j be β 2 and α 2 , respectively. The Earth's radius is:
Spatial Autocorrelation Analysis
For a comprehensive investigation of the spatial spillover effect of haze pollution and the digital economy, we use the global and local spatial correlation indexes. First, we test whether the research object has a spatial effect by conducting a spatial autocorrelation test for the development index of the digital economy and haze pollution. Spatial correlation analysis can measure the spatial effect of each year in the geographical distance matrix. We calculate the global Moran's index (Moran's I) as The value range of Moran's I is [−1,1]. When I > 0, a positive autocorrelation exists between the two regions. Haze pollution or the development of the digital economy is characterised by spatial agglomeration. When I < 0, a negative correlation exists between the two regions or spatial discreteness. When I = 0, the distribution of haze pollution is random, and no spatial autocorrelation exists.
Global spatial correlation analysis examines the aggregation of the entire space. Local spatial correlation analysis is used to understand the development of the digital economy within each region or the degree of correlation between the haze pollution level in the focal region and nearby regions. The local Moran's I is calculated as Here, a positive I represents some areas with high (low) values surrounded by other regions with high (low) values-either high-high (H-H) or low-low (L-L). Moreover, a negative I represents an area with high (or low) values surrounded by other areas with a low (or high) value-either high-low (H-L) or low-high (L-H).
Econometric Methodology
The following model was established: Here, DIGE i,t is an indicator of the development level of the digital economy in province i in period t; X control i,t is a series of control variables: population structure (PS), fixed assets (FA), energy situation (ES), and degree of innovation (IN) in Equation (6); µ i , refers to the individual fixed effect of province i that is time-invariant; δ t controls the time fixed effect; and ε i,t is a random perturbation term.
Spatial Autoregressive Model
Spatial correlation existed between our variables, and OLS may lead to inconsistencies in the parameter estimates. Therefore, this study introduced a spatial econometric model and analysed the influence of the digital economy on haze pollution in depth from both the space and time perspective. We selected the spatial autoregressive model (SAR) and spatial error model (SEM). The SAR is A variable is affected not only by its explanatory variable but also by variables in other spaces. Here, Y is the explained variable, X is the independent variable, α is the constant term, W is the spatial weight matrix, WY is a vector of the spatial lag dependent variable, ρ denotes a spatial regression coefficient reflecting the spatial dependence of the sample observations, and ε is a random perturbation term. Substituting Equation (6) into the test of Equation (7), we obtain the following spatial econometric model:
Spatial Error Model
Equation (9) represents the SEM. The space disturbance term is related to the space population, and the disturbance term of a particular space affects and other spaces via the space effect.
where Y is the explained variable; X is the independent variable of exogenous influencing factors; α is the constant term; ε is a random error term; β represents the influence of the independent variable on the dependent variable; λ is the unevaluated coefficient of the spatial autocorrelation error term (also known as the spatial autocorrelation coefficient); and µ is an error term. Substituting Equation (6) into the test of Equation (9), we obtain the following spatial econometric model:
Threshold Model
This study tested whether the industrial structure mediates the relationship between the digital economy and haze pollution measured as particulate matter (PM2.5). The specific steps are as follows: in the digital economy development index (DIGE), the coefficient of α 1 is significant throughout the analysis in the linear regression model (6) for haze pollution of PM2.5, based on the construction of DIGE for the mediating variable IS in the linear regression equation of the industrial structure and DIGE for IS in the regression equation of PM2.5 by β 1 , γ 1 , γ 2 . The significance of the regression coefficient determines whether a mediation effect exists. The specific form of the regression model is and ln PM2.
In addition to the mediating effect model, the empirical test for the indirect transmission mechanism should consider Metcalfe's law-the value of the Internet is proportional to the square of the number of users. The development level and industrial structure upgrading of the digital economy may also indirectly reduce the non-linear dynamic spillover of haze pollution in the digital economy. Therefore, in order to study whether the digital economy has a non-linear impact on haze pollution through the intermediary mechanism of industrial structure change, the following panel threshold model is set: In Equation (13), Adj i,t is a threshold variable such as the digital economy and industrial structure, and I (·) represents indicator functions valued at 1 or 0, which meet the conditions in the parentheses-namely 1; otherwise 0. Equation (13) . To address the lack of historical data on PM2.5 concentration levels, we used raster data from the atmospheric composition analysis group based on the annual average of global PM2.5 concentrations monitored by satellites [43]. Using ArcGIS software, we analysed the specific value of the annual mean PM2.5 concentration in Chinese provinces from 2011 to 2018. Using these data, the difficulty in using surface monitoring data based on point source data to measure the PM2.5 concentration of an area accurately was addressed.
Core Explanatory Variable
The core explanatory variable is the DIGE. With regards to the measurement of the digital economy's development level, as officials have not yet disclosed a comprehensive index of concrete information for it, the calculation faces certain difficulties and challenges [44]. Based on the method of Huang et al. [45], the present study adopted the indicators of Internet penetration rate, relevant practitioners, relevant output, and mobile phone penetration rate. Based on the 2011-2018 panel data of 30 provinces, to build the digital infrastructure and digital industry variable, this study developed secondary indices where the secondary index of digital infrastructure corresponds to mobile telephone exchange capacity (10,000 families), optical fibre cable line length (km), number of Internet broadband access ports (10,000 units), number of websites (10,000 units), popularisation rate of mobile telephones (unit/100), and number of Internet broadband access users (10,000 units). The secondary index of digital industry is number of computers per 100 people in the enter-prise, number of websites per 100 enterprises, proportion of enterprises with e-commerce transaction activities on the Internet per 100 enterprises, and proportion of e-commerce sales in the GDP. Using the entropy method, the data of these 10 indicators were processed to obtain the DIGE.
Intermediate Variable
Industrial structure (IS) is the intermediate variable. The proportion of the tertiary sector's output value indicates whether an economy has an advanced industrial structure [46]. The larger the value, the smaller the negative impact on haze pollution. Therefore, the sign of the coefficient is expected to be negative.
Control Variables
The control variables include the following: Population structure (PS). Owing to livelihood pressures, young people are more willing to risk high pollution emissions to earn higher incomes, and an increase in the proportion of the labour population aggravates haze pollution [47]. In this study, the proportion of people aged 15-64 years in the total population was used to measure the influence of total regional population distribution on haze pollution. Therefore, this study expected the coefficient sign to be positive.
Fixed assets (FA). Following Li et al. [48], FA is expressed as the total investment in fixed assets. FA investment is positively correlated with digital economy development and is an essential source of funds for promoting technological innovation. Therefore, this study expected a negative coefficient sign.
Energy situation (ES). Burning fossil fuels, especially coal, is regarded as an important source of haze pollution [49], and China is among the few countries whose energy consumption structure is dominated by coal. Therefore, the total amount of energy consumption (tons of standard coal) is used. The higher the proportion of coal consumption, the less likely it is to decrease haze concentration. We expected a positive coefficient sign.
Innovation degree (IN). IN is the number of patents granted by each province. The larger its value, the stronger the technological innovation ability, which helps improve the factor utilisation efficiency and reduce pollution emission intensity. Therefore, we expected a negative coefficient sign.
The index data for the core explanatory variables are available from the China Statistical Yearbook [50]. The index data for the intermediary and control variables are from the WIND and China Stock Market Accounting Research databases. Table 1 shows the descriptive statistics. To reduce errors and heteroscedasticity caused by different units, each variable was treated logarithmically. The results show that haze pollution varies significantly among different regions. The development index of the digital economy (lnDIGE) has a small mean and large standard error, while the standard error of the industrial structure (the mediating variable) is relatively small. Clear differences among provinces exist in terms of PS, FA, ES, and IN.
Spatio-Temporal Evolution of China's PM2.5 Concentration and Digital Economy
This study selected three cross sections of time-2011, 2014, and 2018. The spatial clustering characteristics of the digital economy development and haze pollution distribution in 30 Chinese provinces were analysed using the natural fracture method.
As illustrated in Figure 1a-c, for PM2.5 pollution, the 30 provinces showed an overall decline over the 8 years of the haze index. Maximum PM2.5 concentration by region was found in east-central China and the provinces of Shandong, Henan, Anhui, and Jiangsu, among others, in 2011, 2014, and 2018. The PM2.5 concentration in these regions was three times the smog concentration in the next highest echelon. In Hubei, Shanxi, Guangdong, Guizhou, and Chongqing provinces, PM2.5 pollution levels improved significantly, while they deteriorated in Xinjiang, Liaoning, and Gansu. This result was affected not only by geographical location and meteorological conditions but also by the provinces' social and economic development [51]. The possible reasons are as follows: (1) Most economically developed provinces have relatively high PM2.5 levels and have consequently witnessed greater efforts to control air pollution. (2) The industrial division of labour in the provinces is changing. An increase in the proportion of the tertiary sector improves air quality, while the transfer of the industrial structure aggravates haze pollution in the receiving province.
(3) In the central and western regions, which have low population density, PM2.5 pollution is not quite as severe, and little attention is paid to, or investments made for, mitigating air pollution, causing a continuous deterioration of air quality.
Spatial Autocorrelation Analysis
To accurately understand the provincial-level digital economy and haze pollution agglomeration in the country, this study analysed the variables for the provinces with PM2.5 air pollution and digital economy development. Figure 2 shows the two indicators China's three major economic belts are the bay area of the Yangtze River Delta, Guangdong province, and the Beijing-Tianjin-Hebei region, which witnessed substantial digital economy development in the first phase. Combined with other areas of the country, these form a clear core-periphery model wherein the eastern region's digital economy development index has leading areas, such as Guangdong, Jiangsu, Beijing, Shanghai, Zhejiang, Shandong, Shanxi, Shaanxi, and Guizhou. Moreover, Sichuan, Jiangxi, Anhui, Hubei, and other mid-west cities are catching up. Comparative advantage is implemented by digital economy development rotation. Simultaneously, the digital economy index reflects the imbalance and insufficiency among various regions in China [51]. The digital economy in Xinjiang, Gansu, Ningxia, and other regions in more remote areas is developing slowly, forming the bottom of the index. Thus, strengthening the Internet infrastructure construction in these areas is necessary.
Spatial Autocorrelation Analysis
To accurately understand the provincial-level digital economy and haze pollution agglomeration in the country, this study analysed the variables for the provinces with PM2.5 air pollution and digital economy development. Figure 2 shows the two indicators in the global Moran's I calculation formula: the 2011-2018 global Moran's I of the PM2.5 index, which is between 0.22 and 0.39 (p-value is 0.000-0.010, significant at 1%), with Z (I) 2.6-3.4 (Z >+ 2.58); and the global Moran's I of the digital economy, which is between 0.28 and 0.37 (p-value is 0.000-0.004, significant at 1%), with Z(I) 2.6-3.4 (Z >+ 2.58). Thus, the distribution of haze pollution and the digital economy presented significant spatial autocorrelation and had a geographical agglomeration feature. The more severe the haze pollution in the focal province, the higher the haze pollution in the neighbouring provinces. Moreover, the more advanced the digital economy in the focal province, the higher the degree of digital economy development in the neighbouring provinces. distribution of haze pollution and the digital economy presented significant spatial autocorrelation and had a geographical agglomeration feature. The more severe the haze pollution in the focal province, the higher the haze pollution in the neighbouring provinces. Moreover, the more advanced the digital economy in the focal province, the higher the degree of digital economy development in the neighbouring provinces.
LM Test
Moran's I passed the significance test. The classical OLS regression had a significant spatial correlation; therefore, a spatial econometric model should be used for parameter estimation. As presented in Table 2, both LM-Lag and LM-Error passed the 1% significance level of the spatial dependence test. According to the criteria, LM-Lag and LM-Error should pass the significance test, and the lag and robust LM-Error should pass the 1% significance test. Thus, the spatial lag model and the SEM were used to estimate the regression. We introduced the neighbouring weighting matrix to the model and analysed the regression results.
LM Test
Moran's I passed the significance test. The classical OLS regression had a significant spatial correlation; therefore, a spatial econometric model should be used for parameter estimation. As presented in Table 2, both LM-Lag and LM-Error passed the 1% significance level of the spatial dependence test. According to the criteria, LM-Lag and LM-Error should pass the significance test, and the lag and robust LM-Error should pass the 1% significance test. Thus, the spatial lag model and the SEM were used to estimate the regression. We introduced the neighbouring weighting matrix to the model and analysed the regression results.
Regression Results and Discussion
As shown in Table 3, in the SAR estimation with a time fixed effect, the estimated value of ρ was 0.2, significant at the 5% level. This value indicates that neighbouring regions have a significant positive spatial spillover effect on PM2.5; an increase of 1% in PM2.5 concentration in neighbouring provinces leads to an increase of approximately 0.2% in PM2.5 concentration in the focal province. Thus, maintaining a province's particular approach to haze treatment cannot effectively solve inter-regional haze pollution. Consequently, transforming local treatment to regional joint prevention and control is necessary. In the SEM, the λ value was significant at the 10% level. This indicates that haze concentration is affected not only by observable factors such as population structure but also by observable factors in adjacent areas. The influencing factors are discussed below. First, we focus on the effect and magnitude, of the core explanatory variable of the digital economy on haze pollution. The panel OLS, SAR, and SEM models showed that the digital economy development has a significantly negative effect on haze reduction, passing the significance test with a 99% confidence level. Every 1% increase in the development level of the digital economy reduces haze concentration in the region by approximately 0.2%. The possible reasons for this are as follows: (1) The digital economy promotes the construction of digital infrastructure through technological effects. (2) The digital economy, through structural effects, expands the proportion of digital industries, digitally empowers traditional industries, improves the energy efficiency and operational efficiency of traditional industries, promotes the rapid and efficient transformation and upgrading of traditional industries, and finally achieves low energy consumption and low emissions [53].
Second, from the perspective of energy, in the OLS and SAR models, the total energy consumption was significantly positive at 5%, which is consistent with expectations. This indicator shows a significant promoting effect on haze pollution [54]. The secondary sector includes industry and construction; the industrial consumption of coal, oil, nonferrous metals, and other raw materials creating fine dust, which is the leading cause of haze pollution. The energy structure is the key factor responsible for aggravating haze pollution. Therefore, accelerating the transformation and upgrade of this structure is urgently necessary.
The degree of innovation variable in the OLS and SEM models was significantly negative at the 5% level. The early stage of economic development involves resource consumption to expand production and satisfy people's material needs. Thus, economic development neglects environmental protection to a certain extent. Moreover, although living standards are widely improved, natural resources become constrained. These aspects of early development highlight the importance of environmental protection in the reversed transmission of technology innovation, transforming economic development patterns, and optimising economic structure adjustment [55].
The population structure variable in the three models showed inconsistencies; its coefficient was positive in the OLS model, confirming that young people are willing to accept high pollution emissions in exchange for high income; thus, an increase in the labour population can increase or aggravate haze pollution [47]. However, in the SAR and SEM models, the coefficient was negative, possibly because labour population agglomeration significantly reverses the transmission of regional environment improvement to reduce smog pollution.
The coefficients of fixed assets were all negative, indicating that fixed asset investment is positively correlated with the development of the digital economy and is an important source of funds to promote technological innovation. Among the control variables, population structure and fixed assets were statistically significant.
Test for Threshold Regression Model
From the analysis of the theoretical model (8), we observed the mechanism through which the digital economy affects haze pollution. Considering that the development of the digital economy acts on haze reduction through structural effects, this study introduced the mediation variable index of industrial structure as the threshold for a threshold effect analysis to examine the influence of different intervals of the industrial structure on haze. The form of the panel threshold model was tested first. Subsequently, we followed Hansen [56] and used the bootstrap sampling method to simulate a likelihood ratio statistic of 200, estimating the threshold value and relevant statistics. The results show that a single threshold of F statistic was significant at 5%, while the double and triple thresholds were not significant. Thus, to analyse the effect of the digital economy on haze pollution, we considered the industrial structure to be a single threshold effect and assumed that the industrial structure is the threshold variable. As shown in Table 4, the negative influence of the digital economy on haze pollution continued to weaken, and the non-linear characteristics of the negative and diminishing 'marginal effect' of the digital economy remained. This trend shows that the dynamic influence of the digital economy on haze pollution is affected not only by its development level but also by the regulating influence of the industrial structure, which is reflected in the positive interaction between the digital economy and industrial structure. However, this effect gradually weakens with the change of industrial structure.
Heterogeneity Test
Owing to different resource endowments and stages of development, both the development level of the digital economy and haze pollution have noticeable heterogeneity in terms of their regional distribution. Therefore, regional differences may exist in the impact of the digital economy on haze pollution reduction, which are necessary to consider for an in-depth discussion.
First, a descriptive statistical explanation is provided for the differences in haze pollution and digital economy development levels in various regions. As shown in Table 5, in terms of PM2.5, the logarithmic mean value of haze pollution is the lowest in western China and highest in eastern China. Thus, the eastern region is significantly ahead of the central and western regions in terms of digital economic development. The mean difference between the eastern and the middle and western regions is approximately 0.574 and 0.89, respectively, reflecting a first-mover advantage. This result sets the foundation for the regional heterogeneity test of the effects of the digital economy on haze pollution. The regression analysis of regional heterogeneity is shown in Table 6. The results of models (1), (2), and (3) show that the digital economy in eastern China has a significant effect on reducing haze pollution, while the effect is not significant in central and western China. In other words, considering regional heterogeneity, the digital economy in eastern China has a higher positive effect on haze pollution reduction. This result is possibly because the digital economy in eastern China developed earlier and was at a higher level, causing the dividend of the impact of the digital economy on environmental governance to be released more fully.
Changing the Spatial Matrix
A spatial econometric model, which is highly dependent on the spatial weight matrix, was used to study the influence of the digital economy on haze pollution. First, the neighbouring spatial weight matrix (W1) was used to determine whether the provinces are adjacent. Adjacency was set to 1, and non-adjacency to 0. Second, the robustness of the regression results was tested using the geographical distance spatial weight matrix (W2), which was constructed using the reciprocal of the square deviation of the distance between provinces.
From the test results in Table 7, we inserted the weight matrix (W2) into the spatial lag model (4) and SEM (5). We observed that the coefficient of the core variable of the digital economy was significantly negative in the SEM model. The lnDIGE regression coefficient was the largest and was significant at 1%, indicating that the digital economy's spatial influence on haze pollution is more likely to be in the error term of undetectable than the spatial correlation between the two in time. Therefore, the development of the digital economy can effectively reduce haze pollution, which is consistent with the main research results and proves that the regression results are robust.
Use of Instrumental Variable
Selecting appropriate instrumental variables for the core explanatory variables can resolve endogeneity problems. Following Huang et al. [45], this study adopts the 1984 volume of each province's post and telecommunications business as the core explanatory variable and the instrumental variable of the comprehensive index of digital economy development. The instrumental variables must satisfy exogeneity and correlation. On the one hand, with the continuous development of traditional communications technology, previous levels of the local telecommunications infrastructure affect the subsequent stage of application of Internet technology from the technical level perspective and usage habits.
On the other hand, the impact of the use of traditional telecommunications tools, such as the use of fixed-line telephones, on economic development should meet the exclusivity. As their usage frequency gradually declines with social and economic development, the instrumental variable must satisfy the conditions.
As the original data of the selected tool variable are in cross-sectional form, they cannot be used directly in the econometric analysis of panel data. Based on Nunn and Qian [57], a variable that changes over time is introduced to construct the panel tool variable. The interaction term is constructed by the number of Internet users in the last year and the number of telephones per 10,000 people in each province in 1984. This statistic is used as the instrumental variable of the digital economy index of the province in that year. The results in columns (1) and (2) of Table 8 show that the effect of the digital economy on reducing haze pollution remains valid after considering endogeneity, and the results are all significant at 1%. For the test of the null hypothesis of insufficient identification of instrumental variables, the LM statistic P values are all 0.000, which significantly rejects the null hypothesis. In the test for weak identification of instrumental variables, the Wald F statistic is greater than the threshold value above 10% of the weak identification test. In general, the tests illustrate the rationality of choosing the cross-term, between the historical postal and telecommunications volumes of various provinces, and the number of Internet users in China as the instrumental variable of digital economy development.
Conclusions
First, this study constructs an evaluation system for developing the digital economy at the provincial level in China from the two aspects of digital infrastructure (representing digital technology) and the digital industry (representing emerging industries). It calculates the development level of the digital economy in each province using the entropy weight method. Second, the spatial spillover effects of haze pollution and digital economy development are tested with the global and local spatial correlation indexes. Third, using the data of 30 provinces in China from 2011 to 2018, OLS regression and spatial SAR and SEM models were used to analyse the impact of digital economy development on haze pollution. Fourth, using the threshold model, the study discusses how the digital economy mechanism affects haze pollution through industrial structure change. Finally, the study divides the research samples into three regions (eastern, central, and western regions) to study the regional heterogeneity impact of digital economic development on haze pollution.
The findings of the present study are as follows: First, both haze pollution and digital economy distribution present significant global positive spatial spillover effects and local characteristics. Second, the digital economy has a positive impact on reducing smog. The development of the digital economy in neighbouring provinces has a significant positive spillover effect on reducing haze pollution in key provinces. The change of energy structure and innovation degree can effectively restrain the aggravation of haze pollution, and the conclusion is still valid in the robustness test using the instrumental variable method and adjusting the spatial matrix. Third, the results of the transmission mechanism show that the development of the digital economy can affect haze pollution by changing the industrial structure, showing the non-linear feature that the influence of haze reduction continues to weaken. Finally, in terms of regional differences, the impact of the digital economy on haze pollution is most significant in eastern China, while not significant in central and western China. Based on this study, the following policy recommendations are put forward.
First of all, the penetration and application of digital technology in environmental governance should be accelerated. We would increase investment in digital technologies; pay attention to the breadth and depth of applications in advanced fields such as the Internet, 5G, artificial intelligence, and big data; promote the circulation and sharing of resources, knowledge, and capital; and promote the improvement of digital economy in environmental governance, such as energy conservation and emission reduction. Second, the transformation and upgrading of industrial structure should be promoted, encouraging enterprises to vigorously develop cutting-edge technologies and promoting the continuous progress of digital industry and digitization of industry. Third, it is necessary to understand further the positive impact of the digital economy on reducing haze pollution in central and western China, indicating that a dynamic and differentiated digital economy strategy should be implemented. Finally, haze reduction policies should take into account spatial spillover and decomposition boundaries of administrative areas.
Although this study supplements the relevant research on the impact of the digital economy on haze and provides some theoretical reference for the digital economy on environmental governance, there is still room for further research. First, this paper measures the digital economy from two aspects: digital infrastructure and digital industry. Because of the existing data, it may have measurement errors. The evaluation is conducted at the provincial level, and the sample size is limited. In the future, it can be more detailed and micro, which may be more accurate in exploring the relationship between the two from the city level. Second, this study empirically analyses the spatial impact of digital economy development on haze pollution. The mechanism part is only carried out from the perspective of industrial structure, and subsequent studies should further explore the multi-dimensional impact of different mechanisms on haze. Finally, the development of the digital economy is cyclical, and each stage has a different impact on haze levels. This should be further investigated in future studies. | 9,403.8 | 2021-08-13T00:00:00.000 | [
"Environmental Science",
"Economics"
] |
Buckling in Armored Droplets
The issue of the buckling mechanism in droplets stabilized by solid particles (armored droplets) is tackled at a mesoscopic level using dissipative particle dynamics simulations. We consider spherical water droplet in a decane solvent coated with nanoparticle monolayers of two different types: Janus and homogeneous. The chosen particles yield comparable initial three-phase contact angles, chosen to maximize the adsorption energy at the interface. We study the interplay between the evolution of droplet shape, layering of the particles, and their distribution at the interface when the volume of the droplets is reduced. We show that Janus particles affect strongly the shape of the droplet with the formation of a crater-like depression. This evolution is actively controlled by a close-packed particle monolayer at the curved interface. On the contrary, homogeneous particles follow passively the volume reduction of the droplet, whose shape does not deviate too much from spherical, even when a nanoparticle monolayer/bilayer transition is detected at the interface. We discuss how these buckled armored droplets might be of relevance in various applications including potential drug delivery systems and biomimetic design of functional surfaces.
Pickering emulsions [1], i.e. particle-stabilized emulsions, have been studied intensively in recent years owning to their wide range of applications including biofuel processing [2] and food preservation [3,4]. They have also been developed as precursors to magnetic particles for imaging [5] and drug delivery systems [6]. Even with their widespread use, they remain, however, underutilized. In Pickering emulsions, particles and/or nanoparticles (NPs) with suitable surface chemistries adsorb at the droplet surfaces, with an adsorption energy of up to thousands of times the thermal energy. The characteristics of Pickering emulsions pose a number of intriguing fundamental physical questions including a thorough understanding of the perennial lack of detail about how particles arrange at the liquid/liquid interface. Other not completely answered questions include particle effects on interfacial tension [7], layering [8], buckling [9][10][11] and particle release [8,12].
In some important processes that involve emulsions, it can be required to reduce the volume of the dispersed droplets [9,[13][14][15]. The interface may undergo large deformations that produce compressive stresses, causing localized mechanical instabilities. The proliferation of these localized instabilities may then result in a variety of collapse mechanisms [8,10,11]. Despite the vast interest in particle-laden interfaces, the key factors that determine the collapse of curved particle-laden interfaces are still subject of debate. Indeed, although linear elasticity describes successfully the morphology of buckled particle-laden droplets, it is still unclear whether the onset of buckling can be explained in terms of classic elastic buckling criteria [16,17], capillary pressure-driven phase transition [9], or interfacial compression phase transition [18]. Numerous experiments have been conducted to link the rheological response of particle-laden interfaces to the stability of emulsions and foams. However, their results could be dependent on the method chosen for preparing the interfacial layer. Due to their inherent limited resolution, direct access to local observables, such as the particles three-phase contact angle distribution, remains out of reach [19]. This crucial information can be accessed by numerical simulations sometimes with approximations. All-atom molecular dynamics (MD) simulations have become a widely employed computational technique. However, all-atom MD simulations are computationally expensive. Moreover, most phenomena of interest here take place on time scales that are orders of magnitude longer than those accessible via all-atom MD. Mesoscopic simulations, in which the structural unit is a coarse-grained representation of a large number of molecules, allow us to overcome these limitations. It is now well established that coarse-grained approaches offer the possibility of answering fundamental questions responsible for the collective behaviour of particles anchored at an interface [20].
We employ here Dissipative Particle Dynamics (DPD) [21] as a mesoscopic simulation method. We study the shape and buckling transitions of model water droplets coated with spherical nanoparticles and immersed in an organic solvent. The procedure and the parametrisation details are fully described in prior work [22][23][24] and in the Supporting Information (SI). The particles are of two different types: Janus and homogeneous. They are chosen so that the initial three-phase contact angles (≈ 90 • ) result in maximum adsorption energy. The volume of the droplets is controllably reduced, pumping randomly a constant proportion of water molecules out of the droplet (more details in the SI). At every stage we remove 10 percent of the water from the droplet. Throughout this letter, E i refers to the i th removal of water, with E 0 corresponding to the initial configuration and E 20 to the final configuration. We seek FIG. 1. Sequence of simulation snapshots representing buckling processes of water in oil droplets armored with 160 spherical Janus (top) and homogeneous (bottom) nanoparticles after successive removals of water. The number of water beads removed increases from left to right with Ei refering to the i th removal. Cyan and purple spheres represent polar and apolar beads, respectively. Pink spheres represent water beads. The oil molecules surrounding the system are not shown for clarity.
to determine whether the NPs at the droplet interface buckle, causing the droplets to deviate from the spherical shape. We show that Janus particles affect strongly the shape of the droplet via the formation of a crater-like depression. This evolution is actively controlled by a closepacked particle monolayer at the curved interface. On the other hand, homogeneous particles follow passively the volume reduction of the droplet. The shape of the droplet remains approximately spherical with a nanoparticle monolayer/bilayer transition, with some NPs desorbing in water. We discriminate the two mechanisms with the evolution of their respective nanoparticle threephase contact angle distributions. While for Janus particles the distribution remains unimodal, albeit skewed when the droplet is significantly shrinked, for homogeneous particles, the evolution of the contact angle distribution becomes bimodal with some particles becoming more/less immersed in the aqueous phase. We consider a system initially made by a spherical water droplet immersed in oil, and stabilized by a sufficiently dense layer of NPs [24]. The initial shape of the droplet is spherical. The only difference between the two systems is the NP chemistry, i.e. the distribution and proportion of polar and apolar beads around the spherical particles and their efficiency in interacting with the two fluids at the interface. Janus and homogeneous NPs are designed to present comparable three-phase contact angles, θ c = (91.6 ± 2.0) • and θ c = (88.7 ± 3.5) • , respectively (cf. SI for details). We consider throughout this study the same NP density on the droplets. We calculate the radius of gyration, R GYR , and the asphericity, A s , for the droplet covered by either Janus or homogeneous NPs (cf. SI for details). For the initial configurations, we ob-tain R GYR = 13.837 ± 0.003 and R GYR = 13.860 ± 0.003, and A s = 0.156±0.05 and A s = 0.153±0.05, respectively, expressed in R C units (cf. SI for details).
In Fig.1 we show representative snapshots obtained during the simulations for systems containing Janus NPs (top panels) and homogeneous NPs (bottom panels). Visual inspection of the simulation snapshots highlights some fundamental differences between the two buckling processes. We start with spherical initial droplets (E 0 ). When the water droplet is coated with Janus particles (top), the system starts developing dimples as moderate amount of water is removed (E 2 ). The morphology then becomes more crumpled with increasing numbers of dimples (E 5 ). For stronger removal, the droplet geometry evolves to a large and smooth curved shape, yielding a crater-like depression to minimize the interfacial energy of the system (E 8 and E 20 ). During this evolution, Janus NPs remain strongly adsorbed at the interface, forming a close-packed monolayer between the two fluids.
The buckling process is fundamentally different when the water droplet is stabilized with homogeneous NPs (bottom). When the volume of the droplet is reduced, the shape of the system evolves smoothly and does not present any sharp transition to morphologies showing dimples and cups, nor crater-like depressions. Instead, the NPs reorganize progressively into a bilayer, presumably to minimize the system energy. Unlike Janus NPs, homogeneous NPs either protrude exceedingly towards the decane solvent, or recede into the water droplet with some particles even desorbing into the water phase (from E 2 to E 20 ). For reference, we recall that the change in energy accompanying desorption of a spherical particle from the oil-water interface to either bulk phase is ap- proximated by ∆E = πr 2 γ ow (1 ± cos θ) 2 , in which r is the particle radius, γ ow is the bare oil-water interfacial tension, and the plus (minus) sign refers to desorption into oil (water) [19]. Even if this expression assumes the oil-water interface remains planar up to the contact line with the particle, it can give a rough approximation of the energy at play. Considering the system parameters given in the SI, we obtain ∆E ≈ 85 k B T in our systems when one NP desorbs.
These two different behaviours are quantitatively investigated in Fig. 2, where we show the temporal evolution of the radius of gyration (left panel), R GYR , and the asphericity (right panel), A s , of the two droplets as a function of the dimensionless parameter ∆N W ≡ N W /N 0 W , with N W the number of water beads remaining in the droplet, and N 0 W the initial number of water beads in the droplet. When N W > 0.6, the radius of gyration of two systems follow the same evolution, regardless the chemistry of the NPs (Janus or homogeneous). For one droplet coated with Janus NPs, R GY R then departs from its linear trend when N W < 0.6. This departure corresponds to the evolution from E 5 to E 8 in Fig. 1, i.e. the transition from a droplet interface made of dimples and cups to the formation of the crater-like depression. During this transition, the size of one dimple increases when the system relaxes after evaporation. This local evolution yields a larger depression, which causes the progressive coalescence of the small dimples. This transition is consistent with the surface model numerical analysis from Ref. [17], which study the shape evolution of a spherical elastic surface when the volume it encloses is decreased. This model, which has long been considered as valid to describe the deformation of thin shells [16,25], showed that a thin shell with a single dimple has lower energy than a shell containing multiple dimples. This occurs because elastic energy mainly concentrates in dimple edges as bending energy. Dimples coalescence lowers the total elastic energy. Below ∆N W ≈ 0.6, R GY R increases as the droplet can be described as half-sphered. Let us note that this evolution is coherent with the temporal evolution of the radial distribution function of the NPs, g(r), with r the distance between the centers of the NPs, given in the SI. In contrast, the droplet coated with homogeneous NPs shrinks isotropically when the volume reduces even below ∆N W ≈ 0.6. This evolution yields continuous decrease of R GY R and a relatively low A s in Fig. 2. Eventually, the NP concentration becomes too high and some NPs move into the droplet. When N W < 0.25, the number of water beads that remain in the droplet is not sufficient to define unambiguously the droplet volume. This limitation impacts the system shape and the evolution of R GY R and A s for Janus and homogeneous NPs.
We also quantify the NP layer properties to see if the NPs actively influence or passively follow the evolution of the droplet geometry. In Fig. 3, we compare the three phase contact angle distribution of Janus (left panel) and homogeneous (right panel) NPs at the initial stage E 0 , where the shape of the droplet is spherical, and the final stage, E 20 . The initial distributions, fitted with continuous lines, can be described with Gaussian distributions for both NPs. The values of the respective means, µ J and µ H , and variances, σ J and σ H , differ due to the NPs chemistry. We obtain µ J = 91.6 • and µ H = 88.6 • , and σ J = 2.0 • and σ J = 3.4 • for Janus and homogeneous NPs, respectively. When the droplet coated with Janus NPs shrinks, the contact angle distribution evolves to a skewed one, but it remains unimodal, with a single peak centered at the same value as the one measured for the initial configuration. The emergence of the skewness of the distribution is linked to the decrease of the NP-NP distance when the droplet volume is reduced. It is due to the major role played by steric effects. As discussed earlier, to minimize its interfacial energy, the system must deform its shape, eventually forming a crater-like depression. We conclude this transition is achieved through the active role played by the Janus NPs. In the final structure, some NPs are forced to deviate from their original contact angle, increasing the skewness of the distribution on both sides of the peak.
The evolution of the system is different when homogeneous NPs are present. As the droplet volume is reduced, the contact angle distribution firstly evolves as a monolayer interface with a single peak (cf. SI). As the droplet shrinks further, and the distance between the NPs decreases, the distribution becomes bimodal, with two distinct peaks emerging on both sides of the initial equilibrium contact angle. This feature is characteristic of a particle bilayer. Indeed, homogeneous NPs are more weakly attached to the interface than Janus NPs. In the case of buckling mechanism studied here, the homogenous NPs mainly follow the volume reduction, sharing the interfacial area, either receding into the water droplet or protruding towards the organic solvent. Unlike Janus NPs, homogeneous NPs do not drive the evolution of the droplet shape, which does not differ too much from the spherical geometry. The behaviour just described is characteristic of the passive role played by the homogeneous NPs, which mainly follow the volume reduction, only modulating the droplet shape due to the steric constraints.
The curved shape obtained when the droplet is coated with Janus NPs can also be characterized by the wettability associated with the local arrangement of the NPs at the interface. The particles with a contact angle θ c < 85 • , i.e. the blue ones in Fig. 3 (left panel), can be found in the crater-like depression. These particles have receded into the water droplet due to the concave local geometry of the interface. The particles with θ c > 100 • , i.e. the red ones in Fig. 3 (left panel), can be found at the transition between the concave and convex areas of the interface, where they are likely to protrude towards the solvent. The shape deformation of the droplet is achieved through the active role played by the Janus NPs. Their specific chemistry causes them to create an interface with excess wettability (θ c < 85 • ) in a pocket delimited by the crater-like depression, and surrounded by a cup with low wettability (θ c > 100 • ).
Our results are consistent with experiments reporting buckling and crumpling of nanoparticle-coated droplets [9][10][11]. In particular, we observe a close analogy to the experimental work of Datta et al. [11], who studied water-in-oil droplets of varying sizes. In these experiments the dispersed phase is slightly soluble in the continuous phase. The volume reduction was controlled with the addition of a fixed amount of unsatured continous phase. As shown in Fig. 4, Datta et al. observed droplet shapes including dimples, cups, and folded configurations, in agreement with our simulations (cf. experimental details in the caption in Fig. 4). Unlike our mesoscopic analysis, Datta et al. [11] do not have access to the particle three-phase contact angle distribution. This information provides a deeper understanding of the organisation of the NPs at the interface, and allows us to decipher the active or passive role of the NPs.
As explained in the SI, the layering properties of the particles depend strongly on the numerical protocol. For example, decreasing the relaxation time between successive water removals can induce NP release from the interface, which is in agreement with experiments [12]. The results presented here seem to be due to the chemistry of the nanoparticles simulated (i.e. Janus vs. homogeneous). It is however possible that homogeneous NPs with large adsorption energy become active and yield buckled armored droplets similar to those observed when Janus NPs are simulated here. The new physical insights discussed in this letter could be useful for a variety of applications. For example, controlling the positions of the solid particles with respect to the interface could help in heterogeneous catalysis [26]. In biomimetic design, where the identification and evaluation of surface binding-pockets is crucial, the ability of controlling pockets such as those created by the craterlike depression in the presence of Janus NPs, could play a central role in designing structures with a defined geometry [27]. The analogy between Fig. 1 and the shape of protein active site might play an important role for ligand docking [28,29]. Finally, buckled armored droplets might also be of relevance as potential drug delivery systems [6]. Over the last decade, nanoscale droplets have been used for instant real-time ultrasound imaging of specific organs [5]. Superparamagnetic solid NPs provide a means of manipulating the droplets using an external magnetic field [5]. One of the main limitations in such applications is droplet coalescence, which can happen before droplets reach the target. The specific shapes obtained with buckled armored droplets might prevent coalescence. Indeed, the NP arrangements on the droplets show increased packing, which reduces significantly the NPs mobility. The particle layers would then provide enough mechanical resistance to guarantee the droplet stability.
Buckling in Armored Droplets
Supporting Information
MD SIMULATION METHOD
The Dissipative Particle Dynamics (DPD) simulation method [21] was implemented within the simulation package LAMMPS [30]. The procedure and the parametrisation details are fully described in prior work [22,23]. The system simulated here is composed of water, oil (decane), and nanoparticles (NPs). One "water bead" (w) represents 5 water molecules and a reduced density of one DPD bead is set to ρ = 3. One decane molecule is modeled as two "oil beads" (o) connected by one harmonic spring of length 0.72 R c and spring constant 350 k B T /R c [31], where R c is the DPD cutoff distance. The initial size of the simulation box is L x × L y × L z ≡ 72 × 72 × 78 R 3 c , where L i is the box length along the i th direction. Periodic boundary conditions are applied in all three directions. The NPs are modelled as hollow rigid spheres and contain polar (p) and nonpolar (ap) DPD beads on their surface. One DPD bead was placed at the NP center for convenience, as described elsewhere [22,23]. Hollow models have been used in the literature to simulate NPs, and hollow NPs can also be synthesized experimentally [32]. We considered spherical NP of the same volume, 4/3πa 3 0 , where a 0 is the radius of the sphere. We imposed a 0 = 2R c ≈ 1.5 nm. All types of beads in our simulations have reduced mass of 1. We maintain the surface bead density on the NPs sufficiently high to prevent other DPD beads (either decane or water) from penetrating the NPs (which would be unphysical), as it has already been explained elsewhere [23]. To differenciate every NPs, we report the nonpolar fraction of the NP surface beads and the NP type. For example, 75HP (55JP ) indicates that 75% (55%) of the beads on the NP surface are nonpolar, and that we consider an homogeneous (Janus) NP.
The interaction parameters shown in Table I are used here. These parameters were adjusted to reproduce selected atomistic simulation results, as explained in prior work [22]. By tuning the interaction parameters between polar or nonpolar NP beads and the water and decane beads present in our system, it is possible to quantify the effect of surface chemistry on the structure and dynamics of NPs at water-oil interfaces. Specifically, the interaction parameters between NP polar and nonpolar beads were adjusted to ensure that NPs are able to assemble and disassemble without yielding permanent dimers at the water/oil interface [22]. All simulations were carried out in the NVE ensemble [22]. The scaled temperature was 1, equivalent to 298.73 K. The DPD time scale can be gauged by matching the selfdiffusion of water. As demonstrated by Groot and Rabone [31], the time constant of the simulation can be calculated as τ = NmDsimR 2 c Dwater , where τ is the DPD time constant, D sim is the simulated water self-diffusion coefficient, and D water is the experimental water self-diffusion coefficient. When a w−w = 131.5 k B T /R c (cf. Tab. I), we obtained D sim = 0.0063 R 2 c /τ . For D water = 2.43 × 10 −3 cm 2 /s [33], we finally obtain τ = 7.6 ps.
SYSTEM CHARACTERISATION: DROPLETS AND NANOPARTICLES
In our simulations, the initial size of the droplet was fixed. At the beginning of each simulation, the solvent (oil) beads were distributed within the simulation box forming a cubic lattice. One water droplet of radius ≈ 20 R c was generated by replacing the oil beads with water beads within the volume of the spherical surface. A number of spherical NPs were placed randomly at the water-decane interface with their polar (nonpolar) part in the water (oil) phase to reach the desired water-decane interfacial area per NP. Following previous work [24,34], the NPs considered in this study are spherical and of two different types: Janus and homogeneous. The emulsion systems are stabilized by a sufficiently dense layer of NPs [24]. We considered water droplets coated with 160 spherical Janus and homogeneous nanoparticles of type 55JP and 75HP , respectively. Considering the NP surface coverage, φ, defined in Ref. [22], we obtain φ ≈ 0.9. Considering the results obtained in Ref. [22] for a flat interface, this yields an interfacial tension γ ow ≈ 6.8 k B T /R 2 c for both Janus and homogeneous NPs. The initial configuration obtained was simulated for 10 6 timesteps in order to relax the density of the system and the contact angle of the nanoparticles on the droplet. The system pressure and the three-phase contact angles did not change notably after 5000 simulation steps. We let then run the system for an additional 2 × 10 6 timesteps to generate two new intial configurations after 2 × 10 6 and 3 × 10 6 timesteps, respectively. We then repeated the buckling simulation with these different initial configurations to test the reproducibility of the simulation.
The surface area of the droplets is slowly diminished, pumping randomly a constant proportion, i.e. 10 percent, of water molecules out of the droplet and letting the system pressure and the three-phase contact angles equilibrate at constant density. By slowly, we mean we do not create any hollow volume in the droplet that would strongly drive the system out-of-equilibrium. Doing so, the three-phase contact angle distribution of the NPs evolves sufficiently smoothly when the droplet buckles and becomes nonspherical, thereby preventing particles to be artifactually realeased. This numerical protocol can be similarly compared with an emulsion system where the dispersed phase is slightly soluble in the continuous phase [11]. By adding a fixed amount of unsatured continuous phase, the volume of the droplets can then be controllably reduced.
Three-phase contact angle. To estimate the three phase contact angle on the droplets we calculate the fraction of the spherical NP surface area that is wetted by water [35], where A w is the area of the NP surface that is wetted by water and R is the radius of the NP. The ratio A w /4πR 2 is obtained by dividing the number of NP surface beads (ap or p), which are wetted by water, by the total number of beads on the NP surface (192 for spherical NP). One surface bead is wet by water if a water bead is the solvent bead nearest to it. One standard deviation from the average is used to estimate the statistical uncertainty.
Radius of gyration. The description of the geometrical properties of complex systems by generalized parameters such as the radius of gyration or principal components of the gyration tensor has a long history in macromolecular chemistry and biophysics [24,36,37]. Indeed, such descriptors allow an evaluation of the overall shape of a system and reveal its symmetry. Considering, e.g., the following definition for the gyration tensor, where the summation is performed over N atoms and the coordinates x, y, and z are related to the geometrical center of the atoms, one can define a reference frame where T GY R can be diagonalized: In this format we obey the convention of indexing the eigenvalues according to their magnitude. We thus define the radius of gyration R 2 GY R ≡ S 2 1 + S 2 2 + S 2 3 , and the asphericity A s ≡ S 1 − 1 2 (S 2 + S 3 ), which measures the deviation from the spherical symmetry. To determine the properties of a droplet, we calculate R GY R and A s using the centers of the water beads.
In this letter, Janus and homogeneous NPs present similar three-phase contact angle θ c = (91.6 ± 2.0) • and θ c = (88.7±3.5) • , respectively. The radius of gyration, R GYR , and the asphericity, A s , for the Janus and homogeneous initial configurations are R GYR = 13.837 ± 0.003 and R GYR = 13.860 ± 0.003, and A s = 0.156 ± 0.05 and A s = 0.153 ± 0.05, respectively, expressed in R C units.
EVOLUTION OF THE NANOPARTICLE RADIAL DISTRIBUTION FUNCTION
The transition from dimples and cups to crater-like depression observed when Janus nanoparticles cover the droplet can be reflected in the temporal evolution of the radial distribution function of the NPs, g(r), with r the distance between the centers of the NPs. We first consider the initial spherical configuration, and extract the list of nearest neighbours of each NP within a shell of radius r < 12R c . This treshold value defines the first and second neighbouring shells [24,38]. As the system is densely packed at the interface, we follow the temporal evolution of g(r) considering this subset of particles, which corresponds to the nearest neighbours shell. When the volume of the droplet reduces, we first see in Fig. S1 (left panel) the emergence of a new peak around r ≈ 4.75R C < r DPD , where r DPD = 5R C represents the distance between the centers of NPs above which the NPs do not interact with each other through the DPD non-bonded force. As the droplet volume further reduces, the heigh of the new peak increases. This evolution continues until ∆N W ≈ 0.6 (cf. Fig. 2 in the main text). Below that value, the heigh of the first peak decreases, together with the increase of the heigh of the second peak. This corresponds to the transition from dimples and cups to crater-like depression. The evolution continues until ∆N W ≈ 0.25 where the number of water beads which remain in the droplet is not sufficient to define unambiguously the droplet volume.
As the g(r) evolution between the homogeneous particles is concerned, we see in Fig. S1 (right panel), a constant increase of the heigh of the first peak, centered at r ≈ 4.75R C . This first peak located below r DPD is already present in the initial spherical configuration, and is linked to the difference between the NP chemistry. Indeed, Janus NP are made of two hemispherical faces, one hydrophobic in contact with the hydrocarbon beads, and one hydrophilic in contact with the water beads. The NP hydrophobic faces which protrude in the solvent interact strongly and repeal each other to minimize the interaction potential energy. Unlike Janus NPs, both hemispherical faces of homogeneous NPs are made with hydrophobic and hydrophilic beads. This allows the NPs to interact smoothly with each other and to fluctuate more. Hence they can come closer than r DPD , keeping spherical the droplet interface.
EVOLUTION OF THE THREE-PHASE CONTACT ANGLE DISTRIBUTION
In Fig. S2, we show the evolution of the three phase contact angle distribution of Janus (left panel) and homogeneous (right panel) NPs from the initial stage E 0 (cf. Fig. 1 in the main text), where the shape of the droplet is spherical, to the final stage E 20 . The initial distributions, fitted with continuous lines, can be described with Gaussian distributions for both the Janus and homogenous emulsion systems. The values of the respective means, µ J and µ H , and variances, σ J and σ H , differ according to the chemistry of the stabilizers. We obtain µ J = 91.6 • and µ H = 88.6 • , and σ J = 2.0 • and σ J = 3.4 • for Janus and homogeneous NPs, respectively.
When the droplet is coated with Janus NPs, the system evolves to a skewed distribution as the droplet shrinks, but remains unimodal with a single peak at the same value as the one measured for the spherical initial configuration. The emergence of the skewness of the distribution is linked to the decrease of the NP-NP distance when the droplet volume is reduced due to the major role played by the steric effect. The evolution is different when homogeneous NPs cover the droplet. As the volume is reduced, the contact angle distribution firstly evolves as a monolayer interface with a single peak (from E 0 to E 5 , Fig. 1 in the main text). As the distance between the NPs decreases further, the distribution becomes bimodal as two distinct peaks emerge on both sides of the original equilibrium contact angle. This fundamental difference is characteristic of a particle bilayer.
FIG. S3. Snapshots of the final configurations E20 of the buckling processes of water droplets armored with 160 spherical Janus (left) and homogeneous (right). The proportion of water beads removed is around 10 percent in each evaporation. The equilibration time after each evaporation is reduced to 2 × 10 3 timesteps (instead of 10 5 as in the main text).
HOW NUMERICAL ALGORITHMS AFFECT THE DROPLET EVOLUTION
In our mesoscopic analysis, we controllably reduced the volume of the droplet, removing a constant proportion, i.e. 10 percent, of water beads from the droplet. As a consequence, the three-phase contact angle distribution of the NPs evolves smoothly when the droplet buckles, thereby preventing particles to be artifactually released. However, the particles behaviour strongly depends on the numerical protocol.
In Fig. S3, we show qualitatively, for comparison, the evolution of the droplet volume when the equilibration time after each water bead removal is reduced. The proportion of water beads removed remains in 10 percent in each stage. When the droplet is coated with homogeneous NPs (right panel), we observe that the shape of the droplet remains spherical, with some NPs desorbed in the organic solvent. This evolution is representative of the passive role played by the homogeneous NPs and is in agreement with experiments [12]. When the droplet is coated with Janus NPs (left panel), we observe a significant curved-shape deformation of the droplet, along with the abscence of NP release. This evolution is representative of the active role played by the Janus NPs. The morphology of the droplet becomes noticeably crumpled, with large dimples, and no transition to crater-like depression is observed. This results are consistent with the surface model numerical analysis from Ref. [17], where more than one dimple may nucleate if the evaporation is rapid, leading to metastable multi-indented shapes. Experimentally, the term rapid may correspond to kinetic barriers, which prevent thermally activated coalescence between adjacent dimples. Our result highlights the central role played by the relaxation time of the system after each evaporation in the evolution of the interface geometry. | 7,534.8 | 2017-02-08T00:00:00.000 | [
"Physics"
] |
Parametric Identification of a Dynamic Foundation Model of a Rotary Machine Using Response Data Due to Unknown
The estimation of a model of the foundation of a rotary machine has been recently attempted by using the difference between two sets of response data at some of the bearing locations from two consecutive rundowns of the machine, with and without known unbalance weights at certain positions on the two balance discs of each rotor respectively. However, it would be a great advantage to be able to perform the estimation with a single rundown. Due to practical restrictions in performing such tests (accessibility, costs etc.), there are cases in which data for only one rundown are available. In this case, the unbalance configuration is unknown and has, therefore, to be estimated, in addition to the unknown foundation model. Due to the special form of the unbalance force, this overall inverse problem can be solved by eliminating the unbalance configuration from the model estimation process. The remaining equation to estimate the foundation model consists of the projection of the response data, where the associated projector depends on the foundation model parameter. First results using the method, applied to a laboratory test rig and to a commercial turbo-generator, are presented.
INTRODUCTION
The successful condition monitoring of generators in modern power stations can be significantly enhanced by reliable mathematical models of the complete machines.Although sufficiently reliable models of the rotor and the bearing are well established, the influence of the foundation on the machine dynamics is not yet fully understood.In recent years several attempts have been made to model the foundation by finite elements, but due to the complexity of the foundation those attempts revealed unsatisfactory results (Lees and Simpson, 1983).454 U. PRELLS AND A.W. LEES Mathematical modelling is always purpose- orientated (Natke, 1995).In case of foundation modelling the purpose is not focussed on estimat- ing foundation mass and stiffness but to establish a foundation model which reproduces the contribu- tion of the foundation to the dynamic of the entire system with sufficient accuracy.The criterion for the quality of the model estimate is the fit between calculated model response and measurement.In order to determine the contribution of the foundation to the rotor's dynamic behaviour, the real- valued symmetric matrices Ar of a frequency filter model* (for example Mottershead and Stanway, 1986) N F(co) (jco) Ar E C nxn (1) r=0 have to be estimated using response data at the bearing locations during a machine rundown cover- ing a frequency range c E 2. Note that the matri- ces Ar have to be symmetric because Maxwell's Theorem of Reciprocity must hold true.Although F results analytically from dynamic condensation (see Appendix A), Ar can be assumed to be real- valued because the damping contribution of the foundation is negligible.Assembling the np'= n(n + 1)(N + 1)/2 independent entries of the matri- ces Ar in one vector x Rn, the dynamic stiffness matrix F(c, x) can be understood as a function of the model parameter vector (see Appendix B).For given model parameters x, F(c,x) maps the response u(co) C to the forces fB(co) C at the bearing locations.The latter can be calculated as fB(co) Q(co)u(co) + C(co)p, (2) since the matrices Q(co) and C(co) depend only on the models of the rotor and the bearings, as shown in Appendix A. The real-valued vector p R d repre- sents the unbalance configuration and consists of the masses, the eccentricities and the angles between the positions of the masses on the balance discs and the shaft marker.Using for u(co) the difference of the responses of two consecutive rundowns, with and without balance weights, the parameter vector x can be estimated from F(co, x)u(co) fB(co). (3) This method of estimating a foundation model has been discussed in several papers, for instance Lees (1988), Zanetta (1992), Feng and Hahn (1995), Vania (1996), Smart et al. (1996), Lees and Friswell (1997) (1997, 1998).However, the applicability of this method is based on two major requirements: (1) the unbalance configuration p has to be known, (2) the vertical and horizontal responses at all bear- ing locations have to be measured.
These criteria are often not fulfilled, due to equip- ment failure, costs or accessibility.The objective of this paper is to explore the possibility of estimating the model parameters x in the case where only an incomplete set of data from one rundown is avail- able, i.e. the unbalance configuration p is unknown and only n' components of the n-dimensional re- sponse vector have been measured, i.e.
where H0 E 1R ' is the measurement matrix which selects the measured part of the complete response vector (see Appendix A).The main problem here is that the unbalance configuration p is unknown.One possibility consists in estimating p in addition to the model parameter vector x. Background to that problem can be found in Lees and Friswell (1997).However, the method presented in that paper requires response data at all bearing loca- tions, which are often not available.The method presented in this paper aims to separate the esti- mation of the model parameters x R n" from the estimation of the unknown unbalance configurationp R d * Note that in the case N 2 the matrices A0, A1, A2 correspond to the contribution of stiffness, damping and inertia respectively.
In the first section, the basic estimation equation is introduced and extended to the case of unknown unbalance configurations.In the second section, the numerical handling of the resulting inverse problem is discussed and countermeasures to regu- larise the problem are suggested.x) Q(co))-I x)p, the equation error can be calculated as g-V(x)V+(x)g (hmn' P(x))g-N(x)g.( 8) The symmetric and idempotent matrix P(x) is the projector into the subspace spanned by V(x) and N(x) is the projector into the orthogonal comple- ment.Since N(x) varies with x this method is some- times called variable projection method (see for instance Golub and Pereyra, 1973; Krogh, 1974; Kaufman, 1975).A cost function can be formulated using the squares of the relative Euclidean norm of Eq. ( 8) J(x) "-IlN(x)gllZ/llgll 2 g-CP(x)g/llgll 2 (9) Note that 0 <_J(x)_< due to the orthonormal decomposition g/llg[I P(x)g/llg[I + N(x)g/l[gll. (lO) where all terms are complex.For a set of discrete frequencies ft := {cl,...,COm} Eq. ( 5) can be ex- tended and written in real-valued form by doubling the order (see Appendix C) g V(x)p. (6) Here g E I 2mn' contains the measured responses and V(x)E]2mn'xd is the real equivalent of the frequency response matrix Z(co, x).Note that Eq. ( 6) is linear in the rotor unbalance configuration p.
Equation (6) states that there exists a linear com- bination of the columns of V(x) that generates g, which means that the vector g, the generalised response vector, is contained in the subspace spanned by V(x).Although p and x are unknown this statement is independent of the value of p.To realise this one may solve Eq. ( 6) in the least-squares sense for p yielding where V + is the Moore-Penrose inverse of V.
Inserting the normal solution/5 of p into Eq. ( 6 If x has been estimated by minimising J(x) then Eq. ( 7) delivers an estimate for p.Although the formulation (9) of the estimation problem has the advantage of being independent of the unknown unbalance configuration p, it has the disadvantages (existence, uniqueness and stability) typical of out- put residual-related methods for solving non-linear inverse problems (Natke et al., 1995).In the next section, the existence, uniqueness and stability of the solution are explored, and the numerical and computational aspects of the estimation procedure are explained.
THE ESTIMATION PROBLEM
The existence of a solution depends essentially on two conditions" (1) the projectors must be non-trivial, i.e.P(x) I2m,,', or equivalently N(x) O, (2) the degree ofthe chosen filter model (see Eq. (1)) must reflect the most important peaks of the available data.
A sufficient condition for the non-triviality of the projectors is 2mn' > d, which is the dimension of the unbalance vector p.If one assumes that there are two balance discs on each shaft, then d= 4ns (in the case of an ns-shaft rotor).Since each shaft is supported by two journal bearings and the mea- surements are taken at each bearing in horizontal and vertical directions, the maximum number of responses is 4nsn _> n'.From this one finds rn > n/2n', which means, in principle, two frequencies are suf- ficient in the case that only half of all possible horizontal and vertical displacements .havebeen measured.However, this limit case is far from real- istic.Even if the structure only has a single reso- nance, rn 2 frequencies are insufficient to sample that peak.An optimum discretisation in the fre- quency domain for the purpose of model updating has been discussed by Cottin (1991).It was stated that at least three (better five) frequencies per reso- nance are necessary, where the step size depends on the damping ratio of each individual peak.Moreover the usual identifiability conditions (see for instance Natke, 1992) have to be satisfied, which requires a sufficiently large number of resonances within the frequency range to ensure a small con- dition number of V(x).
The uniqueness of the estimates depends on the chosen degree of the frequency filter model.It is well known from polynomial approximations that the error between model and data can be made arbitrarily small by increasing the degree.The determination of an adequate degree for the filter model is a non-trivial task, because the peaks of the data may be (1) due to measurement errors or (2) due to resonances of the subsystem.
rotor/bearing
Those resonances of Z, which are sensitive with respect to resonances of the foundation, are diffi- cult to distinguish.Investigations using simple test models reveal that often peaks of the frequency response matrix Z cannot be controlled by adjust- ing the resonances of F. The following two limit cases are helpful to find those peaks of Z which are sensitive with respect to the peaks of F: lim Z(o, x) -H-Q-1 ()C(), (12) lim Z(o,x) HF-I(o,x)C(), (13) Ilxll q with the positive scalar q >> Q(w)[I, Vco E f.In the first limit case, the frequency response function reflects only the contribution of the rotor/bearing model, whilst in the second case the influence of the rotor/bearing model contained in Q is negligible, and the contribution of the rotor/bearing model to the response is only due to the matrix C. Plotting the peaks of the response function Z(co, x) for some cases Ilxll [0, q], using a simple initial foundation model is useful in deciding the appropriate model degree N. Of course, this choice is closely related to the stability of the estimates, i.e. the sensitivity of the estimates with respect to data errors.
The stability of the estimates depends on three points: (1) the moderate choice of the degree of the filter model, (2) the condition of the matrix V(x), (3) the dimension of span(V(x)) relative to the dimension of x.
These are related in their effect on the foundation parameter estimates.As already mentioned, one has to analyse the data in order to choose an appro- priate degree of the filter model.However, data are always corrupted by noise, thus deciding which peaks are real and which are due to noise is rarely simple.In the case that foundation and noise- related peaks are of comparable magnitude, fre- quency filter models of different degrees will lead to the same order of equation error.To avoid non- unique estimates, those data which are most noise corrupted have to be excluded from the estimation process.This can be done by introducing a moder- ate weighting, or simply by using only those data with relatively large magnitudes.Assuming that the same type of sensors are used, data with small relative norms ri max lui()l ( 14) will be more noise polluted.For a given threshold r0 > 0 the selection of data .(CO){,/(&): ri ro} (15) for the estimation process leads to a reduction of the dimension of the parameter space.A reduction in the number of parameters to be estimated not only stabilises the estimates but also reduces the degree of non-uniqueness.A large number of parameters is related to a high degree frequency filter model.The larger the chosen frequency filter model degree, the more likely are non-unique estimates, as different parameter vectors xx may generate the same space, span(V(x)) span( V(x ')).The physical explanation is that some parameters are mainly related to resonances of Z outside the frequency range f.A change in those parameters will have little effect on the subspace spanned by V. A method to eliminate parameters related to those modes a posteriori has been reported elsewhere (Friswell et al., 1997).However, this method assumes that a high-dimensional parameter vector has been estimated, which is often not feasible due to numerical and computational limitations.The problem of local minima of output-related methods is well known (Oeljeklaus, 1999).Depend- ing on the size of the problem, the cost function J(x) defined in Eq. ( 9) has several local minima.Those local minima are due to several levels of fit between peaks of the data and the model response.If, for instance, the model response for x matches the data at one peak whilst the model response for x does not match any peak of the data, then the cost func- tion value J(x) is lower than J(x').Since there are in general model parameters, which lead to model responses that match the data at several peaks, the associated cost function values are local minima.
An iterative solution procedure, as for instance Newton-Raphson, will, in general, lead to a local minimum, depending on the quality of the chosen initial parameter vector.To avoid the use of locally convergent non-linear solvers, a controlled random search has to be applied which evaluates the cost function many times (several thousand evalua- tions).The time necessary for one evaluation of J(x) depends essentially on the inversion of the dynamic stiffness matrix (see Eq. ( 5)).A modal re- presentation of the dynamic stiffness matrix is, in general, not possible, due to the speed-dependence of the model of the journal bearings.Thus, the inversion has to be performed at each frequency step, which is time-consuming.Already for rela tively small problem sizes, the evaluation of the cost functions at a given parameter vector x can take several seconds.Even on a fast computer this can lead to several hours of execution time.The over- all computation time of the estimation method depends (1) on the number of parameters to be estimated, (2) on the number of frequency points and (3) on the quality of the initial parameter vector.
The choice of the initial model parameter is a difficult task, since, in general, the foundation model estimation is a 'black-box problem', i.e. no a priori model is available.In order to find an appropriate initial parameter vector Xini, it is sug- gested to start with a 2nd order diagonal frequency filter model F(,Xini).This simple model is then optimised in a pre-process by maximising the num- ber ofpeaks of V(xini) within the frequency range f, using the limit cases defined by Eqs. ( 12) and ( 13).
This strategy tries to minimise the risk of finding a local minimum.Using this initial parameter vector, the cost function J(x) is minimised by a modified Nelder-Mead simplex algorithm (for details see Nelder and Mead, 1965) using only those response vector components which have magnitudes above the threshold r0.If the minimum of the cost function is insufficient, the degree of the frequency filter model is increased.
In the next section, the results of two applications are presented to demonstrate the capability of the method described above.
APPLICATIONS
The first application is an experimental rotor rig located in the laboratory of the Department of Mechanical Engineering at the University of Wales Swansea.The second application is a commercial turbo-generator.The estimation routines have been coded in MATLAB and executed on a PI1450 MHz PC and with 256 MB RAM under WINDOWS'98.
Experimental Rotor Rig
A schematic of the the rotor rig is depicted in Fig. 1.A detailed description of the test rig can be found in Edwards et al. (1999).It consists of a solid steel shaft of length 740 mm and diameter 10 mm.At positions A and B the shaft is connected via two brass-bush bearings to a foundation, hence the foundation model has dimension n 4. At position M, the rotor is connected to an electric motor, via a flexible coupling.The bearings are assumed stiff and free of damping.At positions A and B, piezo-electric accelerometers are attached in the horizontal and vertical directions.The responses at the bearing locations due to an unknown residual unbalance distribution (pl ps) along the shaft have been measured, during a slow rundown of the rotor covering a frequency range between 15 and 45 Hz.Thus, in this case, n' n 4 and d 8. Since the number of frequency points was m 1343 the size of V(x) is 10744-by-8.The dimension of the parameter vector x dim(x) 10. (N-t-1) (16) depends on the chosen degree N of the filter model.
The time necessary to evaluate J(x) was about 9 s and was independent of the chosen degree of the filter model.Diagonal frequency filter models of various degrees have been calculated in a pre- process in order to maximise the number of peaks of V(x) within the running range.Using these initial models, the cost function J(x) has been minimised.
The minimum values of the cost function for different degrees of foundation frequency filter FIGURE 2 Measured (dotted) and calculated responses (solid) using a foundation frequency filter model of degree 3. 459 models are shown in Table I.For frequency filter models of degrees 3 and 4, the cost function minimum is about 0.07.In contrast to the model of degree 3 the additional matrix A4, that has to be estimated for the model of degree 4, is almost zero.Thus, both models lead to the same estimates.
Using the foundation model estimate of degree 3, the residual unbalance vector p can be estimated by U. PRELLS AND A.W. LEES Eq. ( 7) yielding 10 .4. (-0.547, -0.099, 1.752, 0.26, 1.848, -0.322, 0.511,0.128)T, ( 17) which appears to be physically realistic.Using the estimates of the foundation model and of the unbalance, the response can be calculated.In Fig. 2, the model response (solid) is compared to the mea- sured data (dotted).The calculated displacements match the data at the two groups of peaks in the upper and lower frequency range.Although the match about 32 and 28 Hz is not quite right, the overall fit is satisfactory.This lack of fit is due to the fact that the data signal is relatively low at those frequencies.
Application to a Power Turbo-Generator The method outlined above has been applied to a commercial turbo-generator.The rotor, in this case, consists of 6 shafts which are connected via 12 journal bearings to a steel foundation; hence n 24.
A finite element beam model of the rotor was statically condensed from over 2000 to n-196 degrees-of-freedom (DoF).The non-linear model of the journal bearings was linearised at each frequency point to deliver a frequency dependent damping and stiffness matrix.Twenty-two hori- zontal and vertical responses have been measured at all bearing locations during one rundown, covering a frequency range between 5 and 50 Hz, with rn 275 frequency points.In Fig. 3, the relative maximum magnitudes of the measured response vector components are plotted.The first component corresponds to the horizontal DoF at the first journal bearing of the HP rotor, and the last component correspond to the vertical DoF of the 2nd journal bearing of the exciter.In order to reduce computational effort, a frequency filter model of degree N= 2 has been chosen to model the dynamic contribution of the foundation.Moreover, the data used for the estimation have been restricted to those components having relative maximum magnitudes larger than r0 0.5.The n= 4 selected data components are circled in Fig. 3.A residual unbalance distribu- tion was assumed along the entire rotor.Taking into account all translational DoF, except those at the bearing locations, the dimension of the unbal- ance vector p was d 72.In the N--2 case consid- ered here, the parameter vector x is of dimension np 24(25 + 1)/2 900, and the matrix V(x) (see Eq. ( 6)) has the size 2200-by-72.Evaluation of the cost function J(x) takes about 26s.Using an undamped diagonal initial model, estimated in a pre-process to maximise the number of peaks F() within the frequency range, the controlled random j search procedure has been initiated to minimise the N cost function J(x).The value of the cost function at Ar the initial parameter vector was 0.86.After 467 improvements and a total computation time of more than 48 h, the value decreases to 0.12. Figure 4 shows the data (dotted) and the calculated model C responses (solid).
Although the model response does not fit pre- cisely, the overall shapes of the spectrum are reproduced.Due to the relatively low degree of the frequency filter model, no better fit was expected, u() The necessity of using a frequency filter model of higher degree was confirmed by the corresponding unbalance estimate: the values were too large to be physically realistic.It is also possible that the algo- rithm has found a local minimum.The use of a fre- p quency filter model of higher degree would lead to an increase in the overall computation time.The first attempt to estimate np= 2100 parameters of a Q() frequency filter model of degree N 6 was cancelled C(a) after 4 days of execution time because no significant improvement of the cost function value could be ei found.
CONCLUSIONS
A method has been applied to a high-dimensional parameter estimation problem for the estimation of a rotor foundation model in the case of incomplete measurements and unknown unbalance regimes.The usability of the method has been tested by two practical applications: an experimental test rig and an industrial power turbo-generator.Although the results are so far encouraging, the computational problems encountered require further investigation.
NOMENCLATURE z(,x)
frequency in rad/s frequency filter complex unit (-1) 1/2 degree of filter model symmetric and real-valued matrix for each r 0,..., N set of all complex-valued n-by-n matrices complex vector space of dimension n n-dimensional complex-valued force vector at the bearing locations n-dimensional complex-valued response vector n'-dimensional complex-valued vector of measured responses real vector space of dimension d real-valued d-dimensional vector of unbalance configuration complex-valued n-by-n matrix complex-valued n-by-d matrix k-by-k identity matrix ith column vector of the identity matrix real-valued n-by-n selecting matrix real-valued np-dimensional In general let Ax():= -co2Mx-b-jcoC2c-b Kx denote the dynamic stiffness matrix at frequency , where the subscript X E {R,B,F} refers to the rotor, the bearing and the foundation respectively.Mx, Cx and Kx denotes the matrices of inertia, damping and stiffness.The dynamic stiffness matrices are partitioned according to the internal DoF, indi- cated by an additional subscript/, and by those DoF connected to the bearings, indicated by the additional subscript B: A FUF fF >> AFro A FII U FI In Eqs. ( 18)-( 20) the two rules of model synthesis are involved: (1) Kinematic compatibility requires the responses at the connection DoF to be the same.
(2) Due to Newton's principle of actio et reactio the forces at the connection DoF differ only in sign.
Moreover it is assumed that besides the unbal- ance force ft0 of the rotor no other (relevant) forces act on the system.Of course, the non-zero entries of the force vector fRi are the components of the vector of unbalance forces ft0, i.e. there exists a selecting matrix S := [ei,..., ei] such that fm= Svfv. (21) The unbalance force has the form fU(OO) C02Ip, where the real-valued d-dimensional vector p of the unbalance configuration contains the eccentricity, the mass and the angle to the shaft marker of each balance disc.The matrix represents the harmonic excitation in the frequency domain and has a block- diagonal structure j 0 (23) In addition, the special structure of the bearing model is already taken into account.It can be shown that the matrix B B(o) is block diagonal, i.e.where in general the matrix BiE C 22 for the ith bearing is given by Bi Ki(&) +j&Di(), { 1,..., rib}. (25) The stiffness matrices and the damping matrices, Ki, Di 22, result from linearisation and are in general non-symmetric and non-singular, and depend on the excitation frequency.The dynamic stiffness matrix F used in Eqs.
=:f Note that AFII is non-singular because the entire system is grounded.Coupling the models of the rotor from Eq. ( 18) and the model of the bearings from Eq. ( 19) leads to the input/output equation of for symmetric real-valued matrices.Each matrix At, r 0,..., N, of the frequency filter of degree N can be written uniquely as where Z(co, x) H(F(co, x) + Q(co))- For m excitation frequencies Eq. ( 44) can be extended to give gc-Vc(x)p, where .() / I z(, x) g <(x) :: .
FIGURE 3 FIGURE 4
FIGURE 3 Relative maximum magnitudes of the response vector components.
vector of model parameters complex-valued n-by-d frequency response matrix set of m discrete frequencies Because of commercial confidence the depicted values are scaled to unity.
and real-valued adjustment parameters xr.Collecting all parameters in one vector x of dimension np := (N+ 1)n(n + 1)/2 the frequency filter can be written as np F(co, x) Z Ra(co)Xa"
F
in Eq. (40) one finds u M (co) Z(co, x)p,
E
EN NE ER RG GY Y M MA AT TE ER RI IA AL LS S Materials Science & Engineering for Energy SystemsEconomic and environmental factors are creating ever greater pressures for the efficient generation, transmission and use of energy.Materials developments are crucial to progress in all these areas: to innovation in design; to extending lifetime and maintenance intervals; and to successful operation in more demanding environments.Drawing together the broad community with interests in these areas, Energy Materials addresses materials needs in future energy generation, transmission, utilisation, conservation and storage.The journal covers thermal generation and gas turbines; renewable power (wind, wave, tidal, hydro, solar and geothermal); fuel cells (low and high temperature); materials issues relevant to biomass and biotechnology; nuclear power generation (fission and fusion); hydrogen generation and storage in the context of the 'hydrogen economy'; and the transmission and storage of the energy produced.As well as publishing high-quality peer-reviewed research, Energy Materials promotes discussion of issues common to all sectors, through commissioned reviews and commentaries.The journal includes coverage of energy economics and policy, and broader social issues, since the political and legislative context influence research and investment decisions.S SU UB BS SC CR RI IP PT TI IO ON N I IN NF FO OR RM MA AT TI IO ON N Volume 1 (2006), 4 issues per year Print ISSN: 1748-9237 Online ISSN: 1748-9245 Individual rate: £76.00/US$141.00Institutional rate: £235.00/US$435.00Online-only institutional rate: £199.00/US$367.00For special IOM 3 member rates please email s su ub bs sc cr ri ip pt ti io on ns s@ @m ma an ne ey y. .cco o. .uuk k E ED DI IT TO OR RS S D Dr r F Fu uj ji io o A Ab be e NIMS, Japan D Dr r J Jo oh hn n H Ha al ld d, IPL-MPT, Technical University of Denmark, Denmark D Dr r R R V Vi is sw wa an na at th ha an n, EPRI, USA F Fo or r f fu ur rt th he er r i in nf fo or rm ma at ti io on n p pl le ea as se e c co on nt ta ac ct t: : Maney Publishing UK Tel: +44 (0)113 249 7481 Fax: +44 (0)113 248 6983 Email<EMAIL_ADDRESS>or Maney Publishing North America Tel (toll free): 866 297 5154 Fax: 617 354 6875 Email<EMAIL_ADDRESS>further information or to subscribe online please visit w ww ww w. .mma an ne ey y. .cco o. .uuk k C CA AL LL L F FO OR R P PA AP PE ER RS S Contributions to the journal should be submitted online at http://ema.edmgr.comTo view the Notes for Contributors please visit: www.maney.co.uk/journals/notes/emaUpon publication in 2006, this journal will be available via the Ingenta Connect journals service.To view free sample content online visit: w ww ww w. .i in ng ge en nt ta ac co on nn ne ec ct t. .cco om m/ /c co on nt te en nt t/ /m ma an ne ey y Friswell et al. (1997) and Prells et al. | 6,609.4 | 2000-01-01T00:00:00.000 | [
"Engineering"
] |
Chemical Composition, Antimicrobial Activity, and Withdrawal Period of Essential Oil-Based Pharmaceutical Formulation in Bovine Mastitis Treatment
Due to the emergence of antibiotic-resistant bacteria, the risk it represents to public health, and the possible consequences for animal health and welfare, there is an increasing focus on reducing antimicrobial usage (AMU) in animal husbandry. Therefore, a great interest in developing alternatives to AMU in livestock production is present worldwide. Recently, essential oils (EOs) have gained great attention as promising possibilities for the replacement of antibiotics. The current study aimed to test the potential of using a novel EO-based pharmaceutical formulation (Phyto-Bomat) in bovine mastitis treatment. The antibacterial activity was performed using the microdilution technique. Lactating dairy cows were treated with 15 mL of Phyto-Bomat in the inflamed quarter for 5 consecutive days in order to analyze blood and milk samples for thymol and carvacrol residues using gas chromatography and mass spectrometry (GC–MS). Antimicrobial activity expressed as the minimum inhibitory concentration (MIC) and minimal bactericidal concentration (MBC) indicates that this formulation has the highest activity against Gram-positive strains. The dominant compounds in Phyto-Bomat were thymol and carvacrol, at 12.58 ± 1.23 mg/mL and 23.11 ± 2.31 mg/mL, respectively. The quantification of these two compounds in evaluated biological samples showed that 24 h after administration the concentration of thymol and carvacrol in milk samples was at the same level as before application. On the other hand, thymol and carvacrol were detectable in plasma samples even after 24 h post-treatment, with values ranging from 0.15–0.38 and 0.21–0.66 µg/mL, respectively. The tested formulation showed encouraging results of antibacterial activity against bovine mastitis pathogens, as well as the withdrawal period of dominant compounds, which implies that further testing regarding the bacteriological and clinical cure rates in clinical settings is needed.
Introduction
Bovine mastitis is among the most severe and economically important infections affecting livestock production, and one of the major causes of antibiotic use in dairy cows [1,2]. In general, mastitis is defined as an inflammation of the mammary gland, usually caused by different pathogenic microorganisms, mostly bacteria, such as staphylococci, streptococci, and coliforms [1,3]. Moreover, Staphylococcus aureus, Streptococcus uberis, Escherichia coli, and Streptococcus agalactiae are among the most frequent mastitis-associated pathogens in Serbia [4], but in other European countries also [1,5,6].
Apart from the substantial economic losses associated with the disease, it has extreme zoonotic importance since the milk is unsafe for human consumption [7,8]. This unsafety could be due to the presence of residues and the long withdrawal period of antimicrobials [9,10]. Moreover, antimicrobial residues in milk can interfere with the production of dairy products and may cause hypersensitivity and resistance to microorganisms in humans [11]. Furthermore, the increasing concern about antibiotic resistance in public health issues is pushing the milk industries to reduce the usage of antimicrobial drugs [9]. Erskine et al. [12] reported that approximately 90% of the residues detected in milk over a period of five years originated from antibacterial therapy for mastitis.
Therefore, there is a growing need to develop new, alternative therapies, especially those derived from natural products, such as plants [8,13]. That is one reason why phytotherapy is gaining much attention nowadays as an alternative to antimicrobial agents. Considering the numerous advantages that essential oils (EOs) have in relation to antibiotics-such as non-toxicity, biodegradability, and reduced possibility of resistance-in recent decades, their research and use have been gaining attention [14]. In addition to the mentioned advantages of phytotherapy, in recent years, there has been a large decline in the percentage of newly discovered antibiotics, which could be an alternative to the existing ones, whose efficiency is decreasing [15].
EOs are aromatic oily liquids obtained from different plant parts and are widely used in several industrial and scientific fields [16,17]. Many of them have the 'generally recognized as safe' (GRAS) status, awarded by the United States Food and Drug Authority (FDA) [18]. According to traditional medicinal knowledge, EOs have been used as analgesics, sedatives, anxiolytics, antifungals, anti-inflammatory drugs, and antibacterial agents [19]. EOs have been recognized for their potential antimicrobial activities due to their high hydrophobicity, which enables them to cross the bacterial cell membranes leading to a loss of function and damage of proteins, lipids, and organelles within the bacterial cell, and consequently cell death [20][21][22].
Additionally, there is potential to decrease antimicrobial consumption and consequently antimicrobial resistance through the development of EO-based phytopharmaceuticals for mastitis treatment due to the shorter withdrawal period of EOs. Actually, mastitis in lactating cows is commonly treated by intramammary or parenterally infusion of antibiotics [8,30]. Previous research suggests that the most commonly used antibiotics in mastitis therapy in Serbia were penicillin, streptomycin, gentamicin, tetracycline, cephalexin, sulfonamides, and enrofloxacin [31,32]. According to the Summary of the Product characteristics of these drugs given by the Medicines and Medical Devices Agency of Serbia, the withdrawal period of these antibiotics can vary from 1-5 days, while McPhee et al. [33] reported that thymol residues were only detected in the 12 h post-treatment in the milk sample. However, some authors reported that the activity of some antibiotics such as macrolides, tetracyclines and trimethoprim-sulphonamides is reduced in milk, which also reduces the chances of effective treatment [10,14].
Hence, the aim of the present study was to evaluate the antimicrobial activity of an EO-based intramammary pharmaceutical formulation developed for bovine mastitis treatment. In addition, the withdrawal period of the proposed formulation in the milk and blood of treated cows was assessed.
EO-Based Formulation
The proposed pharmaceutical formulation for intramammary application was based on four different EOs with proven antimicrobial activity. Namely, it contained EOs of common (Thymus vulgaris L.) and wild thyme (Thymus serpyllum L.), oregano (Origanum vulgare L.), and mountain savory (Satureja montana L). The obtained EO mixture was further diluted with common marigold (Calendula officinalis L.) and St. John's wort (Hypericum perforatum L.) oil macerates (herbal drug:sunflower oil, 1:5) in an amount of up to 15 mL in an intramammary injector. The chemical composition and antimicrobial activity of common and wild thyme against bovine mastitis-associated pathogens were previously studied by Kovacevic et al. [4]. Oregano and mountain savory chemical compositions and antimicrobial activity against bovine mastitis-associated pathogens were reported by Kovacevic et al. [34]. The EOs' concentration in the proposed formulation was determined according to the MBC values against the most common mastitis-associated pathogens. The predominant compounds among the EO components included in the proposed formulation were thymol and carvacrol [4,34].
Sampling Procedure
The experimental protocol was approved by the Animal Ethics Committee of the Ministry of Agriculture, Forestry and Water Management-Veterinary Directorate (9000-689/2, 7 June 2020). The presented study was carried out at two dairy farms located in Serbia, with 500-1100 Holstein-Friesian cows per farm. Milk samples were collected from individual quarters with clinical and subclinical mastitis. The cows were screened for clinical mastitis by clinical examination, while subclinical mastitis was assessed using somatic cell count in the milk samples. Palpation and inspection methods were performed to examine typical signs of clinical mastitis by a veterinarian. Pathogen isolation was conducted from October 2021 to December 2021 by taking milk samples from all animals during morning milking. A total of 55 milk samples from dairy cows at two farms, diagnosed with mastitis during the study period were sampled. Before sampling, the udder was cleaned and wiped. The tips of the teats and the openings of the suction canal were cleaned and disinfected with a cotton swab soaked in 70% alcohol. The first jets of milk were discarded, after which a few milliliters of milk were milked into sterile tubes. After the milk samples were collected, they were immediately transported to the Laboratory for Milk Hygiene at the Department of Veterinary Medicine, Faculty of Agriculture, University of Novi Sad, under the cold chain (4 • C). All milk samples were incubated on nutrient agar with the addition of 2% blood and incubated under aerobic conditions for 48 h at 37 • C, using a platinum loop (0.01 mL). Microorganisms were isolated and identified based on morphological and biochemical characteristics, as described by Kovacevic et al. [4].
EOs' Effectiveness Determination against Mastitis-Associated Bacteria
The effectiveness of the solution of the final preparation on microorganisms was determined according to the Clinical Laboratory Standards [35] with slight modifications. Mueller-Hinton broth (MHB, HiMedia) was inoculated into each well of a microtiter plate (except for the first well) in a total volume of 100 µL. The first well of the microtiter plate was inoculated with 100 µL of pure preparation (909.09 µL/mL). The second well of the microtiter plate was inoculated with 100 µL of pure preparation and represented a stock solution that contained 100 µL EO + 100 µL broth (454.54 µL/mL). Afterwards, serial doubling dilutions of the tested EOs were prepared in a 96-well microtiter plate well (Hillium) over a range of 454.54 to 56.81 µL/mL (Table 1). Finally, 100 µL was removed from the last well of the microtiter plate. Then, 10 µL of bacterial suspension was added to each test well. The final volume in each well was 110 µL/mL and the final bacterial concentration was 10 6 CFU/mL. The plate was incubated for 24 h at 37 • C. The same tests were performed simultaneously for growth control (MHB + test organism), sterility control I (MHB +preparation), and sterility control II (MHB). The growth of microorganisms was determined by adding 10 µL at 0.01% of the resazurin solution (HiMedia). The plates were incubated at 37 • C for 24 h (in darkness). The change in color from blue (oxidized) to pink (reduced) indicated the growth of bacteria. The minimum inhibitory concentration (MIC) was determined as the lowest concentration of the final preparation that prevented the transition of oxidated to the reduced form of resazurin and was determined by cultivating 100 µL of solution from each well of the microtiter plate in Mueller-Hinton agar (MHA, HiMedia) [36]. The plates were incubated at 37 • C for 24 h. The minimum bactericidal concentration (MBC) was defined as the lowest concentration of the final preparation solution at which 99.9% of inoculated bacteria were killed.
Therapeutic/Experimental Protocol
EO-based intramammary formulation with previously suggested in vitro antimicrobial effect was tested in vivo on cows with mastitis. Animals with a positive diagnosis of mastitis (n = 55) were chosen in the present experiment for in vivo tests. Formulation was administered intramammarily, twice a day, after milking, in 15 mL volume, for 5 consecutive days. The formulation contained EOs of oregano, mountain savory, and common and wild thyme in different concentrations. The milk and blood samples were obtained before treatment, as well as 12 h and 24 h after the treatment. Blood samples were collected in citrate-containing vacutainers, centrifuged, and the obtained blood plasma was kept at −20 • C until analyzed. Milk samples were also kept at −20 • C until analysis.
Method Development and Validation
Chemical standard substances of thymol and carvacrol (Sigma-Aldrich, St. Louis, MO, USA) were dissolved in acetonitrile to obtain stock standard solutions in a concentration of 100 µg/mL. Stock standard solutions were diluted with acetonitrile in order to obtain working standard solutions (10 µg/mL) which were used for preparing calibration standard solutions containing thymol and carvacrol in concentrations ranging from 0.067-6.67 µg/mL and ketamine hydrochloride (internal standard) in a concentration of 0.5 µg/mL. The prepared calibration solutions were analyzed via gas chromatographymass spectrometry (GC-MS) instrumental technique (7890B GC System, 5997A MSD; Agilent Technologies, Waldbronn, Germany). The compounds of interest were separated on HP-5 ms (30 m) capillary column, where 1 µL of sample was injected in splitless mode at the inlet temperature of 260 • C. The starting oven temperature was 50 • C and held for 1 min after which the temperature was raised to 165 • C at a rate of 30 • C/min and held for 5 min. The second ramp was set at 195 • C at a rate of 9 • C/min and held for 10 min, while the third ramp was set at 280 • C at a rate of 40 • C/min and held for 7 min. The MSD transfer line was set at 280 • C, and the total run took 32 min. The obtained chromatograms were monitored in SCAN (m/z: 50-330) and SIM modes (m/z: 91, 135, 150, 180, 182, 209). The analytical method has been validated in terms of selectivity, linearity, precision (repeatability and reproducibility), accuracy, limits of detection (LOD), and quantification (LOQ). The selectivity of the method was assessed based on the chromatograms of the calibration solution containing thymol and carvacrol, as well as real samples. Linearity was estimated by least squares regression analysis of the results obtained for calibration curves (at 8 points, obtained in duplicates) ranging from 0.067-6.67 µg/mL. The intra-and inter-day (n = 3) precision were evaluated by analysis of independently prepared samples of different matrix types. Accuracy was determined by spiking real samples at three different concentration levels (0.5, 3, and 6 µg/mL). The LOD and LOQ were estimated by injecting previously spiked samples (0.1µg/mL), which naturally did not contain thymol and carvacrol.
Preparation of Samples
After thawing, the appropriate amount of biological sample (3 mL of milk or 1 mL of blood plasma) was accurately measured in conical tube, saturated with ammonia sulfate, closed with a rubber stop, and vortexed for 3 min. After that, 1.5 mL of ketamine hydrochloride solution in diethyl ether (c = 0.5 µg/mL) was added to the tubes and vortexed for another 5 min. The samples were then centrifuged (10 min, 3900 rpm) and diethyl ether extracts were transferred to an evaporating dish for gentle removal of solvent in air stream. The dry residue was dissolved in 1.5 mL of acetonitrile and analyzed through a GC-MS instrument according to previously described conditions.
Phyto-Bomat preparation was extracted with ketamine solution in acetonitrile (c = 0.5 µg/mL) and analyzed via described GC-MS technique.
Data Analysis
All of the obtained data were analyzed with Microsoft Office Excel v 2019. And Statsoft Statistica v12.5. The results were processed by means of descriptive statistics, while differences between concentrations of quantified compounds (thymol and carvacrol) in relation to evaluated time points were assessed through application of ANOVA followed by post-hoc Tukey HSD test. The differences were considered significant if p < 0.05.
Bacteriological Testing of Milk Samples
The current study revealed a predominance of Escherichia coli, Streptococcus spp., and
Antimicrobial Activity of EO-Based Pharmaceutical Formulation
The minimum inhibitory concentrations (MICs) and minimal b concentrations (MBCs) of EO-based formulations against mastitis-associated are presented in Table 1. The EO-based formulation exhibited antimicrob against the tested mastitis-associated bacteria. The MIC of the formulation fo bacterial species ranged from 22.72 mg/mL to 45.4 mg/mL, while the lowest M were found for E. coli, Streptococcus spp., and Staphylococcus spp. strains. determined for the EO-based formulation ranged from 45.4 mg/mL to 90.09 m
Method Validation
The analytical method for the simultaneous determination of thymol an in biological matrices such as milk and blood plasma was set up and Chromatograms of calibration standards ( Figure 3) and samples belonging types of matrices confirm the selectivity of the applied analytical method. Th the method validation are presented in Table 2.
Antimicrobial Activity of EO-Based Pharmaceutical Formulation
The minimum inhibitory concentrations (MICs) and minimal bactericidal concentrations (MBCs) of EO-based formulations against mastitis-associated pathogens are presented in Table 1. The EO-based formulation exhibited antimicrobial activity against the tested mastitis-associated bacteria. The MIC of the formulation for the tested bacterial species ranged from 22.72 mg/mL to 45.4 mg/mL, while the lowest MIC values were found for E. coli, Streptococcus spp., and Staphylococcus spp. strains. The MBCs determined for the EO-based formulation ranged from 45.4 mg/mL to 90.09 mg/mL.
Method Validation
The analytical method for the simultaneous determination of thymol and carvacrol in biological matrices such as milk and blood plasma was set up and validated. Chromatograms of calibration standards ( Figure 3) and samples belonging to different types of matrices confirm the selectivity of the applied analytical method. The results of the method validation are presented in Table 2.
Thymol and Carvacrol Quantification
The quantified amounts of thymol and carvacrol in the applied Phyto-Bomat preparation were 12.58 ± 1.23 mg/mL and 23.11 ± 2.31 mg/mL, respectively. Furthermore, the results of the thymol and carvacrol quantification in evaluated biological samples are presented in Supplementary Table S1, while Figure 4 shows the trends of these monoterpenes' accumulation in milk and blood plasma.
Thymol and Carvacrol Quantification
The quantified amounts of thymol and carvacrol in the app preparation were 12.58 ± 1.23 mg/mL and 23.11 ± 2.31 mg/mL, respecti the results of the thymol and carvacrol quantification in evaluated bio presented in supplementary material 1, while Figure 4 shows th monoterpenes' accumulation in milk and blood plasma.
Based on these data, it can be seen that both compounds peaked w after IMM dosing with Phyto-Bomat, and then declined relatively rap milk (Figure 4). Regarding milk samples, there were statistically signif the concentrations of thymol (F(2, 42) = 25.547, p = 0.000) and carvacro p = 0.000) during the evaluated time points, whereas the post-hoc ana
Discussion
Even though the treatment of bovine mastitis still relies on the use of antibiotics, both for prophylaxis and therapy, their use is questioned because of an increase in the number Based on these data, it can be seen that both compounds peaked within the first 12 h after IMM dosing with Phyto-Bomat, and then declined relatively rapidly in plasma and milk (Figure 4). Regarding milk samples, there were statistically significant differences in the concentrations of thymol (F(2, 42) = 25.547, p = 0.000) and carvacrol (F(2, 42) = 14.882, p = 0.000) during the evaluated time points, whereas the post-hoc analysis indicated that the levels in samples collected 12 h after the treatment were the cause of these recorded differences. Furthermore, within about 24 h for both compounds, the same levels as before treatment were obtained. Similarly, thymol (F(2, 42) = 129.35, p = 0.000) and carvacrol (F(2, 42) = 92.655, p = 0.000) concentrations fluctuated in the plasma, with the exception that in the plasma samples, even after 24 h certain levels were still detectable.
Discussion
Even though the treatment of bovine mastitis still relies on the use of antibiotics, both for prophylaxis and therapy, their use is questioned because of an increase in the number of resistant strains as well as residues of antibiotics in milk for human consumption [7,37,38]. Aiming to solve the problem of antibiotic resistance in bacteria, many attempts have been made to investigate the EOs' effectiveness against mastitis-associated bacteria. Furthermore, since antimicrobial resistance poses a major threat to public health worldwide, issues related to antimicrobial use in dairy production systems are currently in focus. This implies that research should focus on the development of innovative, alternative approaches, such as using EOs. The therapeutic effects of EOs have been addressed in in vivo [13,33,39] as well as in vitro studies [4,34,[40][41][42] evaluating the antimicrobial efficacy of EOs against a vast number of mastitis-associated pathogens in dairy cows.
In order to evaluate the in vitro antimicrobial efficacy of the proposed EO-based formulation (Phyto-Bomat) we have isolated the causative agents of bovine mastitis on two dairy farms where E. coli was the most prevalent (20%). This is in agreement with other research results [22,43,44] since E. coli are the most frequently isolated bacteria belonging to dairy farms with intensive systems of milk production [37]. In addition, many researchers described Streptococcus spp. strains as the major or minor bovine mastitis-associated pathogens worldwide [37,45]. Our research results show a high prevalence of these bacteria on Farm A and Farm B at 17% and 12%, respectively.
The analysis of EOs' compositions is essential to confirm the presence and concentration of the active compounds responsible for EOs' properties [46]. During the evaluation of the chemical composition and the antimicrobial activity against the most common mastitis pathogens of the EOs of oregano, mountain savory, and common and wild thyme, it was determined that carvacrol and thymol were the most abundant compounds and principally responsible for biological activity in these studies [4,34,46,47].
In the present study, the in vitro antibacterial activity of the proposed formulation (Phyto-Bomat) was tested. The results of the MIC and MBC indicate that the formulation has the highest antibacterial activity against Gram-positive strains. This finding is consistent with the literature data, where it is stated that Gram-negative bacteria have a lower susceptibility to EOs in comparison to Gram-positive bacteria [48,49]. The lower susceptibility of Gram-negative bacteria is explained by the difference in the cell wall structure, which limits the diffusion of hydrophobic compounds through the lipopolysaccharide envelope [16]. Comparing the obtained MIC values from the mixture, it is evident that higher concentrations were required to inhibit P. mirabilis, S. marcescens, S. uberis, and K. oxytoca isolates. Moreover, the results obtained in the present study are in accordance with our previous research results. Actually, the EO mixture has strong antibacterial activity in vitro, as do individual EOs obtained from common and wild thyme, oregano, and mountain savory [4,34,42].
When it comes to the proposed pharmaceutical EO-based formulation chemical composition, thymol and carvacrol were the most abundant compounds at 12.58 ± 1.23 mg/mL and 23.11 ± 2.31 mg/mL, respectively. Hence, the high antibacterial activity of the proposed formulation could be due to the high content of these compounds [27,46]. Thymol and carvacrol are known to be particularly active against microorganisms because of their phenolic structure, which can disrupt the cell membrane of microorganisms [50]. Both carvacrol and thymol are structural isomers that differ in the position of the hydroxyl group on the phenolic ring. The addition of the hydroxyl group makes them more hydrophilic, which could cause them to degrade and dissolve in microbial membranes [49]. Moreover, compared to carvacrol, thymol has a similar antimicrobial activity, even though its hydroxyl group is located in a different position. In addition, similar to carvacrol, thymol's antimicrobial activity causes changes in the cytoplasmic membrane's structure and function, which can harm the outer and inner membranes. It can also interact with intracellular targets and membrane proteins. Thymol's interaction with the membrane alters the membrane permeability and causes the release of ATP and K+ ions [51,52]. Thymol integrates within the lipid bilayer's polar head groups, inducing cell membrane alterations. In contrast to the efficiency of monoterpenes with added oxygen molecules carvacrol and thymol, monoterpene hydrocarbons p-cymene and γ-terpinene used separately do not show a remarkable inhibitory effect against bacteria [53,54].
Some blends of EOs showed more remarkable effectiveness than the single oils, highlighting a synergistic effect in relation to the phytocomplex [23]. Different types of components in the combination may affect multiple biochemical processes in the bacteria, improve the bioavailability of the combined agents, overpower the drug resistance mechanisms of bacteria, and neutralize the adverse effects of the components [55]. Moreover, some studies demonstrated stronger antimicrobial activities of EO mixtures, as compared to when they were used alone [56,57]. In most of the studies, the evaluation of carvacrolthymol combinations showed an additive effect expressed through fraction inhibition concentration [54,58,59].
The presence of antimicrobial residues in milk is one of the biggest challenges of the food and veterinary industries worldwide since they could interfere with the production of dairy products and may cause hypersensitivity and resistance to microorganisms in humans [11]. For this reason, appropriate scientific data about how long residues remain in edible animal products are needed in order to obtain safe products of animal origin [60].
To the best of our knowledge, this is the first study where the withdrawal period of an EO-based pharmaceutical formulation in bovine mastitis treatment is studied. Withdrawal periods must be determined by studying residue depletion for a veterinary medicinal product when the target species is a food-producing animal [33]. It is expected that the compounds most abundant in plant species can be found in milk as well as in meat.
Although EOs are considered safe for human and animal consumption, negative effects linked to their use are still possible. In particular, EOs could confer an undesirable odor or taste to milk or dairy products because of their low threshold of detection [50]. Some products are already used in organic dairy cattle, but so far no scientifically based data on the withdrawal time of these plant extracts are present in the literature.
In our assay, two major chemicals (thymol and carvacrol) were identified in the milk and blood plasma of treated animals. Our research results show that administration of the proposed formulation (Phyto-Bomat) results in minimal milk residues of thymol and carvacrol, which, after 24 h return to the same level as before application. In the study conducted by McPhee et al. [33], blood and milk samples from dairy goats were analyzed for thymol residues after intramammary injections of an EO-based formulation. Residues of thymol in milk samples were only detectable 12 h post-infusion, while in plasma, thymol was detectable 15 min post-treatment up to 4 h post-infusion. On the other hand, in our study, thymol and carvacrol were detectable in plasma samples even after 24 h posttreatment. It should be taken into account that different amounts of thymol and carvacrol are present in the formulation proposed in our research and in the formulation given by McPhee et al. [33].
In general, substantial research work is needed to assess the efficacy, safety, and benefit-risk ratio of the proposed phytotherapy. It is essential to acquire data on residues, in particular when assessing consumer safety.
Conclusions
Globally, the problem of escalating microorganism resistance to the currently available antimicrobials has opened up the need for new research to find more potent treatments with a broad range of activity. Our results have demonstrated that the tested mixture of EOs exhibited antimicrobial potential against the most frequent mastitis-associated pathogens. It can also be concluded that the activity was more pronounced against Gram-positive bacteria than Gram-negative bacteria. This research should help to clarify the application of these EOs for the treatment of mastitis in the future.
Quantifying thymol and carvacrol residues in the plasma and milk of cows treated with the proposed formulation provided valuable information in terms of food safety issues. Hence, our further research results will be focused on testing the in vivo antimicrobial efficiency and clinical efficiency of the proposed EO-based formulation.
Data Availability Statement:
The data used to support the findings of this study are available in the present manuscript.
Conflicts of Interest:
The authors declare no conflict of interest. | 6,153 | 2022-12-01T00:00:00.000 | [
"Medicine",
"Environmental Science",
"Agricultural and Food Sciences",
"Chemistry"
] |
A Tour Towards the Various Knowledge Representation Techniques for Cognitive Hybrid Sentence Modeling and Analyzer
ABSTRACT
INTRODUCTION
Artificial intelligence (AI) incorporates the intelligence of a human in the machine. Basically, AI is the branch of science which makes the machine exhibit intelligence as human beings for a particular domain. In 1950s, Alan Turing, the British Mathematician presented a paper on Computing Machinery and Intelligence that made an inquiry that if a machine could pass a certain test known as the Turing test, then system could be an intelligent. In this paper, Turing also considered a number of arguments for and the objections to the idea that computers could exhibit intelligence. McCarthy and Hayes (1969) state that a machine is intelligent if it solves/ perform/reason certain classes of problems requiring intelligence in humans. Other definitions for AI were also proposed like "AI is the part of computer science concerned/related with designing computer systems that exhibit the characteristics we associate with intelligence in human behavior". Charniak and McDennott (1985) states that "AI is the study of mental faculties through the use of computational models", whereas Yousheng Tian, et al (2011) states that it is the study of Cognitive Science. Mylopoulos (1983) presented a brief description on terminology and issues related to KR.
The research in this field is divided into two categories, KR and General (Learning, Planning, etc.). For making the computer or machine exhibit intelligence as human, the user will require two things, KR and Inference Mechanism. Development of an AI system has been a crucial task since at certain times incomplete information is available which can be ambiguous and uncertain. Hence, the solution to these problems is to build an effective knowledge base and an effective Inference Mechanism. Existing KR tools are used to represent either declarative knowledge or procedural knowledge, but not both. This has become the principal Tanwar) 125 motivation for designing and developing a system to represent both the types of knowledge for the purpose of reasoning, a QAS should be embedded with the system so that users can easily retrieve the facts. The survey related to work is divided into the following categories: 1. Knowledge representation 2. Knowledge representation techniques 3. Hybrid knowledge representation system / languages 4. Other knowledge representation languages 5. Question answering system
KNOWLEDGE REPRESENTATION
Many of the problems in AI require extensive knowledge about the world. Objects, properties, categories and relations between objects, situations, events, states and time, causes and effects are the things that AI needs to represent. All the above defined things could be represented by the current KR System. In AI, for a specific domain system, it must have a knowledge base and various techniques for representing the knowledge by Brewster et al (2004). The KR performs the three major tasks: Acquisitions, Reasoning, Searching ( Figure 1).
Ontologies
Ontology is the study of Real World Entities and how these entities can be grouped / relates to each other and arrange them in a hierarchy shown in Figure 2 and then Categorize them according to the similarities and the differences presented by Davis et al (1993). Ontology explicitly represents the knowledge to make it easily accessible to machines. The terms which are not represented explicitly might be used with common sense. It binds the people/entities in a community based on the conceptualization. The complexity of conceptualization is proportionate to the size community. Ontology is helpful in the domain specific knowledge representation. Symbols significantly represent the concepts and their relations. Ontology comprises of statements, referred as an axiom. Ontology can be used for information integration, information retrieval, content management, architecture engineering and construction [1].
Ontology provides the following: 1. Semantically rich axiomatization of domain knowledge 2. An ontology captures domain knowledge 3. Reasoning about domain knowledge Ontologies are divided into three main categories: 1. Top level ontologies are the fundamental ontologies and describe the abstract and general concept of domain specific knowledge 2. Domain ontologies and task ontologies represent knowledge within a specific domain of discourse, for example medicine or geography, or the knowledge about a particular task, such as diagnosing or configuring. 3. Application ontologies provide the specific vocabulary required to describe a certain task enactment in a particular application.
Description Logic
Description logic (DL) represents the knowledge in coherence with KR languages. DL is more expressive than propositional logic stated by Enrico (2004) [3]. In AI and KR, it could be put to utilization for reasoning for a particular domain and is mainly used for medical knowledge mentioned by Baader et al, (2003) [4] . The main difference between First order logic (FOL) and DL are: FOL represents class/objects and their properties (predicates) whereas DL represents the concepts and its role. DL consists of a Terminological box (T Box) and Assertional box. Terminological box represents the concepts and concept hierarchy and Assertional box (A Box) represents the relationship among concepts. As T Box represents the concepts, hence the complexity of the T Box can very much affect the performance of decision making in DL. YAK, KRIS, CRACK and KODIAK are description logic based systems [5].
Bayesian network
A Bayesian network represents and reason about an uncertain knowledge. The set of variables, X = X1, X2, X3…….Xn, are represented by nodes and directed arcs/links connects the nodes. The connection between nodes represents the dependencies between variables shown in Figure 3. Conditional probability represents the strength of association between variables. The reasoning in Bayesian network can be diagnostic reasoning which is reasoning from symptoms to cause and predictive reasoning that is reasoning about causes to the new [6]. The nodes in Bayesian network include the following. 1. Propositions are represented by the Boolean nodes that can have binary value (True or False). 2. A node can have ordered value like speed (slow, medium, high). 3. A node can have integer values, for example age of a teenager may have possible value from 1 to 18.
Fuzzy Logic
In 1965, Lotfi Zadeh introduced the multi-valued logic, i.e. fuzzy logic, which extended the range of truth values to all real numbers in the interval between 0 and 1 whereas in case of crisp set the truth values are Tanwar) 127 defined as either 0 or 1. For example, the possibility that the sun is shining when there are some clouds in the sky might have a set to a value of 0.7. It is likely that the sun is shining. Basically, fuzzy logic is the way to represent the expert knowledge that uses vague and ambiguous terms. Fuzzy logic is a set of mathematical principles for KR based on the theory of fuzzy sets, sets that calibrate vagueness and degrees of membership. Generally, the membership functions used to represent a fuzzy set are Sigmoid, Gaussian and pi presented by Zadeh (1965).
Fuzzy set theory in comparison with first order logic resembled the human reasoning in its use of approximate/estimated information and uncertainty to generate decisions. So, to depict the knowledge in a more precise way, fuzzy logic is the way which is designed to mathematically represent uncertainty and vagueness and it provide formalized tools for dealing with the imprecision built-in to many problems. Since knowledge can be expressed in a more natural way by using fuzzy sets, many business, Industrial, engineering and decision problems can be greatly simplified. In the fuzzy theory, fuzzy set A of universe X is defined by function µA (x) called the membership function of set A described by Zadeh (1965) Here, for any element x of universe X, membership, function µA (x) equals the degree to which x is an element of the set A. This degree, a value between 0 and 1, is used to represent the degree of membership, also called membership value, of element x in set A. The knowledge using fuzzy logic is represented by the production system by Esragh and Mamdani (1981) [8].
Semantic Web
KR could represent knowledge on web with semantic web technology. More specifically, it is the web of data. It is not a separate web whereas it is an extension of the current one, with well-defined meaning of information which enabled computers and people to work in cooperation presented by Heflin (2001). Original web interchanges the data, whereas semantic web records the data so that it relates to the real world objects. Semantic web depends on the ability to associate formal meaning with contents. KR is a medium to design the semantic web language that provides meaning for the data on the web. In case of semantic web, KR helps in data integration, management, conceptualization and retrieval. As shown in Figure 4.
Conceptual Graph (CG)
In 1992, Sowa presented a graph known as a conceptual graph (CG). CG is a graph of logic based on the semantic networks. In addition to concept node the relations between concepts are also represented by the nodes known as relation node presented by Sowa (1992). The Concept can be concrete like human beings, animals, places, etc. or abstract like Feelings, senses, etc. CG can be finite, connected, bipartite graphs shown in Figure 5. Single CG can be corresponding to single proposition. Each node in CG must be unique and labeled by the type which is used to represent a class / object. All complex sentences were translated into small primitives before representation and then the each primitive CG joined to form the CG of the whole complex sentence mentioned by Guy et al (1993) mentioned that CG can be used for knowledge acquisition, reasoning, information retrieval, etc.
Hybrid knowledge representation techniques
Every KR techniques have their own demerits and merits, depending upon which type of knowledge user requires for representation. For adequate representation, the user needs different types of structures. To navigate the problem associated with the single KR technique, the hybrid KR has evolved. This section presents existing hybrid KR techniques.
Krypton
Ronald, Brachman and Richard (1983) developed Krypton, a HKR system in which KR was separated by two sections / two boxes called terminological box (T box) and assertion box (A box) mentioned by Ronald J. et al (1983) [10]- [12]. The TBox has the structure of KL-ONE in which terms are organized taxonomically, using frames an ABox has used the first-order logic sentences for those predicates which come from the TBox and a symbol Table maintaining the names of the TBox terms so that a user can refer to them. It is like a Tell-Ask module (Saffiottiand and Sebastiani, 1998). All interactions between a user and a Krypton knowledge base was mediated by TELL and ASK operations, Jacques and Indra (1994), shown in Figure 6. It was extensively used for representing the declarative knowledge [10], [11], [13].
OBLOG 2
In 1987, Thomas F. Gordon proposed the idea of Oblog 2. The Oblog stands for Object-oriented Logic and is an experimental HKR and reasoning system. Oblog 2 is a hybrid of a terminological reasoning with a Prolog inference mechanism. The description of type and attribute taxonomies is supported by terminological component, whereas entities are instances of a set of types. Horn clause rules act as procedures, for determining the values of attributes are indexed by its type presented by Thomas (1987).
LOOM
Gregor (1987) implemented the LOOM HKR also belongs to the KL-ONE family was an intelligent system. The Loom is an application independent HKR system and is a classification based KR system. It is a frame based system and all the statements in LOOM were mapped into the predicate logic, Fikes and Kehler (1985). Loom provided an intelligent environment for the problem domain. Loom represents the declarative knowledge consisting of rules, facts, and default rules. A powerful reasoning mechanism is embedded by the system known as deductive engine / classifier presented by MacGregor and Robert [14]. Classifier in LOOM was based on the unification, forward chaining and object oriented concepts.
The capabilities of LOOM are: 1. Well defined semantics for the given language.
FRORL
Frame-and-Rule Oriented Requirement specification Language (FRORL) was developed by Jeffrey J. P. Tsai, Thomas Weigert and Hung-Chin Jang in 1992. FRORL is based on the concepts of frames and production rules which is designed for software requirement and specification analysis. There are two types of frames: object frame and activity frame. Object frames represent the real world entity not limited to physical entities. Frames in FRORL behave like a data structure. Each activity in FRORL is represented by activity frame to represent the changes in the world. Activity, precondition and action are reserved word not required in the specification. FRORL consist of Horn clause of predicate logic mentioned by Jeffrey et al (1998) [15].
RT-FRORL
RT-FRORL is the extension of FRORL proposed by Jeffrey J.P. Tsai, Mikio Aoyama, and Y.L. Chang in 1988. RT-FRORL inherits FRORL's basic structure, but also includes those language constructs needed to support the specification of real-time systems. The syntax of RT-FRORL is based on frames and production rules as in FRORL but the first order logic and temporal logic defines the semantic of RT-FRORL. RT-FRORL easily specifies the concurrent and absolute time properties of real time systems. A requirements model specified in RT-FRORL consists of two frame types: objects and activities. Each real world entity is modeled as an object. Changes taking place in the world are represented in the requirements model as activities. Each object and activity has certain properties, assumptions or constraints associated with it that is integrated into a frame representation. The syntax for an object frame is shown below (Jeffrey et al, 1991), [15] [16]. Object-name Abstract relation: Parent name Attribute -name-1: Value-l Attribute -name-1: Value-2 Attribute name-1: Value-n Each slot in the object frame corresponds to a particular property of the object and has a value set associated with it from which it can draw a value. The syntax for an activity frame is shown in Figure 2. Table 2. Frame objects and their properties Activity: Activity _name( at tr _value-l ,.. . ,attr -value-n) Abstract-relation: Parent name Part: Attr-value-1: value-1 at t r _value -n : value _n Preconditions: Conjunction or disjunction of various activities or facts Actions: Sequence of activities or facts alt_actions: Sequence of activities or facts In the above mentioned Table 1 and 2, the part, slot describes the objects participating in the activity, or their attributes. The precondition slot holds constraints to be satisfied prior to the execution of the actions in the next slot. If the preconditions are not satisfied, the actions outlined in the alt_actions slot will be performed. The RT-FRORL is developed to express the real time constraints. They illustrated examples on how RT-FRORL could be used to specify such systems. The temporal logic foundations used to reason about RT-FRORL were presented and typical assertions relevant to real-time systems were discussed by Jeffrey et al (1991) [16].
MANTRA
(MANTRA) stands for Modular Assertional, Semantic Network and Terminological Representation Approach. This work consists of four different KR techniques in designing a hybrid system integrating first order logic, terminological languages, semantic networks and production systems. The system architecture consists of three levels: (i) The Epistemological level, where the semantic primitives of the representation are defined, (ii) The Logical level, where the knowledge base management, the inference procedures and the interactions between the different epistemological methods are defined, and (iii) The Heuristic level, where ad hoc strategies could be introduced to improve the efficiency of execution by Calmet et al in1991 [17]. The semantics of the system has been formally defined using a model-theoretic formalism. The main contribution of this work is: (i) A methodology to define the semantics of KR methods. (ii) The integration in a multi-level architecture, of first-order logic, terminological reasoning, inheritance with exceptions and heuristic programming. The system has been implemented in Kyoto Common Lisp (KCL), a complete implementation of the standard Common Lisp, together with an object oriented extension called Common ORBIT stated by (1987). The use of an object oriented programming tool increases the modularity of the system and makes it easy to modify. The interface has been developed using KYACC-KLEX, an interface between KCL and the compiler constructor YACCand LEX environment (Vigouroux, 1988), (Johnson, 1987), (Lesk, 1975) [18], [19]. The use of this environment makes the interface very easy to modify and to adapt to the changes in the syntax of the representation language described by Schneider (1986) [19].
The system presents two interface languages: one interactive interface based on menus and a programming interface allowing the use of the system primitives inside Lisp programs. The programming interface facilitates the integration of the system with other systems written in Common Lisp and to develop the not-yet-implemented a heuristic level of the architecture.
To facilitate the interconnection between the different methods a single data abstraction has been adopted mentioned by Rajeswari and Prasad (2012). This data abstraction consists of a set of directed graphs. Directed graphs subsume several of the most commonly used data structures and are also suitable to be used in an interactive system due to their inherent graphical character. The system Grasp, a graph manipulation package, has been adopted as the programming tool implementing this data abstraction stated by Bittencourt (1984). The base of all inference procedures implemented into the system is the unification function. A special unification package has been implemented in Common Lisp. The adopted algorithm is the almost linear algorithm mentioned by Martelli and Montanari (1982), Calmet et al (1991a and1991b) [16] [20].
SOL
SOL (Smart Object Language) defines the smart object model for the design of complex knowledge based systems described by William et al (1995) [21]. The paradigm provides mechanisms for both HKR and multiple inference strategies.
Central to the paradigm is the concept of smart objects, engineered artifacts that combine a high level object structure with a rule based lower level language. The concept of smart objects was developed using criteria that evolved in the development of a large, complex knowledge based system. They present these criteria as desirable characteristics for KR s in general and use them to evaluate traditional knowledge and smart objects. An overview of a prototype knowledge base system (KBS) was implemented using the smart object paradigm and making its benefit concrete. In their work, they presented the overview of object based and smart object base KR and presented the hybrid KR technique with smart object and rule based system. Smart objects are a tool for building systems in which inferential processes are an integral part of a broader design. Smart objects have an internal structure that partitions the knowledge contained in them and determines the interactions, the behavioral characteristics of a system of objects. The four elements of a smart object are a) Method, b) Interface, c) Attribute and d) Monitor. The smart object paradigm is successful in meeting the design criteria proposed for KRs used in the design of complex KBS'. It structures and captures domain and application knowledge in the form of smart objects: encapsulations of data and states enhanced with a production system like Rule Language. Smart objects have been conceptualized with an internal structure to aid in the design of complex KBS. The structure explicitly provides for meta-control of both the reasoning strategy and the structure and control flow of the implemented system.
The central methodological concept of the paradigm is the division of knowledge of a modeled environment into a domain component and an application component. The domain component is a potentially reusable base of knowledge that is common to a class of environments or problems. The domain component could represent behavior and structure common to all nuclear power plants governed by common Nuclear Regulatory Commission procedures and manufacturing assembly lines using common workflow architecture. The application component is the knowledge specific to an instance of the class.
AAANTS
Chirminda et al (2003) developed a hybrid KR system called AAANTS (Adaptive, Autonomous, Agent colony interactions with Network Transparent Services). The AAANTS model was a multi-agent system that conceptualizes and implements a colony of agents that actively interact with a collection of distributed services in order to give adaptive behavior. AAANTS was modeled on an intelligent environment related projects and built a prototype for an intelligent room that actively adapted the environmental conditions based on user behavior patterns. The KR was based on Frames that harmonize with continuous adaptation based on Reinforcement Learning techniques described by Singhe, et al (2003) [22]. AAANTS KR methodology has combined frame-based Uniframe and Accumulators to complement the learning achieved through Reinforcement Learning (LR) techniques. Agents keep frames representing the different states of activation. Each state relates to a value function that indicates the expected future rewards that initiates from this state. A correct mapping of a state signal from the environment would trigger an action of the highest expected reward.
131
AAANTS is a multi-agent system where each colony is exhibited, as a group of heterogeneous agents distinguished by their differences in ontology, behavior, knowledge and goals. AAANTS is a general-purpose hybrid agent model that has the capability to interact with ubiquitous services embedded in the environment. AAANTS model has shown remarkable improvements over other functional monolithic gent models in terms of adaptability and knowledge component reusability. The core implementation is based on a component based distributed framework. The agent components in the AAANTS model are designed to interact with information sources from heterogeneous domains. They model the information sources as heterogeneous services that actively interface with the core implementation with the help of message based communication middleware.
In the AAANTS architecture, an adaptation layer acts as the sole communication medium in a typical insect colony such as the ants. The implementation has proven that the use of the adaptation layer functionality for interfacing has helped to overcome the conflicts faced with communication between agents and services. It was apparent from the implementation that the adaptation layer excludes the need for brokering and matchmaking services that are present in traditional deliberative architectures.
The AAANTS framework implementation is a distributed component based model that facilitates the well-being of a myriad of agent components. The framework was successful in providing services such as lifecycle management, agent reproduction, colony evolution, fault tolerance, load balancing and mobility to the agent components. Another advantage of the framework is the separation of common and redundant functionality of agent components to a single layer for common usage. This AAANTS model has succeeded in distributing knowledge and linear sequencing actions within an episode among several agents. The user ability to reward an individual action within an episode has enabled to properly adjust the value function of a state so that the sequence of actions adapts to the optimal pattern over a period. AAANTS model helped them to observe emergent behavior similar to that of a natural ant colony. These agents sense the environment and communicate with others using primitive message constructs to offer emergent adaptive behavior as a community (Christof, 1991) [23].
Extended Semantic Network
Extended Semantic Network (ESN) by Reena et al (2006) was the proposed HKR model to overcome the difficulties faced in the application area of information recovery and categorization in the current era of information overflow. The ESN prototype is a new proposed method for KR for easy ontology construction which can be employed in new generation search algorithm to facilitate information management, retrieval and sharing. This prototype enables easy construction of conceptual networks. Unlike NLP techniques, no heavy computations are required.
In order to develop new networks, a need for a set of documents related to that particular topic is required. These documents are needed to be for that only input into the proximal network (PN) program. Then automatically a network of nodes has been developed called the word network. This network contains all the different words that can be found in the input data related to the domain. Thus forming a recall process network, the network is then combined with the semantic network and the structure got restricted to 50 nodes.
Semantic network is essentially the precision network where the nodes are placed in the network with the help of expert knowledge. For constructing the extended semantic network they extend the precision model by adding nodes from the recall model at all possible and required positions. ESN was the hybrid of two networks to provide the extended semantic network. 1. First, is a PN model which involves three phases of processing, first the pre-treatment process where the documents related to the domain are analyzed in two stages and an output of word document matrix is obtained. This matrix is then passed on to the intermediate process and is analyzed by the data mining and clustering algorithms, namely k-means clustering, principal component analysis and word association obtain an output of word pair matrix with a value between each word pair. This value is the proximity between the word pair in the projected space depending on their occurrence in the contents of the documents processed. This data are further subjected to post-treatment process where partial stemming is carried on the word pair matrix depending on the case based requirement. 2. Secondly the semantic network which is based on the KL-ONE model, with the domain being the center of the network, which is expatiated by the domain components which in turn define concepts using the instance and inheritance relations. They follow the scheme process where minimum required information on a domain is precisely represented using the semantic relations they defined. The model is built based on the same set of documents used in PN and the 50 most important concepts is chosen with the help of a domain expert and is put into the semantic model. Each relational link used, namely the compositional, instantiation and inheritance links are given a predefined unit during calculation. This model is then stored and can be visualized using graph editor. The objective is to introduce the semantic based relation into our mathematically modeled PN. The network thus developed is then analyzed and merged to obtain one single semantic network for that domain. This process is repeated on different lists of concepts concerning to
Substitution in CHSMA
Inheritance in the semantic net binds the general entity / concept about the problem domain to a specific entity/ concept. An example is illustrated in Figure 7. Following substitution rules were applied for inference and reasoning: Rule 1 Direct substitution: If n1 and n2 N and where n1 general class and n2 specific class than apply value/n2. For example, Poonam/ n2 i.e. assign a value Poonam to n2.
Substitution procedure
Substitution in CHSMA was used to unify the value of each variable which can be a concept/ class. Substitution worked recursively generating the possible substitution for all the variable.
Substitution (m, n) in ({N, P, U, A, R} 1 n must have m 2 reduction of (m, n) where m and n N / P and there exist a relation r1,r2, r3 from n to m in the structure S then. For all r in S do the following 1. If m = m than return m. 2. If mn then Follow associations a in set A and relation r in set R between n to m i.e. n1 to n2 to n3 and so on till nm. and returns m.
Information processing in CHSMA
Let P and S be the paragraph and story respectively chosen for representation which consists n number of sentences (s) then each sentence in the paragraph /story was processed according to the equation given below.
2) i=1
Where n is the total number of sentences in P and 1 ≤ i ≤ n.
Matching technique was applied during each sentence processing which takes a pair of nodes (j, k) those were associated/ related to each other and return the substitution for (ji, ki) for sentence.
CONCLUSIONS
It has been found that many HKR systems were developed in different domains and used for different applications. So it made sense to take the advantage of technology to serve the people so that they can learn and explore the world. The cognitive hybrid sentence modeling and analyzer (CHSMA) for the English language successfully processed the sentences, paragraphs and stories and recognized them with the average | 6,560.4 | 2018-12-01T00:00:00.000 | [
"Computer Science"
] |
Caspase 3/GSDME-dependent pyroptosis contributes to chemotherapy drug-induced nephrotoxicity
Chemotherapy drug-induced nephrotoxicity limits clinical applications for treating cancers. Pyroptosis, a newly discovered programmed cell death, was recently reported to be associated with kidney diseases. However, the role of pyroptosis in chemotherapeutic drug-induced nephrotoxicity has not been fully clarified. Herein, we demonstrate that the chemotherapeutic drug cisplatin or doxorubicin, induces the cleavage of gasdermin E (GSDME) in cultured human renal tubular epithelial cells, in a time- and concentration-dependent manner. Morphologically, cisplatin- or doxorubicin-treated renal tubular epithelial cells exhibit large bubbles emerging from the cell membrane. Furthermore, activation of caspase 3, not caspase 9, is associated with GSDME cleavage in cisplatin- or doxorubicin-treated renal tubular epithelial cells. Meanwhile, silencing GSDME alleviates cisplatin- or doxorubicin-induced HK-2 cell pyroptosis by increasing cell viability and decreasing LDH release. In addition, treatment with Ac-DMLD-CMK, a polypeptide targeting mouse caspase 3-Gsdme signaling, inhibits caspase 3 and Gsdme activation, alleviates the deterioration of kidney function, attenuates renal tubular epithelial cell injury, and reduces inflammatory cytokine secretion in vivo. Specifically, GSDME cleavage depends on ERK and JNK signaling. NAC, a reactive oxygen species (ROS) inhibitor, reduces GSDME cleavage through JNK signaling in human renal tubular epithelial cells. Thus, we speculate that renal tubular epithelial cell pyroptosis induced by chemotherapy drugs is mediated by ROS-JNK-caspase 3-GSDME signaling, implying that therapies targeting GSDME may prove efficacious in overcoming chemotherapeutic drug-induced nephrotoxicity.
Introduction
Traditional chemotherapeutic drugs, such as cisplatin and doxorubicin, are commonly used to treat various cancers, including lung, bladder, and ovarian cancer [1][2][3][4] . However, severe side effects caused by toxicity to healthy organs and tissues, particularly the kidney, limit the clinical application of these drugs 5,6 . Indeed, chemotherapeutic drug-induced nephrotoxicity reportedly occurs in one-third of cancer patients 7 , the mechanisms of which have been wide studied [8][9][10] . Tubular injury, inflammation, and vascular injury are typical characteristics of chemotherapy drug-induced nephrotoxicity, among which, tubular injury is the most critical. In the kidneys, chemotherapeutic drugs cause proximal tubular cell death, leading to acute kidney injury (AKI) 11 . However, the associated molecular mechanisms have not yet been fully characterized. Therefore, further studies are warranted for the early diagnosis and treatment of chemotherapeutic druginduced AKI.
Pyroptosis is a newly discovered form of programmed cell death with morphological characteristics that differ from those of apoptosis and necrosis 12 . Pyroptosis can be induced by activation of the executors, gasdermin E (GSDME), or gasdermin D (GSDMD), which results in the cleavage of their N-terminal fragments (GSDME-N or GSDMD-N, respectively) [13][14][15] . GSDME-N or GSDMD-N then translocate to the cell membrane and mediate cell perforation, resulting in infiltration of extracellular material, cell swelling, and pyroptosis 16 . Moderate cell pyroptosis can remove pathogenic microorganisms and antagonize infection, however, excessive cell pyroptosis not only leads to cell death but also enhances inflammatory responses, resulting in fever, hypotension, septicemia, as well as other serious symptoms 12 .
Pyroptosis is associated with diabetes, as well as infectious, metabolic, nervous, and cardiovascular diseases [17][18][19][20] . Moreover, recent studies have indicated that GSDMD-dependent pyroptosis is also associated with kidney diseases, especially AKI [21][22][23] ; hence, pyroptosis has become the focus of considerable kidney disease research. The results of these studies have demonstrated that renal tubular epithelial cell pyroptosis can accelerate ischemia-reperfusion and contrast-induced AKI. Specifically, Zhang et al. 21 found that the caspase 4/5/11 signaling pathway promotes contrast-induced AKI by inducing GSDMD-dependent pyroptosis of renal tubular epithelial cells, and caspase 11 knockout mice exhibit reduced AKI damage by inhibition of GSDMD activation. In addition, Wu et al. 24 reported that miR-155 promotes pyroptosis of renal tubular epithelial cells through caspase 1, thereby accelerating ischemia-reperfusion-induced renal damage.
The effect of GSDME, a newly defined executor of pyroptosis, has recently been reported in various cancers, with strategies targeting GSDME proposed to block pyroptosis [25][26][27] . Wang et al. 14 reported that GSDME-positive tumor cells switch cisplatin-induced cell death from apoptosis to pyroptosis, resulting in extensive inflammatory damage. In addition, GSDME knockout attenuates cisplatin-induced crypt and villi disruption and attenuates the reduced spleen weight and lung injury. Further studies found that caspase 3, an executor protein of apoptosis, serves as the primary protein responsible for GSDME cleavage and activation, implying that GSDME, in addition to GSDMD, plays an essential role in pyroptosis. However, to the best of our knowledge, the role of GSDME in chemotherapeutic drug-induced nephrotoxicity has not been reported to date.
To clarify the relationship between GSDME and chemotherapeutic drug-induced nephrotoxicity. We treated human renal tubular epithelial cells with the chemotherapeutic drugs, cisplatin, or doxorubicin, to determine the role of GSDME in cell pyroptosis. This study will provide new insights into the role of GSDME-dependent pyroptosis in chemotherapy-induced nephrotoxicity.
Cisplatin or doxorubicin induces pyroptosis of renal tubular epithelial cells
It has been reported that the typical characteristics of pyroptosis were increased LDH release, increased PIpositive cells with flow cytometry, and typical bubbles emerging from the cell membrane 14 . We treated human renal tubular epithelial cells, HK-2, with various concentrations of cisplatin (0, 5, 10, 20, and 40 μM) or doxorubicin (0, 0.5, 1, 2, and 4 μg/ml). CCK-8 and LDH analyses indicated that cisplatin and doxorubicin decreased cell viability and increased LDH release in a concentration-dependent manner (Fig. S1a-d). Flow cytometry analysis demonstrated that cisplatin dramatically increased the proportion of propidium iodide (PI) +positive cells in a concentration-dependent manner (Fig. S1e, f) Morphologically, both the cisplatin-and doxorubicin-treated HK-2 cells showed typical bubbles emerging from the cell membrane (Fig. S1g). Therefore, these data indicate that cisplatin and doxorubicin induce pyroptosis in human renal tubular epithelial cells.
Cisplatin or doxorubicin promotes GSDME cleavage in the kidney in vitro and in vivo
Cell pyroptosis can be triggered by the cleavage of the Gasdermin family proteins 13,14 . Our immunohistochemical results demonstrated that GSDME is positive in renal tubular epithelial cells of normal human kidney (Fig. S1h), which was consistent with the expression in the Human Protein Atlas. In addition, both cisplatin and doxorubicininduced the cleavage of GSDME in a concentration-and time-dependent manner ( Fig. 1A-D). We, therefore, postulate that GSDME is involved in cisplatin-and doxorubicin-induced pyroptosis of human proximal tubular epithelial cells.
We then examined GSDME cleavage in a cisplatininduced mouse model of nephrotoxicity and found that cisplatin increased serum creatinine and BUN (Fig. 1E, F). HE staining exhibited severe renal tubular epithelial cell death in cisplatin-treated mice compared to the control mice (Fig. 1G). Western blot detection indicated that cisplatin increased the cleavage of GSDME and caspase 3 activation (Fig. 1H-J).
Caspase 3 activation is associated with GSDME cleavage in cisplatin-or doxorubicin-treated renal tubular epithelial cells Recent studies have indicated that GSDME is an executor protein of pyroptosis owing to its activation of intrinsic and extrinsic apoptotic pathways 14,28 . Our results show that the levels of activated caspase 3/7/8/9, PARP, and Bax were elevated, while that of Bcl-XL was reduced in a concentration-and the time-dependent manner in response to cisplatin or doxorubicin induction. No activation of caspase 6 was observed after cisplatin or doxorubicin treatment (Fig. S2a-d).
To further verify the connection between the caspase cascade and GSDME cleavage, we firstly pretreated HK-2 cells with the caspase 3-specific inhibitor, Z-DEVD-FMK.
The results indicate that GSDME cleavage and LDH release were significantly inhibited, while cell viability was partially ameliorated following treatment ( Fig. 2A-H). Moreover, pretreatment of cells with the caspase inhibitor, Z-VAD-FMK, showed similar results ( Fig. S3a-h). We then knocked down the expression of caspase 3/7/9 in HK-2 cells (Fig. S4a-c). Morphologically, the pyroptotic features in the cisplatin-or doxorubicin-induced HK-2 cells were abrogated following caspase 3 siRNA intervention (Fig. 3A, E). Cell viability was increased and LDH release was suppressed after caspase 3 siRNA treatment ( Fig. 3B, C, F, G). The western blot results indicated that caspase 3 siRNA inhibited GSDME cleavage induced by cisplatin or doxorubicin (Fig. 3D, H). Interestingly, we found that caspase 9 siRNA did not affect the cisplatin-or doxorubicin-induced pyroptosis ( Fig. 3A-H). Caspase 7 knockdown augmented the cleavage of GSDME and caspase 3 induced by cisplatin and doxorubicin ( Fig. S4d-k), suggesting that caspase 7 knockdown induces All data are presented as means ± SD from at least three independent experiments (n = 3 for in vitro experiment; n = 6 for in vivo experiment). ***p < 0.001 versus control group, ****p < 0.0001 using two-tailed Student's t tests.
other caspase-related proteins, which may increase caspase 3 cleavages, leading to augmentation of GSDME cleavage.
Necroptosis also reportedly plays an essential role in cisplatin-induced death of HK-2 cells 29 . To distinguish necroptosis from pyroptosis, we used GSK'872 (a necroptosis inhibitor) to block necroptosis. The results demonstrated that GSK'872 did not affect the cleavage of GSDME nor prevent the typical morphology of pyroptosis ( Fig. S5a-d), implying that GSDME activation is not associated with necroptosis.
GSDME inhibition attenuates cisplatin-or doxorubicininduced pyroptosis in the kidney in vitro and in vivo
To clarify the effect of GSDME cleavage on cisplatin-or doxorubicin-induced pyroptosis in renal tubular epithelial cells, we generated GSDME knockout (GSDME-KO) HK-2 cells. The efficiency of the GSDME knockout was verified by western blot (Fig. 4A). Flow cytometry analysis indicated that GSDME-KO dramatically decreased PI + -positive HK-2 cells following cisplatin treatment (Fig. 4B, C). Morphologically, GSDME-KO decreased the pyroptotic features of cisplatin-or doxorubicin-treated HK-2 cells (Fig. 4D, G). Furthermore, CCK-8 and LDH analyses indicated that GSDME-KO increased cell viability and decreased LDH release induced by cisplatin or doxorubicin in HK-2 cells (Fig. 4E, F, H, I). However, caspase 3 cleavage was not affected in the GSDME-KO group compared to that in the empty vector (NC) group (Fig. 4J, K). Taken together, these data imply that GSDME is vital in cisplatin-or doxorubicin-induced pyroptosis in HK-2 cells.
did not affect Kim1 (Fig. 5G, H). These results imply that Ac-DMLD-CMK may protect the kidney by targeting caspase 3-Gsdme signaling in mice.
Caspase 3-GSDME signaling is involved in doxorubicininduced pyroptosis in human podocytes
Although the major targets of chemotherapy druginduced nephrotoxicity are renal tubular epithelial cells, podocytes also serve as target cells of doxorubicininduced nephrotoxicity. Hence, we also sought to detect the activation state and role of the caspase 3/GSDME/ pyroptosis axis in podocytes under the doxorubicin challenge. To this end, we first compared the expression of GSDME in renal tubular epithelial cells and podocytes derived from humans or mice. Results show that GSDME had much higher expression in humans than in mice (Fig. S6a). MP and mRTEC also showed GSDME activation following cisplatin or doxorubicin induction, although with low baseline expression (Fig. S6b, c). We then stimulated human podocytes with doxorubicin and observed decreased expression of synaptopodin, suggesting that doxorubicin can induce podocyte injury (Fig. S6d). Furthermore, doxorubicin-induced GSDME and caspase 3 cleavage in a concentration-dependent manner in human Ac-DMLD-CMK decreases cisplatin-induced renal pyroptosis in vivo. A, B Serum creatinine and BUN detected in cisplatin-treated mice before or after Ac-DMLD-CMK pre-treatment. C HE staining of cisplatin-treated mice before or after Ac-DMLD-CMK pre-treatment. Scale bar, 50 μm. D-F Western blot analysis of GSDME and caspase 3 in cisplatin-treated mice before, or after, Ac-DMLD-CMK pretreatment. G, H mRNA expression of kidney injury-related genes and inflammatory-related genes in cisplatin-treated mice before, or after, Ac-DMLD-CMK pretreatment. All data are shown as means ± SD from six independent experiments (n = 6). *p < 0.05, **p < 0.01, ***p < 0.001, ****p < 0.0001 using one-way ANOVA followed by the Tukey's method. podocytes (Fig. S6e, f). In addition, we observed that caspase 3-directed siRNA decreased doxorubicin-induced activation of GSDME, LDH release, and the number of pyroptotic human podocytes (Fig. S6g-m). We also knocked down GSDME using GSDME siRNA (Fig. S7a), and found increased cell viability, as well as decreased doxorubicin-induced LDH release and a reduced number of pyroptotic human podocytes, without impacting caspase 3 activation (Fig. S7b-h). Taken together, these results indicate that caspase 3-GSDME signaling also plays a vital role in doxorubicin-induced pyroptosis in human podocytes.
ERK and JNK signaling mediate GSDME cleavage in the kidney in vitro and in vivo Next, we aimed to explore the molecular mechanisms of cisplatin-or doxorubicin-induced pyroptosis in cultured HK-2 cells. ERK and JNK signaling reportedly play pivotal roles in caspase 3 activation 30,31 , which contributes to GSDME-dependent pyroptosis. Western blot analysis indicated that both ERK and JNK became phosphorylated in cisplatin-or doxorubicin-treated HK-2 cells (Fig. 6A, F). We then incubated HK-2 cells with the ERK inhibitor, U0126, or JNK inhibitor, SP600125 (Fig. S8a-d), to detect GSDME activation. Western blot results indicated that both U0126 and SP600125 inhibited GSDME cleavage and caspase 3 activation (Fig. 6B, G). In addition, U0126 and SP600125 increased HK-2 cell viability and decreased LDH release following cisplatin or doxorubicin induction (Fig. 6C, D, H, I). Furthermore, U0126-or SP600125pretreated HK-2 cells exhibited decreased plasma membrane bubbling compared to cisplatin-or doxorubicintreated HK-2 cells (Fig. 6E, J). Moreover, in vivo, both SP600125 (Fig. 7A) and U0126 (Fig. 7B) suppressed the increased serum creatinine and BUN induced by cisplatin (Fig. 7C, D). Meanwhile, the expression of kidney injuryrelated genes Ngal and Kim1 decreased following SP600125 and U0126 pretreatment (Fig. 7E, F). HE staining further indicated that SP600125 and U0126 alleviated renal tubular epithelial cell death compared to cisplatin-treated mice (Fig. 7G). In addition, western blot results indicated that SP600125 and U0126 decreased cisplatin-induced cleavage of renal Gsdme in mice, and caspase 3 (Fig. 7H-J), implying that ERK and JNK signaling may act as upstream regulators of GSDMEdependent pyroptosis in renal tubular epithelial cells, both in vitro and in vivo.
ROS induces GSDME cleavage through JNK signaling in renal tubular epithelial cells
The mitochondrial apoptotic pathway, which participates in the cleavage of GSDME, can be affected by ROS 28 . Thus, we speculated that ROS are also involved in cisplatin-or doxorubicin-induced pyroptosis of HK-2 cells.
Morphologically, cisplatin-or doxorubicin-induced cell bubbles were mostly inhibited by the ROS inhibitor NAC incubation (Fig. 8A, F). Furthermore, NAC increased HK-2 cell viability and decreased LDH release after cisplatinor doxorubicin induction (Fig. 8B, C, G, H). We also found that ROS levels, augmented by cisplatin or doxorubicin, were markedly attenuated by NAC (Fig. 8D, I). In addition, NAC dramatically inhibited GSDME cleavage and caspase 3 activation, as shown by western blot (Fig. 8E, J). ROS also affects mitogen-activated protein kinase (MAPK) signaling pathways 32,33 . We found that the phosphorylation of JNK induced by cisplatin or doxorubicin was abolished following NAC treatment, while ERK phosphorylation was not affected (Fig. 8K-N). Taken together, these data indicate that ROS induces the caspase 3-GSDME via JNK signaling in HK-2 cells.
Discussion
In the present study, we demonstrated that cisplatin-or doxorubicin-induced renal pyroptosis is dependent on the cleavage of GSDME, which becomes induced by caspase 3 activation. In addition, we revealed that caspase 3-GSDME activation is regulated by ROS-JNK signaling. Hence, this study furthers the understanding of the mechanism responsible for chemotherapeutic druginduced nephrotoxicity.
Earlier studies demonstrated that apoptosis and necrosis are the primary types of cell death associated with chemotherapeutic drug-induced AKI 2,34,35 . However, we observed characteristic large bubbles emerging from the plasma membrane after long exposure to cisplatin or doxorubicin in human HK-2 cells, implying the emergence of pyroptosis. Moreover, GSDME became activated in a concentration-and time-dependent manner following treatment with chemotherapy drugs, implying that chemotherapeutic drug-induced pyroptosis of HK-2 cells is GSDME-dependent.
Pyroptosis can be induced by caspase 4 and 5 (in humans) or caspase 11 (in mice) activation via GSDMD cleavage, leading to cell bubbling and the release of IL-1β 21,36-38 . Recently discovered caspase 3/GSDME signaling was also reported to be one of the signaling mechanisms for cell pyroptosis 14,28,[39][40][41] . Moreover, GSDMEpositive cancer cells reportedly undergo pyroptosis, while GSDME-negative cancer cells undergo apoptosis upon stimulation with chemotherapy drugs 14,27 . Thus, GSDME expression might determine the type of cell death that results from chemotherapeutic drug exposure. In this study, we showed that similar to other GSDME-positive cells, caspase 3 inhibition with siRNA or the specific inhibitor Z-DEVD-FMK prevented GSDME activation and subsequent pyroptosis in HK-2 cells, implying that caspase 3 is involved in HK-2 cell pyroptosis via GSDME cleavage. However, caspase 9 siRNA showed no effect on the activation of GSDME in HK-2 cells, which differed from the results of Zhou et al. 28 in colon cancer cells and Tsuchiya et al. 42 in macrophages. This discrepancy might be caused by cell-type differences. Moreover, caspase 3 inhibition has been reported to reflexively activate caspase 7 14 . Herein, we found that caspase 7 knockdown activated . Cytotoxicity and cell viability were determined using the LDH assay (C, H) and CCK-8 detection (D, I) in HK-2 cells treated with cisplatin (20 μM) or doxorubicin (4 μg/ ml) for 48 h with or without pre-treatment of U0126 (10 μM) and SP600125 (10 μM). E, J Representative light microscopy images of HK-2 cells treated with cisplatin (20 μM) or doxorubicin (4 μg/ml) for 48 h with or without pre-treatment of U0126 (10 μM) and SP600125 (10 μM). The red arrow indicates bubbles emerging from the plasma membrane. Scale bar, 50 μm. All data are presented as mean ± SD from three independent experiments (n = 3). *p < 0.05, **p < 0.01, ***p < 0.001, ****p < 0.0001 using one-way ANOVA followed by the Tukey's method. Western blot analysis of ERK and JNK in kidney tissues from different groups. C, D Serum creatinine and BUN detected in cisplatin-treated mice before or after U0126 and SP600125 pretreatment. E, F mRNA expression of renal injury-related genes in kidney tissues from different groups. G Representative images of HE staining in kidney tissues from different groups. Scale bar, 50 μm. H-J Western blot analysis of GSDME and caspase 3 in cisplatin-treated mice before, or after, Ac-DMLD-CMK pretreatment. All data are presented as mean ± SD from six independent experiments (n = 6). **p < 0.01, ***p < 0.001, ****p < 0.0001 using one-way ANOVA followed by the Tukey's method. caspase 3, resulting in increased cleavage of GSDME. Taken together, these results indicate that GSDME is recognized by caspase 3 in HK-2 cells.
Previous reports have indicated that GSDMD is the main executor of pyroptosis in chemotherapy druginduced AKI 22,43 , in which they observed that cisplatin (20 μM) or doxorubicin (4 μg/ml) for 3 h in the presence or absence of NAC (5 mM). All data are presented as mean ± SD from three independent experiments (n = 3). *p < 0.05, **p < 0.01, ***p < 0.001, ****p < 0.0001 using one-way ANOVA followed by the Tukey's method. cleaved renal GSDMD by upregulating the expression of caspase 11, which subsequently initiates cell pyroptosis. Furthermore, Miao et al. 43 showed that GSDMD deficiency alleviates cisplatin-induced renal morphological changes, and renal function deterioration, as well as urinary IL-18 release. Our results demonstrated that GSDME also functions as a critical target of chemotherapy drug-induced AKI. Similar to that of other reports 40,44,45 , our in vitro results indicated that GSDME knockout alleviated renal tubular epithelial cell pyroptosis. In vivo, the Gsdme-derived inhibitor, Ac-DMLD-CMK, alleviated deterioration of kidney function, attenuated renal tubular epithelial cell injury, reduced inflammatory cytokine secretion, and inhibited caspase 3-GSDME signaling induced by cisplatin. Similarly, a previous study reported that GSDMEb knockout-or GSDMEb-derived inhibitor Ac-FEID-CMK-treated zebrafish exhibited reduced proximal renal tubule structure injury compared to the control, indicating that GSDMEb plays an essential role in proximal tubular cell pyroptosismediated AKI in zebrafish 46 . In fact, in addition to the kidney, GSDME also plays an important role in other organs and diseases, however, most of the current studies on GSDME focus on tumors 14 . For instance, GSDME has been reported to function as a tumor suppressor gene by directly inducing tumor cell pyroptosis through caspase 3, as well as indirectly by acting on T lymphocytes through Granzyme B 14,47 . Furthermore, one study showed that GSDME amplified the apoptotic pathway by creating holes in the mitochondria membrane, leading to the release of cytochrome c 48 . Taken together, these results demonstrate the complexity of the mechanism associated with GSDME cleavage. Hence, further investigation is required to clarify the precise mechanism responsible for GSDME regulation.
The MAPK signaling pathway plays an essential role in renal tubular epithelial cell proliferation, survival, and differentiation. Jo et al. 31 indicated that the ERK inhibitor U1026 alleviates cisplatin-induced kidney injury and attenuated necrosis of tubular cells by reducing cisplatininduced caspase 3 cleavage. Similarly, the JNK inhibitor SP600125 also reportedly alleviates cisplatin-induced renal injury 49 . In addition, Yu et al. 30 reported that JNK is involved in lobaplatin-induced colon cancer cell pyroptosis by activating the caspase 3/GSDME signaling pathway. In our in vitro study, we found that both ERK and JNK were activated following cisplatin or doxorubicin treatment, while inhibitors targeting ERK and JNK alleviated cisplatin-or doxorubicin-induced HK-2 cell pyroptosis via inhibition of caspase 3 and GSDME activation. Meanwhile, our in vivo study demonstrated a protective effect for U0126 and SP600125 on decreased kidney function as well as GSDME, and caspase 3 activation in the kidney. Note, SP600125, elicited a stronger protective effect, indicating that both JNK and ERK, particularly JNK, are involved in renal tubular epithelial cell pyroptosis.
The p38 signaling was also reported to participate in cisplatin-induced nephrotoxicity 50 . Thus, we assessed the effect of p38 on cell pyroptosis using the p38 inhibitors, SB203580 and SB202190, however, no protective effect was observed in HK-2 cells (data not shown), suggesting that p38 is not involved in HK-2 cell pyroptosis. However, Ramesh et al. 50 found that p38 MAP kinase inhibition alleviated cisplatin-induced nephrotoxicity in mice. The discrepancy between these results requires further investigation.
As common chemotherapy drugs, cisplatin and doxorubicin have been reported to induce ROS production 11 . Indeed, ROS generation is believed to be one of the major mechanisms of chemotherapeutic drug-induced nephrotoxicity. Excessive ROS causes cell death by activating the MAPK signaling pathway. Zhou et al. 28 found that ROS elevation stimulates caspase 3/GSDME-dependent pyroptosis in iron-treated cancer cells. Similarly, we demonstrated that NAC, a ROS inhibitor, significantly alleviates cisplatin-or doxorubicin-induced ROS and cell pyroptosis in HK-2 cells. Furthermore, NAC inhibited JNK phosphorylation without a noticeable effect on ERK activation, suggesting that ROS is an upstream regulator of JNK in HK-2 cells. Another study also demonstrated that NAC attenuates lobaplatin-induced colon cancer cell pyroptosis by regulating JNK phosphorylation 30 . It was reported that phosphorylated JNK can recruit Bax to mitochondria, prompting cytochrome c release into the cytosol, and subsequently activating caspase 3 and pyroptosis 30 . Considering these data, we conclude that chemotherapy drug-induced pyroptosis in renal tubular epithelial cells is regulated via the ROS/JNK/caspase 3/ GSDME signaling pathway.
Pyroptosis was reported to be important for the antitumor activity of chemotherapy drugs. More recently, many studies have indicated that reagents or drugs targeting GSDME showed an antitumor effect. For instance, Miltirone, derived from a traditional herb Salvia miltiorrhiza, was shown to possess antitumor activity by inducing GSDME activation in hepatocellular carcinoma 51 . Furthermore, a PLK1 kinase inhibitor was reported to improve the effect of cisplatin in the treatment of esophageal squamous cell carcinoma by inducing pyroptosis 52 . Diverse small-molecule inhibitors have also been shown to augment the anti-cancer effect by inducing GSDME cleavage. However, our results showed that the chemotherapy drugs cisplatin or doxorubicin induce GSDME activation, leading to the pyroptosis of normal renal tubular epithelial cells, implying that GSDMEtargeted anticancer therapy would worsen renal pathology. Most of the healthy organs were GSDME-positive 14 .
Similarly, Xu et al. 53 demonstrated that GSDME cleavage is involved in acute hepatic failure. Thus, to protect healthy organs from chemotherapeutic drug toxicity by inhibiting GSDME activation could adversely affect the antitumor effect of chemotherapy drugs. Hence, the toxicity and associated adverse side effects of chemotherapy drugs on normal organs should be carefully considered when designing antitumor therapies targeting GSDME.
In conclusion, we have found that the chemotherapy drugs cisplatin or doxorubicin induce pyroptosis of human renal tubular epithelial cells via ROS/JNK/caspase 3/GSDME signaling. Therapies targeting GSDME could be effective in attenuating chemotherapy drug-induced nephrotoxicity. This study may advance the understanding of this process.
Materials and methods
Cell culture and treatments Small interfering RNA (siRNA) and shRNA (short-hairpin RNA) knockdown The siRNAs for caspase 3, caspase 9, and GSDME were purchased from the GenePharma Company and transfected into HK-2 cells. Briefly, HK-2 cells and HP were seeded in 6-well plates and transfected with scrambled, caspase 3, caspase 9, or GSDME siRNA using Lipofectamine RNAiMAX (Life Technologies, CA, USA) transfection reagent according to the manufacturer's protocols. The HK-2 cells were incubated for 48 h at 37°C with 5% CO 2 . The efficacy of the siRNA knockdown was determined using western blot analysis. The sequences of the siRNAs used in the experiments were shown in Table S1.
For shRNA knockdown, the HK-2 cells were seeded in 24-well plates and transfected with control constructs or caspase 7 shRNA (Genechem, Shanghai, China) using Lipofectamine 3000 (Life Technologies, CA, USA) according to the manufacturer's instructions. After 48 h, the transfected HK-2 cells were selected by their puromycin resistance. The efficacy of the caspase 7 shRNA was determined by western blot analysis.
CRISPR-Cas 9 knockouts of GSDME HK-2 cells were seeded in a 24-well plate at 1 × 10 4 cells/well and transfected with the px459 empty plasmid and px459-GSDME-KO plasmid (YouBio, Changsha, Hunan, China) using Lipofectamine 3000 (Life Technologies, CA, USA) for 48 h. Then, the transfected HK-2 cells were selected by puromycin treatment. The specificity of GSDME-KO was determined by western blot analysis.
Ethics statement
Normal human renal biopsy specimens were collected from healthy donors for kidney transplantation, which was approved by the Ethics Committee of the First Affiliated Hospital, Zhejiang University, School of Medicine (2020-607). The relevant experiments were conducted following approved guidelines of the First Affiliated Hospital, Zhejiang University, School of Medicine.
Mouse kidneys were collected for western blot detection and HE staining.
Cell cytotoxicity and viability assays
Cells were seeded into a 96-well plate (10,000 cells/well in 200 μl medium) and treated with related reagents for 48 h. Next, cell cytotoxicity and viability were assessed using a kit (Cat# CK17, Dojindo, Tokyo, Japan) according to the manufacturer's instructions. The absorbance was measured using a Microplate Reader (Infinite ® M1000, TECAN, Switzerland).
Flow cytometric analysis
For flow cytometry detection, each group of HK-2 cells was treated and collected after trypsin digestion. The HK-2 cell suspension was washed with cold phosphate-based buffer, resuspended, and labeled with Annexin V-FITC and PI according to the manufacturer's protocol (Cat# 556547, Becton Dickinson, NJ, USA). Apoptosis was analyzed by flow cytometry (BD). PI-positive cells were considered pyroptotic cells.
Reactive oxygen species (ROS) measurement
The ROS levels in HK-2 cells were detected with the DCFH-DA Detection Kit (S0033, Beyotime, Shanghai, China). Briefly, the HK-2 cells were seeded in a 6-well plate and incubated with cisplatin or doxorubicin in the presence or absence of NAC for 48 h. After washing, the cells were stained with 10 μM of DCFH-DA at 37°C for 30 min according to the manufacturer's instructions.
Immunohistochemistry
Fixed, paraffin-embedded human renal biopsy specimens (1.5-μm thick) were deparaffinized, rehydrated, and blocked with 1.5% H 2 O 2 -methanol. After washing with phosphate-buffered saline (PBS), the slides were subjected to antigen retrieval in citrate buffer. Non-specific binding was blocked with 10% donkey serum for 30 min. After that, the slides were incubated with a rabbit anti-GSDME antibody (1:100, Cat# ab215191, Abcam, MA, USA) overnight at 4°C. Then, the donkey anti-rabbit/mouse antibody was incubated for 30 min and washed with PBS. After staining with 3,3′-diaminobenzidine (DAB), the slides were counterstained with hematoxylin and examined under a microscope (Leica DMLB, Wetzlar, Germany).
qRT-PCR
Total RNA extraction of the mouse kidney cortex was performed with Trizol reagent (Cat# 15596018, Invitrogen, CA, USA). The Prime-Script RT reagent kit (Cat# RR047B, Takara Biotechnology, Dalian, China) was then used to reverse transcribe the RNA to cDNA. RT-PCR was performed with the SYBR Green Mix (Cat# Q711-02/ 03, Vazyme, Nanjing, China) on the ViiA7 Real-Time PCR system (Applied Biosystems, CA, USA). Primer sequences used were listed in Table S2.
Western blot analysis
Cells were lysed in the denaturing buffer of the Total Protein Extraction Kit (Cat# SD-001, Invent Biotechnologies, Beijing, China) to obtain protein extracts. Then, 20 μg of total protein from each group were subjected to SDS-PAGE gels and transferred to PVDF membranes (Millipore, Billerica, MA, USA). The membranes were incubated with primary antibodies targeting GSDME (Cat# ab215191, Abcam), caspase 3 (Cat# 14220, CST),
Statistical analysis
Each experiment is repeated at least three times independently. Data are presented as means ± standard deviations. Quantitative results were analyzed using GraphPad Prism 7 (GraphPad Software Inc., San Diego, CA, USA). Comparisons between two groups were made using two-tailed Student's t tests. Data from multiple groups were compared using one-way ANOVA followed by the Tukey's post hoc method. p < 0.05 was considered to indicate statistical significance.
Chinese Medicine, Hangzhou, China. 6 Central Laboratory, The First Affiliated Hospital of Zhejiang Chinese Medical University, Hangzhou, China Author contributions X.S. and J.C. designed the project; X.S. and H.W. performed the experiments; X. S. drafted the paper; C.W. offered suggestions for the transfection experiments; H.J. advised on drafting the paper.
Data availability
The datasets used during the current study are available from the corresponding author on reasonable request. | 6,799.8 | 2021-02-01T00:00:00.000 | [
"Biology",
"Chemistry",
"Medicine"
] |
Deep learning-based object detection for smart solid waste management system
Currently in Ethiopia, pollution and environmental damage brought on by waste increased along with industrialization, urbanization, and global population levels. Waste sorting, which is still done improperly from the household level to the final disposal site, is a prevalent issue. Real-time and accurate waste detection in image and video data is a crucial and difficult task in the intelligent waste management system. Accurately locating and classifying these wastes is challenging, particularly when there are various types of waste present. So, a single-stage YOLOv4-waste deep neural network model is proposed. In this study, a deep learning algorithm for object detection using YOLOv4 and YOLOv4-tiny is trained and evaluated. A total of 3529 waste images are divided into 7 classes, which include, cardboard, glass, metal, organic, paper, plastic, and trash. Each model uses three various inputs throughout the testing phase, including input images, videos, and webcams. Experiments with hyper-parameters on subdivision values and mosaic data augmentation were also done in the YOLOv4-tiny model. The outcome demonstrates that YOLOv4 performs better than YOLOv4-tiny for object detection specifically for waste detection. The outcome shows that YOLOv4 performs better than YOLOv4-tiny for object detection, even if YOLOv4-tiny’s scores are higher in terms of computing speed. The best results from the YOLOv4 model reach mAP 91.25%, precision 0.91, recall 0.88, F1-score 0.89, and Average IoU 81.55%, while the best YOLOv4-tiny results are mAP 82.02%, precision 0.75, recall 0.76, F1-score 0.75, and Average IoU 63.59%. This research also proves that the models with smaller subdivision values and using a mosaic have optimal performance.
Introduction
Solid waste is defined as any type of garbage, trash, refuse, or discarded material.It can be classified according to where the waste is generated, such as municipal solid waste, medical waste, and e-waste [1].
Approximately 62 million tons per year are produced in sub-Saharan Africa [2].Even while production is rising, garbage collection rates in developing countries are usually less than 70%.Over 50% of the waste that is collected is frequently dumped in uncontrolled landfills, and only 15% is recycled safely and ethically.In African nations, household waste accounts for the majority.The amount of garbage generated per person each day in Addis Ababa is estimated to be between 0.4 and 1.23 lit, 0.11 to 0.25 kilograms, and 205 to 370 kg/m3 [3].Waste generation is rising despite the city's inadequate solid waste collection and disposal system [4].
Both the Federal Democratic Republic of Ethiopia (FDRE) and the Oromia National Regional State Government have their headquarters in Addis Ababa.Addis Ababa serves as both a diplomatic hub and the headquarters for numerous organizations.Addis Ababa spans a land area of 540 km2 and is geographically situated between latitudes 8055 and 9005 in the north and 38040 and 38050 in the east.With a population of roughly 3.5 million and a growth rate of 8% per year, there are 99 Kebeles and 10 sub-cities (Kifle Ketema) with a density of 5936.2/km2[2].
Habitually, without any segregation, the majority of the solid trash is treated by Traditional garbage disposal techniques like burial or landfill, incineration, chemical corrosion, etc. which will significantly pollute the soil and air while also being quite expensive.Landfilling is one of the commonly used waste disposal management.When doing that, the most serious Citation: Desta M, Aboneh T, Derebssa B (2023) Deep learning-based object detection for smart solid waste management system.Ann Environ Sci Toxicol 7(1): 052-060.DOI: https://dx.doi.org/10.17352/aest.000070worry is plastic garbage, which is the most common and causes the most long-term environmental harm [5].Depending on the substance and structure, plastics can decompose in 20 to 500 years.The fact that organic waste decomposes anaerobically in landfills, producing methane rather than using up resources, is another issue with this method of disposal.Methane produces a stronger greenhouse gas effect than carbon dioxide when it is released into the atmosphere.When exposed to oxygen, it frequently starts an uncontrolled fire in the landfill.However, if managed differently from other waste types, organic waste could be turned into a renewable energy source.Anaerobic digestion of organic waste results in the production of biogas, a fuel that is high in methane.By replacing fossil fuels with renewable sources of energy like methane, greenhouse gas emissions can be reduced and global warming can be slowed.
Only 65% of the garbage produced in Addis Ababa is collected and disposed of, 5% is recycled, 5% is composted, and 25% is not collected and deposited in places that are not permitted [6].71% of solid waste is generated by households, with the remaining 26% coming from businesses, which are then divided as follows: hotels (3%), hospitals (1%), commercial centers (9%), and street cleaning (10%).It is inferred that tight cooperation between the government and the household is required to manage solid waste appropriately and effectively from its source.The generated municipal solid trash is delivered to Koshe (Reppi), an unmanaged landfill that is currently situated in the city's core and provides a serious health risk to the surrounding neighborhoods [3].
The majority of the solid waste that is collected in Addis Ababa is frequently discarded in the Reppi open dumping site without any sorting.Open dumping, which in this context refers to the unplanned disposal of waste without the involvement of environmental protection measures, is, in accordance with van Niekerk and Weghmann [7] by far it is the most common practice in Africa.As a result, this disposal approach has a negative effect on both the community and the ecology.Many African cities only have one official landfill site, which is frequently overflowing and poses a major threat to public health and safety [8].In a similar vein,
Addis Ababa's only dumping site since 1964 has been Reppi
In the waste dump, there is no segregation, which encourages a lot of selvages (waste pickers) to enter and search the area for recyclable and reusable items.The landfill has also contributed to a number of societal concerns, including odor and environmental challenges.Cholera, typhoid, and amoebic infections, which make up nearly half of all illnesses in the country, are more prevalent due to the current lack of cooperation in waste collection and disposal.Reproductive, dermatological, and visual problems are among other detrimental effects on health.Significant health risks include dermatitis, noise pollution, diarrhea, and, most importantly, the prevalence of children under 10 playing with condoms and other abandoned medical equipment such as syringes and needles [9] (Figure 1).
Existing system used to manage waste in Ethiopiaformal waste management process
There are 10 sub-cities in Addis Ababa (the smallest administrative unit in FDRE).Additionally, each Kebele may have 7500 -8500 families.The existing waste management procedure has two components: formal and informal.The procedure's formal waste management comprises two stages of collection and disposal at a dumpsite, both of which are done entirely by government workers.The informal segments feature a large cast of actors.Efforts were made on one's initiative to collect various wastes and sell them at a spot called "Menallesh Tera" in the largest open market in the nation, known as Merkato.Individuals and other industrial players visit Menallesh Tera to obtain the supplies they need [10].Containers are positioned in common locations near the main roadways in each Kebele.For various families, the distance to these bins could be varied.Some people may live just nearby, while others may live one or more kilometers away.Employees use trolleys to transport sacks of garbage to the containers according to schedules from the Kebele.At this moment, the collection process is at its core.People must take their rubbish to the common area on their own if they live far from the waste collection containers or if they are unable to pay the costs.The containers are yellow/green in hue and 8 m3 in capacity.The government vehicles then return to Repi after emptying the containers [3].
The garbage is eventually discharged at Repi.It was believed to be far enough away when Repi was formed in 1964 to not be a concern, but due to the city's quick expansion, towns have already been constructed all around the dump site.Leachate or gas cannot be effectively collected at the landfill (Figure 2).
The exact boundaries of the dumpsite are unknown because there isn't even an appropriate fence surrounding it, but the amount of garbage there occupies 25 acres.The garbage is not covered by topsoil or anything on top of it.It experiences two months of midsummer heavy rain in addition to bright sunlight throughout the year.At the scene, tens of thousands of vultures and scavengers are at work.It's impossible to determine how much harm has been done because there hasn't been a reliable record of the rubbish thrown onsite and no system in place to collect leachate or emissions (Figure 3) [6].At a great distance, the dumpsite's revolting odor calls for observing it is also pretty unpleasant.The dumpsite poses a substantial threat to the health and life of many people who reside nearby and is seriously harming the environment.
Challenges in waste collection, transportation, and disposal stages
Since the wastes are not divided into their components, every stage of the waste management process involves significant difficulties and, in general, health concerns for the participants during the primary and secondary waste collection stages.Health issues can affect the eyes, skin, or lungs, abandoned medical supplies like syringes and needles by children are another severe health risk, and other health risks like respiratory issues, dermatitis, and vision issues are among the risks that are experienced by waste collectors.Because there is no segregation in the waste dump, a lot of selvages (waste pickers) including children come in and look about for recyclable and reusable items.
The landfill has also contributed to a number of societal problems, including odor and environmental challenges.There is no efficient method for gathering leachate or gas at the dump.There isn't even an appropriate fence surrounding the dumpsite, so it's impossible to determine its exact boundaries.
The garbage is not buried beneath anything or covered by dirt.
It receives two months of intense midsummer rain in addition
to bright sunlight throughout the entire year.At the scene, there are countless vultures and scavengers at work who will be exposed to series health problems [6].
Related works
Even if computer vision-based trash segregation hasn't been used in our country, there have been many attempts at it worldwide.However, each of these efforts has its setbacks with regard to how well it can execute the task.Most of them use two-stage detectors which are bulky to be deployed/used in IOT and mobile devices.These detectors require more inference time than single-stage detectors.And almost all of them are modeled in which they detect a single waste at a time.
There is currently no automatic waste segregation system at the residential level in Ethiopia, making the creation of a practical, affordable, and eco-friendly classification model for urban households urgent.
The effectiveness of computer processing of images has significantly increased as a result of the significant increase in computer operating speed.CNN (Convolutional Neural Network) based deep learning models have started to take center stage in the area of image recognition and classification.The process of separating waste into its many components is one of the most crucial l parts of waste management, and it is typically carried out manually by hand-picking.
The process of separating the waste into its many components is one of the most crucial parts of waste management, and it is typically carried out manually by handpicking.
So, with the help of computer vision, we can make the process efficient and resilient through image segmentation and classification as waste segregation become a significant concern in our lives.These systems' increasing demand for accurate and effective segmentation and recognition methods ties up with modern computer architectures' increasing processing power and improved image recognition algorithms.
An intelligent garbage classifier; that analyzes images from the camera, the robot arm, and the conveyor belt for visual classification is used.It employed watershed to separate an overlapping waste and K-NN for classification, with the shape being the most significant characteristic they took into consideration.Nevertheless, they omitted to mention the classifier's accuracy.Since the same class of garbage might come in a variety of sizes and shapes, using merely shape alone to identify objects is insufficient.
Mittal, et al. [11], used a Convolution Neural Network (CNN) which is a machine learning algorithm, was utilized as the model in this study and was applied to a dataset of images of trash.This study classifies diverse waste images into the appropriate categories and continues to provide training accuracy and test accuracy at 91% and 81% respectively [11].
The output of medical waste is outpacing the demand for health in a progressive way as demand rises.Gyawali et al. [12], made a Comparative Analysis of Multiple Deep CNN Models for Waste.They suggest a deep learning method for classifying and identifying medical waste.They suggest a deep learning-based classification strategy [12].In this case, with ResNeXt serving as a suitable deep neural network for actual implementation, and then they suggest transfer learning techniques to enhance the classification outcomes.Using the method on 3480 photos, they were able to identify 8 different types of medical waste with 97.2 percent accuracy; the average F1-score of five-fold cross-validation was 97.2 percent.This study offered a deep learning-based technique for high accuracy and average precision automatic detection and classification of 8 types of medical waste [13].an automated system based on a deep learning approach and conventional techniques by aims for the accurate separation of waste into recycling categories in order to reduce the damage caused by improper garbage disposal, more specifically residential.Glass, metal, paper, and plastic were among the four garbage categories taken into consideration.They get an accuracy of 80% using SVM and 88% when using KNN.Results indicate that the computational cost of CNN algorithms is typically higher than that of conventional techniques, necessitating more powerful computing facilities [14].
An image processing-based intelligent garbage sorting system that is hardware and software was integrated to classify data with an overall accuracy of 83.38% on the issue of solid waste separation systems using the SURF-BOW feature extraction technique and multiclass SVM.The difficulty with the classical approach is having to choose which components of a given image are essential.As there are more classes to categorize, feature extraction becomes more challenging.For each feature definition, the CV engineer must also carefully modify a huge number of parameters.The engineer's judgment, based on much trial and error, must be used to identify which attributes best define different classifications of objects [15].
A transfer learning-based DenseNet169 waste image classification model to increase the speed and precision of waste categorization processing was also utilized.DenseNet169 model that is appropriate for their experimental dataset based on the deep learning network DenseNet169's pre-trained model.According to the experimental findings, the DenseNet169 model's classification accuracy after transfer learning is above 82%, which is higher than that of conventional image classification algorithms.But the DenseNet169 suffers from duplicated gradient flow throughout the layers which adversely affects the accuracy of the model.And their accuracy can be modified using different techniques [16].
A significant, renewable source of energy is municipal solid trash.For image categorization, convolutional neural networks are employed.These wastes are divided into many divisions using equipment constructed in the shape of a trashcan.The study would introduce automation in the field of waste management and save valuable time if such waste materials weren't separated by humans.The ResNet18 Network was used, and the best validation accuracy was discovered to be 87.8%.An important constituent of household waste classes is not considered in their target classes and the selected model network performs the detection process in two stages it's not applicable for real-time detection [13].
A transfer learning-based DenseNet169 waste image classification model to increase the speed and precision of waste categorization processing was also utilized.They were able to create a DenseNet169 model that is appropriate for their experimental dataset based on the deep learning network DenseNet169's pre-trained model.According to the experimental findings, the DenseNet169 model's classification accuracy after transfer learning is above 82%, which is higher than that of conventional image classification algorithms.But the DenseNet169 suffers from duplicated gradient flow throughout the layers which adversely affects the accuracy of the model.Their accuracy can be modified using different techniques [11].
Materials and methods
This work makes use of a variety of software.For deep learning-based waste object detection, Python programming language of version 3.8 with Anaconda IDE Jupyter notebook, TensorFlow library V2.1.2,and Open-cv module is used.Labeling is done by using the LABELIMG tool.Which is ImgAnnotationLab_V4.1.0.0.That is a free, open-source tool that can graphically label images.Training is done using Google COLAB which is a web-based Python editor that allows anyone to write and run arbitrary Python code.It's notably useful for machine learning, data analysis, and education.The collection of digital images of different waste images is done by using a TCL t766S mobile phone camera with a Resolution 720*1600 and a Logi Techc-720 USB camera is used to take images for the test dataset in real-time.The CSP structure divides the original residual module into two parts, one of which is directly connected to the residual network and the other of which is connected via the residual network.The output of the weak network is merged.With this approach, fewer variables and less computation are required while achieving great accuracy [22].
Proposed architecture
The Mish function is basically the activation function in CSPDarknet53.Mish, a novel self-regularized non-monotonic activation function [23].
Mish is bounded below and unbounded above, and its range is [0.31].Mish reduced the conditions for the Dying ReLU phenomenon intentionally in order to save a small bit of unfavorable information.The ReLU function can become saturated as a result of a significant negative bias, and this can prevent the weights from being updated during the backpropagation phase, making the neurons useless for prediction.
Mish properties promote improved communication and expressivity.Being unbounded above, Mish avoids saturation, which normally causes training to slow down owing to nearzero gradients substantially.Being confined below is also advantageous since it results in a substantial regularization effect [23].
The model will be better able to identify items at multiple scales thanks to the feature pyramid, enabling it to identify the same thing at varied sizes and scales.A feature extractor known as a Feature Pyramid Network, or FPN, produces proportionally scaled feature maps at several levels in a completely convolutional manner from a single-scale image of any size.In order to be employed in applications like object detection, it serves as a general method for creating feature pyramids inside deep convolutional networks (Figure 6 (Spatial Pyramid Pooling).In order to obtain feature maps with the same dimensions, SSP first convolves the candidate pictures with sliding kernels of four different sizes: 1, 5, 9, and 13 [26].The spatial size of each candidate map may be preserved via SSP (Figure 7).with a resolution of 416× 416 pixels.So, number of images used in this study is 3529.Sample datasets are shown in Table 1.
The dataset is constructed and separated into seven groups.It includes real-time wastes that are intermingled with each other.This is the unique feature of the research that the model is trained as it can detect more than one waste group at a time.
Data labelling: After constructing our dataset, labeling will be the next task to be performed.Labeling every image with a tool that produces a.txt file containing image data, such as LabelImg.It is done by using the LABELIMG tool.LabelImg is a free, open-source program for marking images graphically labeled or annotated (Figure 8). 2.
As we said earlier in this section sub-division and mosaic data augmentation are taken as a parameter for evaluating the performance of the model for the given dataset.
Results
Training result for YOLOv4 model with Sub-division value 16 and mosaic data augmentation
Subdivision and mosaic augmentation parameters tuning
The test is conducted by changing the subdivision value as well as mosaic data augmentation technique as tuned parameters in the model which has fewer parameters than the original YOLOv4 model that means in the YOLOv4-tiny model.
Because it is needed to change the variable hyper-parameters in this case subdivisions and the mosaic augmentation effects accordingly to match the GPU RAM performance of Colab.
According to Table 3, using the subdivision value of 8 gives a mAP score that is about 2.4% higher than using the subdivision 16 value.The mAP value is 2.2% higher with mosaic data augmentation than it would be without it.This demonstrates that utilizing a mosaic data augmentation and smaller subdivision values (8) improves the model's performance.
The subdivision and data mosaic augmentation settings not only impact the mAP value but also the computation speed.
The table demonstrates that computation time decreases with decreasing subdivision value.In contrast, the model using the data mosaic augmentation takes more time than the model not using the data mosaic augmentation.
Discussion
According to Table 3, using the subdivision 8 value results in a mAP value that is about 2.4% higher than using the subdivision 16 value.The mAP value is 2.2% higher with mosaic data augmentation than it would be without it.This
Can a machine accurately explain the content of an image or video the same way a person could?A machine's ability Citation: Desta M, Aboneh T, Derebssa B (2023) Deep learning-based object detection for smart solid waste management system.Ann Environ Sci Toxicol 7(1): 052-060.DOI: https://dx.doi.org/10.17352/aest.000070to accurately describe the contents of an image or video is subjected to the Turing test in computer vision.In order to determine the answer to this issue, the development of a deep learning algorithm for image classification is examined in this work.Deep learning has greatly increased the accuracy rate of many computer vision tasks.YOLO which is the state of an art single-stage real-time object detection algorithm is proposed.It is based on the Convolutional Neural Network [17].This algorithm can identify objects in images using webcam input in real-time, video input, and image input.Here, in this thesis, Yolov4 and Yolov4-tiny (compressed version of the original model with fewer parameters.This model is often referred to as a lightweight version of the original Yolov4 model and it can be deployed on various edge devices) model is used.Architecture: YOLO is a state-of-the-art real-time object detection algorithm based on a Convolutional Neural Network.It was developed by Joseph Redmon in 2016.This technique can identify objects in images using webcam input in real-time, video input, and image input.YOLOv4 uses an Artificial Neural Network approach to find objects in images.This network segments the image into regions and forecasts the probability and bounding box of each region.The bounding box is then compared to each anticipated probability after each.When many bounding boxes are found for the same object, Non-Max Suppression is employed to make a determination [17].The Prediction (head network), Backbone network, and Neck network are the three main divisions of the YOLO network architecture.The backbone network is primarily in charge of extracting image features, however as deep learning has advanced, it has been shown that while the number of layers in the network increases, so does the amount of extracted feature data and thus increases training costs.Instead, its training impact will diminish after a certain number of levels.The Neck network can enhance shallow features derived from the backbone network, process and refine those characteristics, and blend shallow and deep features to boost network robustness and produce more useful features.The Head network classifies and regresses the features obtained by the backbone and neck networks (Figure 4) [18].We will see each network in detail.The main network of YOLOV4 is CSPDarknet53.A convolutional neural network with 53 layers is called DarkNet-53.Cross-StagePartial-Connection is referred to as CSP.DenseNet and CSP are used by CSPDarknet53 to increase convolutional networks' capacity for learning, reduce memory and computation requirements for network models, and maintain accuracy.The input feature map channel layer is split in half before each residual network in Darknet's five residual modules, and CSP is added after each large residual module [20].The CSPDarknet53 backbone network was built using the Darknet53 development.The basic residual module was added with the CSP structure shown below (Figure 5).
Before joining feature maps with different core sizes as output, SSP maintains the spatial size of each candidate map, resulting in a fixed-size feature map.The development of PANet is based on FPN and Mask RCNN.PANet presents a more flexible ROI Pooling (Region of Interest Pooling) that can extract and integrate features at different sizes, whereas FPN exclusively extracts data from high-level feature layers.After all, by using all the fused features in the neck network, most prediction work is done during the detection stage.The head's function in a one-stage detector is to do dense prediction.The dense prediction, which includes the label, the prediction's confidence score, and a vector containing the center, height, and breadth of the anticipated bounding box, is the final prediction.Data collection: In this research, 2529 waste are taken from The Stanford TrashNet Dataset [28], And 1,000 images of waste from Repi dump sites as well from households and common platforms in which the town's wastes are collected using mobile phones.This dataset includes seven waste types: glass, metal, cardboard, organic paper, plastic, and trash.The dataset is separated into three sections: one for training, one
Desta M, Aboneh T, Derebssa B (2023) Deep learning-based object detection for smart solid waste management system.Ann Environ Sci Toxicol 7(1): 052-060.DOI: https://dx.doi.org/10.17352/aest.000070Data augmentation: To create variances within the data so that it can accurately generalize the unknown data dataaugmentation is a technique used.Data Augmentation is a technique used for manipulating/modifying data without losing its essence (Figure 9).Mosaic Data augmentation was performed in this study to replicate the training data and to increase the context information that can be found in a single image so that it can increase the learning ability of the model.experiment Subdivision and Mosaic Augmentation Parameters Tuning The test is conducted by changing the subdivision value and mosaic data augmentation technique as tuned parameters in the YOLOv4tiny model.Because it is needed to change the variable hyperparameters in this case subdivisions and the mosaic augmentation effects accordingly to match the GPU-RAM performance of Colab.The experiment setup for subdivision and mosaic data augmentation is shown in Table
Desta M, Aboneh T, Derebssa B (2023) Deep learning-based object detection for smart solid waste management system.Ann Environ Sci Toxicol 7(1): 052-060.DOI: https://dx.doi.org/10.17352/aest.000070faster than the YOLOv4 model inference time) to complete 14,000 iterations.This training is carried out by applying mosaic data augmentation on the training dataset and taking the subdivision value to be 08.After 7000 iterations, the curve produced by the loss function YOLOv4-tiny is quite stable.The YOLOv4-tiny models have a lower AP value than that of the YOLOV4 model.As stated earlier, the YOLOv4 model scores a higher mean average value than the YOLOv4-tiny.The tiny version of the YOLOV4 model identifies well all classes, except the organic However, for organic wastes of piles of vegetables, it face difficulties in identifying organic wastes that were categorized under the class organic.It will misclassify with the trash classes.
Figure 11
Figure 11 depicts the Average Loss and mAP of the YOLOv4 Tiny model.
Table 1
Single and mixed type of waste sample dataset.
Table 2
experiment setup for subdivision and mosaic data augmentation. | 6,248.2 | 2023-08-31T00:00:00.000 | [
"Environmental Science",
"Computer Science",
"Engineering"
] |
Parallel workflow tools to facilitate human brain MRI post-processing
Multi-modal magnetic resonance imaging (MRI) techniques are widely applied in human brain studies. To obtain specific brain measures of interest from MRI datasets, a number of complex image post-processing steps are typically required. Parallel workflow tools have recently been developed, concatenating individual processing steps and enabling fully automated processing of raw MRI data to obtain the final results. These workflow tools are also designed to make optimal use of available computational resources and to support the parallel processing of different subjects or of independent processing steps for a single subject. Automated, parallel MRI post-processing tools can greatly facilitate relevant brain investigations and are being increasingly applied. In this review, we briefly summarize these parallel workflow tools and discuss relevant issues.
Introduction
Over the past two decades, magnetic resonance imaging (MRI) techniques have been increasingly applied in brain research, and particularly research on the human brain, due to their noninvasive nature and outstanding spatial resolution for measuring brain structure and function. These techniques typically generate large-scale imaging datasets. To obtain specific brain measures of interest from an acquired MRI dataset, complex image post-processing steps are required.
A number of publicly available software packages have been developed to process brain MRI data, such as FMRIB Software Library (FSL) (Smith et al., 2004), Statistical Parametric Mapping (SPM) (Ashburner, 2012), FreeSurfer (Fischl, 2012), Analysis of Functional NeuroImages (AFNI) (Cox, 1996), BrainSuite (Shattuck and Leahy, 2002), Camino (Cook et al., 2006), CONN (Whitfield-Gabrieli and Nieto-Castanon, 2012) and Diffusion Toolkit (Wang et al., 2007). These packages provide processing modules and interfaces to comprehensively analyze multi-modal brain MRI data. To use these packages, end users must correctly understand each module and manually combine the appropriate modules for a particular purpose. In most packages, end users must also process each step or dataset separately, which is a sub-optimal approach for two reasons. First, understanding the various modules is difficult, particularly for investigators without computational backgrounds. Second, the use of these modules typically involves a number of manual operations, which increases the probability of processing errors due to user oversight. In contrast, automated workflow tools that allow user-operated processing steps to be concatenated enable fully automated processing of raw MRI data.
Human neuroimaging studies typically require a large number of subjects. Thus, the same post-processing procedures are executed across different datasets. Certain workflow tools, such as those embedded in SPM and AFNI, can automatically and sequentially process different individual KEY CONCEPT 1 | Processing modules A function/script to achieve a specific processing purpose, e.g., image segmentation or registration.
datasets. The independent post-processing steps for each individual dataset are also performed sequentially. This sequential processing pattern may not fully optimize available computational resources in a system (e.g., a multi-core desktop/server, a local distributed computing cluster or a highperformance computing platform), resulting in an unnecessarily long computational time. Computational time is becoming increasingly important due to the rapidly increasing sample size of human brain MRI studies. To address this issue, MRI data post-processing tasks across different individuals or within one individual can be parallelized by assigning independent post-processing jobs to different computing cores. Because the majority of personal computers and workstations possess multi-core systems and given that many research centers are now equipped with local distributed computing clusters or high-performance computing platforms, the adoption of workflow tools that permit the automatic parallelization of KEY CONCEPT 2 | Parallelization A mode in which processing jobs without dependency run at the same time, with each job occupying a computing core.
post-processing steps and optimal use of available computational resources is now possible.
A few parallel workflow packages for brain MRI postprocessing have been developed. These tools can greatly facilitate KEY CONCEPT 3 | MRI post-processing The computing/processing of raw images from multi-modal MRI techniques to obtain specific brain measures of interest. relevant human brain MRI investigations and have attracted much attention in the research community. In this mini-review, we aim to provide an overview of these tools for human brain MRI and to discuss relevant issues for potential users and developers.
Available Parallel Workflow Tools for Multi-Modal MRI Post-Processing
In general, there are two categories of available parallel workflow tools for human brain MRI data processing (Table 1, Figure 1). One is flexible workflow tools that provide rich environments KEY CONCEPT 4 | Flexible workflow tools An environment that provides the ability to encapsulate modules from predefined libraries to create a completely automatic workflow. and allow users to customize automated workflows for any purpose by linking either modules from predefined libraries or in-house modules, such as Laboratory of Neuro Imaging (LONI) Pipeline (Rex et al., 2003;Dinov et al., 2009Dinov et al., , 2010, Java Image Science Toolkit (JIST) (Lucas et al., 2010;Li et al., 2012) and Nipype (Gorgolewski et al., 2011). The other category is fixed workflow tools that provide a completely established KEY CONCEPT 5 | Fixed workflow tool A software package that concatenates a series of processing modules according to the dependency between the modules, allowing for fully automated processing, from the raw data to final outputs. data processing workflow for a particular purpose/dataset, such as CIVET (Ad-Dab'bagh et al., 2006), Pipeline for Analyzing braiN Diffusion imAges (PANDA) (Cui et al., 2013), and Data Processing Assistant for Resting-State fMRI (DPARSF) (Yan and Zang, 2010).
Flexible Workflow Tools
Well-designed flexible workflow packages for multi-modal MRI post-processing allow access to appropriate modules from existing software, such as FSL, FreeSurfer, SPM and AFNI, to construct a customized analysis. For example, LONI Pipeline and JIST provide a user-friendly graphical user interface (GUI) to let users create a complete neuroimaging analysis workflow, from raw imaging data to quantitative results ready for statistical analysis. To construct a workflow in the LONI Pipeline or JIST environment, users need to drag appropriate modules from the existing library, define the dependencies between these modules, and set the parameters for each module. Nipype, which is based on Python and which lacks a GUI, encapsulates processing modules of existing neuroimaging software as Python objects. These objects can be easily linked and executed as an automated workflow. In addition to customized workflows, akin to fixed packages, flexible packages provide certain completely established workflows, such as the tensor-based morphometry workflow in LONI Pipeline (Dinov et al., 2010), the cortical reconstruction using implicit surface evolution workflow in JIST (Lucas et al., 2010), and the diffusion data analysis workflow based on Camino in Nipype (http://nipy.sourceforge.net/nipype/interfaces/ generated/nipype.workflows.dmri.camino.diffusion.html). To accelerate the data processing speed, these flexible packages all support parallel computing across multi-cores on a single computer or across multi-computers in a distributed computing cluster.
A flexible parallel workflow tool typically includes (1) a predefined library, (2) a workflow construction framework, (3) validation and quality control, (4) module creation, and (5) computational parallelization. In particular, a library encapsulating modules from existing neuroimaging software is first needed. These modules should be designed to allow for setting input, output, and parameter specifications, among other settings. The framework/protocol for connecting different modules in terms of between-module dependencies should be regularized. As manual setup of the workflow by users may lead to errors, automated validation that monitors the existence of input files, the consistency of data types, parameter matches, and protocol correctness is desired. Additionally, quality control, e.g., through visual inspection of the interim results, is also of great importance because this type of workflow processing is fully automated and nontransparent to users. A module creation framework/protocol that permits users to create their own modules is a plus because the modules in the predefined library may not meet the requirements of a particular analysis. Finally, implementing computational parallelization of independent jobs within the workflow is highly preferred to optimize the computational efficiency. We will illustrate these points using LONI Pipeline as an example.
Predefined Library
As an environment for constructing an integrated workflow with heterogeneous neuroimaging toolboxes, LONI Pipeline has a library of various modules based on popular MRI packages, such as AFNI, SPM, FSL, FreeSurfer, and Diffusion Toolkit. A very user-friendly and uniform interface has been designed for various modules.
Workflow Construction Framework
LONI Pipeline provides a canvas for creating and revising a workflow in the main GUI. To construct a workflow, users only need to drag the appropriate modules from the library, to link the output of one module to the input of another module, and to define input/output files and parameters. Additionally, LONI Pipeline can automatically determine the most appropriate analysis protocol, select corresponding modules, and generate a valid graphical workflow according to the workflow description and a set of user-specified keywords.
Validation and Quality Control
As the workflow is manually constructed, errors are possible. LONI Pipeline supports automatic validation of the consistency of the data types, of parameter matches, and of protocol correctness in advance of executing any workflow. For quality control, users can view the interim results of each module by clicking the icon for each module on the canvas during the execution of the workflow in the LONI Pipeline environment.
Module Creation
LONI Pipeline permits users to create their own modules in the case that the existing modules in the library cannot meet their requirements. The module description typically includes general information (e.g., name, package, authorship, citation), parameter specification (e.g., parameter/file type, dependencies), and executable information (e.g., program location, grid-specific variables). Users can define the description using a userfriendly GUI for module definition. Additionally, several ways to automatically create modules are supplied. Given this feature, LONI Pipeline can therefore also be applied to construct workflows for non-MRI related processing (e.g., genetic analysis).
Computational Parallelization
LONI Pipeline can execute thousands of simultaneous and independent jobs on a multi-core system, a distributed cluster, or a gird/cloud computing system using job scheduling tools such as Sun Grid Engine (SGE), Portable Batch System (PBS), Load Sharing Facility (LSF), and GridWay.
Fixed Workflow Tools
Fixed parallel workflow tools have been developed for particular types of human brain MRI post-processing for which a fully automated processing workflow is completely established and ready for use. For example, the CIVET pipeline tool was developed to facilitate cortical morphological analysis (Ad- Dab'bagh et al., 2006). In CIVET, the raw T1-weighted images are the input, and cortical measures, such as thickness and surface area, are the outputs after implementing a number of image processing steps, e.g., brain tissue segmentation, spatial normalization, surface extraction, and surface registration. It has been recently embedded in the Canadian Brain Imaging Research Platform (CBRAIN) system, which is a web-based neuroimaging research platform designed for computationally intensive analyses using high-performance computing clusters/servers around the world (Sherif et al., 2014). Another example is PANDA, which is a diffusion MRI post-processing pipeline tool (Cui et al., 2013). PANDA specifically integrates several publicly available packages' modules (e.g., FSL) and in-house modules to accomplish all required pre-processing steps for diffusion MRI. The final outputs include brain diffusion metrics and white-matter networks ready for statistical analysis. To post-process resting-state functional MRI data, an automated workflow package called DPARSF [part of toolbox for Data Processing and Analysis of Brain Imaging (DPABI) (http:// rfmri.org/dpabi)] has been developed (Yan and Zang, 2010). DPARSF can yield various brain functional metrics for statistical analysis, such as the regional homogeneity (Zang et al., 2004) and amplitude of low-frequency fluctuations (Zang et al., 2007), by integrating various modules from SPM and RESting-state fMRI data analysis Toolkit (REST) (Song et al., 2011). CIVET, PANDA, and DPARSF do not require users to customize the workflow by selecting modules or defining dependencies. In fact, once the user inputs the raw MRI datasets and selects the post-processing parameter configurations, these tools fully automate all post-processing steps for all datasets. Additionally, these tools all enable parallel computing on a multi-core computer. CIVET and PANDA can also support a distributed computing cluster or a high-performance computing platform.
To construct a fixed workflow tool for brain MRI postprocessing, several factors must be considered: (1) the operating environment and processing modules, (2) workflow design, (3) parallelization, (4) quality control, and (5) testing and validation, as illustrated in Figure 2. Typically, a fixed workflow tool requires a combination of in-house modules and existing modules from publicly available packages (e.g., FSL, SPM). However, because certain publicly available packages are only compliant with specific operating systems (e.g., Windows, Linux, or MAC), a fixed workflow tool must first specify the operating system requirement. Next, the workflow must be designed according to acceptable standard protocols for relevant MRI post-processing procedures. Typically, the workflow comprises a number of interconnected or parallel jobs, each of which is an MRI postprocessing unit. To achieve parallelization, specific tools [e.g., Pipeline System for Octave and Matlab (PSOM) (Bellec et al., 2012), SGE and PBS] for managing computing resources must be applied within the workflow to enable the execution of independent jobs in parallel. As for flexible workflow tools, quality control is also critical for a fixed workflow tool, and effective strategies for quality confirmation must be carefully designed within the tool. Finally, fixed workflow tools must be thoroughly tested and validated by various users to minimize MRI post-processing errors and to ensure that the GUI is as user friendly as possible. These aspects will be elaborated using PANDA as an example.
Operating System and Processing Modules
To efficiently obtain various diffusion metrics and brain networks that are ready for statistical analysis, PANDA was designed to combine a number of in-house post-processing modules with existing modules from publicly available packages [e.g., FSL, Diffusion Toolkit and MRIcron (http://www.mccauslandcenter. FIGURE 2 | Framework for the construction of a parallel workflow tool for brain MRI post-processing. The sections with gray backgrounds represent important aspects of the construction of a parallel workflow tool. In the workflow section, the three blue nodes represent the same post-processing jobs from three different subjects and are therefore independent. The two green nodes indicate two independent post-processing jobs for the same subject. The arrows denote dependencies; for example, A→B indicates that B cannot start until A is complete. Thus, independent jobs can be parallelized to maximize the use of available computing resources. HPC, High-Performance Computing; SGE, Sun Grid Engine; PBS, Portable Batch System. sc.edu/mricro/mricron/)]. Because the FSL package is compatible only with UNIX-based (e.g., Linux or MAC) operating systems, PANDA was designed for a UNIX-based system.
Workflow Design
The processing workflow within PANDA follows the recommended practices for the post-processing of diffusion MRI images in the research community. The main procedure comprises three parts: (I) pre-processing, (II) production of diffusion metrics, and III) construction of brain networks. Part I includes the following steps: (1) converting DICOM files into NIfTI images, (2) estimating the brain mask, (3) cropping raw images to reduce the memory cost and accelerate processing in subsequent steps, (4) correcting for the eddycurrent effect, (5) averaging multiple acquisitions, and (6) calculating diffusion metrics. Part II consists of normalizing and computing multi-level diffusion metrics that can be directly used for voxel-level, atlas-level and tract-based spatial statistics (TBSS) level statistical analysis. Part III tasks include defining network nodes (i.e., parcellating gray matter into multiple regions) and constructing brain networks using deterministic and probabilistic tractography, respectively. Overall, the entire workflow of PANDA comprises 176 post-processing jobs for a single diffusion MRI dataset.
Parallelization
The post-processing jobs within PANDA are organized using PSOM. The dependencies between jobs are first defined. According to PSOM, all jobs that are independent of all other jobs can be executed in parallel, including post-processing jobs for different individuals and independent jobs for the same subject. The processing status of each subject can be viewed in the GUI, with each job being assigned a status of "wait, " "submitted, " "running, " "failed." or "finished."
Quality Control
In PANDA, a results folder named "quality control" is generated. Snapshot pictures of the gray-matter parcellation atlas and of the fractional anisotropy and T1 images in both native and standard spaces are saved to this folder. These pictures can be used to quickly confirm the quality of the signal-noise ratio of the raw image, the quality of the spatial normalization, and the quality of the gray-matter parcellation. For the construction of the brain network, snapshot pictures of the whole-brain white-mattertract map, which is derived from whole-brain deterministic tractography, are also produced to confirm quality by visual inspection.
Testing and Validation
Image post-processing procedures within PANDA are carefully implemented, and frequently used parameters are set as the default. PANDA was thoroughly tested and validated by students and collaborators before its official release.
Discussion
Flexible (e.g., LONI Pipeline, JIST, and Nipype) and fixed (e.g., CIVET, PANDA, and DPARSF) parallel workflow tools are both widely used in the neuroimaging field. These tools can substantially simplify human brain MRI post-processing and optimize available computational resources. The tool choice for a specific study depends on application domains, users' background/preference, and access to computational resources.
Using flexible workflow tools, users can create any desired workflow using available modules in the library together with user-generated modules. To establish an appropriate workflow in such a modular environment, users need to have a good understanding of each individual module as well as the entire workflow protocol for analysis. Once the protocol and the appropriate modules are determined, users can construct the desired pipelines through linking these modules, defining input/output files and parameters with uniform and easy-to-use interfaces. Certain predesigned workflows that can be directly applied for specific analyses are typically included, but these existing workflows are likely to be less comprehensive than a specific fixed workflow tool for similar analyses.
In contrast, fixed workflow tools typically implement comprehensive processing and yield a series of resultant outputs, but only for a particular MRI modality (e.g., structural MRI, diffusion MRI, or functional MRI). All processing steps are preincluded and pre-linked following widely accepted protocols in the research community, so users do not bear the burden of designing the workflow and selecting/building the modules. However, given the diversity of MRI modalities, more efforts are warranted to develop fixed but comprehensive parallel workflow tools for diverse purposes.
The majority of workflow tools are designed to allow for parallel computation with available computing resources (i.e., multi-core desktop, GPUs, local clusters, high-performance computing, and grid/cloud computing), and therefore can greatly accelerate data processing of rapidly increasing neuroimaging datasets. For example, both LONI Pipeline and PANDA can parallelize jobs on a multi-core system or a distributed cluster. LONI pipeline also supports a grid/cloud computing system. Particularly, there is a newly released tool for parallel pipeline analyses of fMRI data on GPUs, i.e., BROCCOLI (Eklund et al., 2014).
Several points, however, need to be emphasized for both users and potential developers. First, quality control is essential for automated workflow tools. In particular, once the entire post-processing procedure is complete, the processing quality must be verified prior to initiating subsequent procedures. In most parallel workflow tools (e.g., LONI Pipeline, JIST, Nipype, CIVET, DPARSF, and PANDA), a number of intermediate results/snapshots are provided for rapid manual checks. In flexible workflow tools, automatic validation of the workflows customized by users (e.g., correctness of the analysis protocol, input/output types, and format compatibility) is also important.
Second, workflow tools typically allow the user to modify or rerun the workflow if processing errors are indicated by the validation or quality-control procedure. In fact, in a flexible workflow tool such as LONI Pipeline, users can redesign the workflow. In addition, both flexible and fixed workflow tools (e.g., LONI Pipeline and PANDA) permit users to modify parameters in specific processing steps and rerun the workflow if errors are found during quality control. Using these features, investigators can therefore easily evaluate the effects of processing strategies or parameters on the final results by rerunning workflows with different parameters or structures.
Third, certain MRI post-processing steps remain controversial. For example, there are valid reasons for removing or retaining the global signal when pre-processing a resting-state fMRI dataset (Fox et al., 2009;Murphy et al., 2009). In such a case, fixed workflow tools should offer options to the user, rather than implementing only one solution. Furthermore, with additional research and further development in the field, human brain MRI post-processing practices will likely change. Thus, workflow tools must be constantly updated to remain consistent with currently recommended practices. Ongoing technical support and debugging/updating of tools should be provided, e.g., via online forums or mailing lists.
Finally, workflow tools are advantageous for replications and validations of scientific findings in the human brain MRI research community. In most cases, applying the entire analysis procedure in the exact same way as in a publication is difficult due to insufficient description of the method and the numerical instability across different computing platforms (Glatard et al., 2015). In contrast, flexible workflow tools, such as LONI Pipeline, provide a clear and complete record of the analysis protocol, processing modules, parameters, input/output and computing platform information. Similarly, fixed workflow tools (e.g., PANDA) typically save a configuration containing all of the parameters, input/output and computing platform information. Including these relevant information for the processing workflows in a publication is therefore highly encouraged, which can increase the reproducibility and transparency for both data processing and computing platform, ultimately enhancing the comparability of results between studies or datasets.
In summary, a number of flexible and fixed workflow tools exist for human brain MRI post-processing. These tools can greatly facilitate data processing, can save computational time and effort, and are being increasingly used. The application of these easy-to-use tools is therefore highly recommended for neuroscientists, psychologists, and clinical investigators, and particularly those with few computing and programming skills. | 4,988.2 | 2015-05-13T00:00:00.000 | [
"Computer Science",
"Engineering",
"Medicine"
] |
Experimental Research on Water Droplet Erosion Resistance Characteristics of Turbine Blade Substrate and Strengthened Layers Materials
In this paper, the water droplet erosion (WDE) performance of typical martensitic precipitation substrate 0Cr17Ni4Cu4Nb in steam turbine final stage, laser solid solution strengthened sample, laser cladding sample and brazed stellite alloy samples have been studied based on a high-speed rotating waterjet test system. The WDE resistance of several materials from strong to weak is in sequence: Brazed stellite alloy > laser cladding sample > laser solid solution sample > martensitic substrate. Furthermore, the WDE resistance mechanism and the failure mode of brazed stellite alloy have been revealed. It is found that the hard carbide in the stellite alloy is the starting point of crack formation and propagation. Under the continuous droplet impact, cracks grow and connect into networks, resulting in the removal of carbide precipitates and WDE damage. It is proved that the properties of the Co-based material itself is the reason for its excellent WDE resistance. And the carbides have almost no positive contribution to its anti-erodibility. These new findings are of great significance to process methods and parameter selection of steam turbine blade materials and surface strengthened layers.
Introduction
The last stage blades of thermal power and nuclear power condensing steam turbines often suffer from severe water droplet erosion damage due to the long-term operation in the wet steam zone, which will not only deteriorate the aerodynamic performance of the blades, but also threaten the safe operation of the units [1][2][3][4]. Besides, the lengthened blades in last stage with higher circumferential speed cause more serious erosion damage. Currently, the maximum circumferential velocity of ultra-long turbine blades developed has exceeded 600 m/s [5]. However, the maximum liquid-solid impact velocity is 500 m/s [6][7][8][9] in the former experimental research. Thus, WDE has become one of the most key problems that needs to be solved urgently. At present, application of surface strengthened blade materials in the area prone to water droplet erosion is the most effective method to improve the WDE resistance. The main mechanism is material damage and failure caused by high-speed liquid-solid impact. Many scholars have carried out a lot of research on the WDE and high-speed liquid-solid impact. In terms of experimental research, Smith et al. [10] designed a test device for simulating the Materials 2020, 13, 4286 2 of 17 water droplet erosion process of the final stage blade material in the final stage environment of the steam turbine. The test chamber was designed as a vacuum environment, considering the working environment of the last stage blade of the steam turbine. And the sample could obtain a higher linear speed by increasing the rotational speed of the motor in the test rig, thereby simulating the process of water droplet impingement of the last stage blades in steam turbine. Hattori et al. [11] carried out liquid-solid impact tests at different angles. In the test, samples of different angles were fixed on the sample holding tool. A high-speed waterjet was sprayed onto the sample through the nozzle. The mass loss of the samples was measured to compare the degree of water droplet erosion damage. The test results indicated that the sample was most severely damaged when the jet vertically impacted the sample and the mass loss rate was proportional to V 0 sin θ. V 0 is the impingement velocity and θ is the impact angle. Xie et al. [12] carried out high-speed jet impact test on 1Cr12Ni2W1Mo1V and 1Cr12MoV samples based on the high-speed digital photography system to capture the velocity of the jet. Xu et al. [13][14][15] improved the widely used rotary test rig to change the impact of water droplets or waterjet into the impact of steam flow and added micro-particle thrusters to the test system. Therefore, the influence of micro particles in steam on the water droplet erosion can be carried out based on the brand new test system. Oka et al. [16] studied the WDE characteristics of different ceramic bulk, coatings and metallic materials to analyze the erosion characters and resistance. Thomas [17] explored water droplet erosion characteristics of copper, brass, low carbon steel, silicon steel and alloys under droplets attack through the various surface morphology evolutions in several typical erosion stages based on the rotating jet experimental device. Ahmad et al. [18] studied the droplet impact wear performance of stainless steels and Ti6Al4V at impact velocity within the range of 350-580 m/s. The results showed that the speed exponent n for ductile materials was in the range of 3. 3-5. In general, due to the complexity of the water droplet erosion process and the great difference in WDE behavior of different erosion conditions, the effect of the surface strengthened process on WDE performance, the failure mode and WDE resistance mechanism of stellite alloy remain unclear and controversial.
In this paper, the WDE performances of typical martensitic blade substrate and surface strengthened layers samples have been studied experimentally at impact velocity in excess of 600 m/s. The effect of the surface strengthened process on WDE performance and the failure mode of several layers materials have been explored.
Experimental Procedures
A high-speed waterjet test system has been designed and installed in-house in order to study the water droplet erosion resistance of various materials with impact velocity over 600 m/s. The system is mainly composed of ultra-high pressure water pump, test chamber, DC motor, gear speed increaser, water ring vacuum pump, lubricating oil system and monitoring system. The main part of the test system consists of an ultra-high pressure pump, a rotating shaft, an impeller disk and a test chamber, as shown in Figure 1. Figure 1a,b present a real image of waterjet test system and the schematic diagram of the test chamber, respectively.
The ultra-high pressure pump is used to generate high-speed jet. The core component is a set of ultra-high pressure generator. The maximum stable work pressure of the ultra-high pressure pump is within the range of 240-260 MPa. Under the outlet pressure of 240 MPa, even with a nozzle of 0.3 mm diameter, the jet velocity is above 600 m/s. Based on the adjustable pressure of the ultra-high pressure pump, the jet velocity of the test system could meet a variety of test requirements, which is capable of simulating the high-speed impact process of the condensed droplets under the actual operation condition to the maximal degree compared with the reported water droplet erosion test platforms.
The DC motor is controlled by the motor control cabinet. And the output end of the motor is connected to the gearbox. A coupling is used to connect the output shaft of the gearbox and the rotating shaft. There is an impeller disk installed on the rotating shaft, which drives the specimens to rotate at high speed. A sealed shell is equipped on the outside. A nozzle is applied to lead the water from the Materials 2020, 13, 4286 3 of 17 ultra-high pressure pump outside the shell, and its position greatly corresponds to the position of the specimen. The shell is also connected with a water ring vacuum pump. A drain pump is arranged at the bottom of the cylinder. rotating shaft. There is an impeller disk installed on the rotating shaft, which drives the specimens to rotate at high speed. A sealed shell is equipped on the outside. A nozzle is applied to lead the water from the ultra-high pressure pump outside the shell, and its position greatly corresponds to the position of the specimen. The shell is also connected with a water ring vacuum pump. A drain pump is arranged at the bottom of the cylinder.
Test Condition and Parameters
In order to compare the WDE resistance characteristics of different materials, the water droplet erosion characteristics of 0Cr17Ni4Cu4Nb substrate and surface strengthening layers samples are investigated using a high-speed waterjet test system. A VHX-600 3D ultra-depth microscope (Keyence, Osaka, Japan) was used to analyze the eroded morphology of each specimen. The mass of the sample before and after the erosion was measured by the precision electronic balance CPA225D produced by Sartorius (Goettingen, Germany), with a measuring range of 100 g and an accuracy of 0.1 mg. In order to improve the measurement accuracy as much as possible, in addition to leveling, calibrating and peeling before measurement, each sample is demagnetized before testing and washed three times with an acetone or alcohol solution in an ultrasonic cleaner. The test scheme is shown in Table 1. Sample structure and layout diagram are presented in Figure 2.
Test Condition and Parameters
In order to compare the WDE resistance characteristics of different materials, the water droplet erosion characteristics of 0Cr17Ni4Cu4Nb substrate and surface strengthening layers samples are investigated using a high-speed waterjet test system. A VHX-600 3D ultra-depth microscope (Keyence, Osaka, Japan) was used to analyze the eroded morphology of each specimen. The mass of the sample before and after the erosion was measured by the precision electronic balance CPA225D produced by Sartorius (Goettingen, Germany), with a measuring range of 100 g and an accuracy of 0.1 mg. In order to improve the measurement accuracy as much as possible, in addition to leveling, calibrating and peeling before measurement, each sample is demagnetized before testing and washed three times with an acetone or alcohol solution in an ultrasonic cleaner. The test scheme is shown in Table 1. Sample structure and layout diagram are presented in Figure 2.
Test Results
The laser solid solution reinforcement is a strengthening treatment on the surface of the substrate. Laser cladding is to add cladding powder materials to form cladding layer on the surface of the substrate. In order to save the cost, the brazing stellite alloy sample is made by embedding a stellite alloy plate into the matrix. The test duration was 390 min for these samples. Figures 3 and 4 show the surface micro-morphologies of the substrate and the solid solution reinforced samples, respectively, at different test times under 0.2 mm nozzle diameter conditions. It can be seen from the figures that the erosion width of the solid solution sample at any test time is smaller than that of the substrate sample.
Test Results
The laser solid solution reinforcement is a strengthening treatment on the surface of the substrate. Laser cladding is to add cladding powder materials to form cladding layer on the surface of the substrate. In order to save the cost, the brazing stellite alloy sample is made by embedding a stellite alloy plate into the matrix. The test duration was 390 min for these samples. Figures 3 and 4 show the surface micro-morphologies of the substrate and the solid solution reinforced samples, respectively, at different test times under 0.2 mm nozzle diameter conditions. It can be seen from the figures that the erosion width of the solid solution sample at any test time is smaller than that of the substrate sample.
Test Results
The laser solid solution reinforcement is a strengthening treatment on the surface of the substrate. Laser cladding is to add cladding powder materials to form cladding layer on the surface of the substrate. In order to save the cost, the brazing stellite alloy sample is made by embedding a stellite alloy plate into the matrix. The test duration was 390 min for these samples. Figures 3 and 4 show the surface micro-morphologies of the substrate and the solid solution reinforced samples, respectively, at different test times under 0.2 mm nozzle diameter conditions. It can be seen from the figures that the erosion width of the solid solution sample at any test time is smaller than that of the substrate sample. Since the 2D microscopic morphologies cannot reflect the depth of water droplet erosion grooves, the 3D pit topographies of martensite substrate and solid solution reinforced sample obtained at four typical moments using the VHX-600 are presented in Figures 5 and 6. It can be clearly seen that the groove depth of the solid solution enhanced sample is obviously shallower than that of substrate at the same time. Since the 2D microscopic morphologies cannot reflect the depth of water droplet erosion grooves, the 3D pit topographies of martensite substrate and solid solution reinforced sample obtained at four typical moments using the VHX-600 are presented in Figures 5 and 6. It can be clearly seen that the groove depth of the solid solution enhanced sample is obviously shallower than that of substrate at the same time. Since the 2D microscopic morphologies cannot reflect the depth of water droplet erosion grooves, the 3D pit topographies of martensite substrate and solid solution reinforced sample obtained at four typical moments using the VHX-600 are presented in Figures 5 and 6. It can be clearly seen that the groove depth of the solid solution enhanced sample is obviously shallower than that of substrate at the same time. In order to further quantitatively compare the WDE characteristics of these two samples, the average width and depth of water droplet erosion traces of martensite substrate and solid solution samples at different times are fitted into logistic and exponential curves, as shown in Figure 7. The accuracy of these fitted curves is shown in Table 2. All Adj. R-Squares are above 0.9, indicating the high fitting accuracy. As Figure 7 shows, the erosion traces development is relatively consistent. When the surface is broken, the width and depth increase dramatically. Due to the limitation of the jet diameter, the width increase gradually becomes slow down, but both width and depth of the solid solution strengthened sample are less than that of the martensite material. The water droplet erosion resistance is obviously improved after the treatment of solid solution strengthening. In order to further quantitatively compare the WDE characteristics of these two samples, the average width and depth of water droplet erosion traces of martensite substrate and solid solution samples at different times are fitted into logistic and exponential curves, as shown in Figure 7. The accuracy of these fitted curves is shown in Table 2. All Adj. R-Squares are above 0.9, indicating the high fitting accuracy. As Figure 7 shows, the erosion traces development is relatively consistent. When the surface is broken, the width and depth increase dramatically. Due to the limitation of the jet diameter, the width increase gradually becomes slow down, but both width and depth of the solid solution strengthened sample are less than that of the martensite material. The water droplet erosion resistance is obviously improved after the treatment of solid solution strengthening. In order to further quantitatively compare the WDE characteristics of these two samples, the average width and depth of water droplet erosion traces of martensite substrate and solid solution samples at different times are fitted into logistic and exponential curves, as shown in Figure 7. The accuracy of these fitted curves is shown in Table 2. All Adj. R-Squares are above 0.9, indicating the high fitting accuracy. As Figure 7 shows, the erosion traces development is relatively consistent. When the surface is broken, the width and depth increase dramatically. Due to the limitation of the jet diameter, the width increase gradually becomes slow down, but both width and depth of the solid solution strengthened sample are less than that of the martensite material. The water droplet erosion resistance is obviously improved after the treatment of solid solution strengthening. Since laser cladding forms a denser cladding layer on the substrate surface, it theoretically has better WDE resistance. Figure 8 shows the eroded surface micromorphologies of three angle laser cladding samples after 450 min. Since laser cladding forms a denser cladding layer on the substrate surface, it theoretically has better WDE resistance. Figure 8 shows the eroded surface micromorphologies of three angle laser cladding samples after 450 min. As can be seen from the figure, although the test time is longer than that of the substrate and the solid solution samples, no obvious erosion pitting is formed on the surface of the three samples, with only a light-colored trace. The trace widths at different angles are slightly different. The impact mark width of 30° sample was only 290 microns, and the widest impact trace of 90° sample was about 800 microns. In contrast, the solid solution sample formed a very distinct crater at 60 minutes, which was close to forming a continuous groove.
Combined with the micromorphology of the laser cladding and solid solution samples at various moments, it can be seen that different surface treatment process has a greater effect on the WDE resistance. The sample with laser cladding treatment has much better water droplet erosion resistance than that of the solid solution strengthened sample.
The brazed stellite sample exhibited excellent WDE resistance performance. During the experiment, the surface of the eroded samples still almost kept intact after a long period of test time. Under the visual observation, only the color of the WDE zone on the sample surface was found to be darkened and blackened without any distinguishable damage features. Figure 9 shows the eroded micromorphology of the stellite alloy surface at 450 min. As can be seen from the figure, although the test time is longer than that of the substrate and the solid solution samples, no obvious erosion pitting is formed on the surface of the three samples, with only a light-colored trace. The trace widths at different angles are slightly different. The impact mark width of 30 • sample was only 290 microns, and the widest impact trace of 90 • sample was about 800 microns. In contrast, the solid solution sample formed a very distinct crater at 60 min, which was close to forming a continuous groove.
Combined with the micromorphology of the laser cladding and solid solution samples at various moments, it can be seen that different surface treatment process has a greater effect on the WDE resistance. The sample with laser cladding treatment has much better water droplet erosion resistance than that of the solid solution strengthened sample.
The brazed stellite sample exhibited excellent WDE resistance performance. During the experiment, the surface of the eroded samples still almost kept intact after a long period of test time. Under the visual observation, only the color of the WDE zone on the sample surface was found to be darkened and blackened without any distinguishable damage features. Figure 9 shows the eroded micromorphology of the stellite alloy surface at 450 min. It can be seen from the figure that only a very shallow erosion trace appeared on the surface of the three angles stellite alloy samples and the trace of the 30° sample is very light and could hardly be distinguished. The erosion width of the 90° sample is also slight and is only about 400 microns. The trace of the laser cladding sample is very evident and is about 800 microns in size. Therefore, it is not difficult to deduce that the WDE resistance of the brazed stellite alloy is significantly superior to the laser cladding material.
In order to quantitatively investigate the water droplet erosion process, the mass loss of the 0Cr17Ni4Cu4Nb martensitic substrate, solid solution, cladding and brazed stellite alloy specimens were compared under the condition of 0.2 mm diameter waterjet nozzle, as shown in Figure 10. It can be seen from the figure that the mass loss of stellite alloy and laser cladding is the smallest. Thus they show the most excellent WDE resistance performance. The martensitic substrate has the worst WDE resistance. The WDE resistance of samples after solid solution strengthening is significantly improved compared with the martensitic substrate. From the mechanical properties of the samples (see Table 3), it is found that the micro Vickers hardness values of the four samples have not changed distinctly. Even the hardness of the cladding samples is slightly higher than that of the brazed stellite alloy, it can be seen that the WDE resistance of brazed stellite alloy is much higher than that of solid solution samples, and slightly better than that of cladding samples. Moreover, in the sample machining process, stellite alloy plate was directly welded on the substrate. During the erosion process, the waterjet swept to the welding edge, it would erode the welding part. However, the WDE resistance of the welding part is inferior to the brazed stellite alloy. The mass loss error of the brazed stellite alloy will be relatively larger. Even so, the mass loss of brazed stellite alloy is still the smallest, which exhibits the optimal WDE resistance. The result further indicates that hardness is not the decisive factor for the WDE resistance of brazed stellite alloy. It can be seen from the figure that only a very shallow erosion trace appeared on the surface of the three angles stellite alloy samples and the trace of the 30 • sample is very light and could hardly be distinguished. The erosion width of the 90 • sample is also slight and is only about 400 microns. The trace of the laser cladding sample is very evident and is about 800 microns in size. Therefore, it is not difficult to deduce that the WDE resistance of the brazed stellite alloy is significantly superior to the laser cladding material.
In order to quantitatively investigate the water droplet erosion process, the mass loss of the 0Cr17Ni4Cu4Nb martensitic substrate, solid solution, cladding and brazed stellite alloy specimens were compared under the condition of 0.2 mm diameter waterjet nozzle, as shown in Figure 10. It can be seen from the figure that the mass loss of stellite alloy and laser cladding is the smallest. Thus they show the most excellent WDE resistance performance. The martensitic substrate has the worst WDE resistance. The WDE resistance of samples after solid solution strengthening is significantly improved compared with the martensitic substrate. From the mechanical properties of the samples (see Table 3), it is found that the micro Vickers hardness values of the four samples have not changed distinctly. Even the hardness of the cladding samples is slightly higher than that of the brazed stellite alloy, it can be seen that the WDE resistance of brazed stellite alloy is much higher than that of solid solution samples, and slightly better than that of cladding samples. Moreover, in the sample machining process, stellite alloy plate was directly welded on the substrate. During the erosion process, the waterjet swept to the welding edge, it would erode the welding part. However, the WDE resistance of the welding part is inferior to the brazed stellite alloy. The mass loss error of the brazed stellite alloy will be relatively larger. Even so, the mass loss of brazed stellite alloy is still the smallest, which exhibits the optimal WDE resistance. The result further indicates that hardness is not the decisive factor for the WDE resistance of brazed stellite alloy. Many researchers have tried to obtain the intrinsic relationship between the WDE resistance of materials and mechanical properties such as hardness and fracture strain energy, but unfortunately all these attempts have failed. For example, at the same hardness level, the WDE resistance of stellite alloy is much higher than that of austenitic and martensitic stainless steels [19]. The fracture strain energy of alloy 718 was five times higher than that of stellite 6B alloy, but its WDE resistance was two times smaller than that of stellite 6B alloy [20]. According to the hardness test, the hardness of 12% Cr martensitic steel and stellite 6B alloy are 380 kg/mm 2 and 420 kg/mm 2 , respectively, which are basically at the same level. However, the water droplet erosion resistance of stellite 6B alloy was 6 times more than that of 12% Cr martensitic steel [21]. These results also fully indicate that hardness is not the decisive factor of the WDE resistance.
In view of this, the authors have carried out a metallographic analysis for the samples after the WDE tests. Figure 11 is the metallographic diagram of substrate surface under two different magnifications. It can be seen that the grain profile is clear and presents a lath martensite structure for martensite substrate. Many researchers have tried to obtain the intrinsic relationship between the WDE resistance of materials and mechanical properties such as hardness and fracture strain energy, but unfortunately all these attempts have failed. For example, at the same hardness level, the WDE resistance of stellite alloy is much higher than that of austenitic and martensitic stainless steels [19]. The fracture strain energy of alloy 718 was five times higher than that of stellite 6B alloy, but its WDE resistance was two times smaller than that of stellite 6B alloy [20]. According to the hardness test, the hardness of 12% Cr martensitic steel and stellite 6B alloy are 380 kg/mm 2 and 420 kg/mm 2 , respectively, which are basically at the same level. However, the water droplet erosion resistance of stellite 6B alloy was 6 times more than that of 12% Cr martensitic steel [21]. These results also fully indicate that hardness is not the decisive factor of the WDE resistance.
In view of this, the authors have carried out a metallographic analysis for the samples after the WDE tests. Figure 11 is the metallographic diagram of substrate surface under two different magnifications. It can be seen that the grain profile is clear and presents a lath martensite structure for martensite substrate. Many researchers have tried to obtain the intrinsic relationship between the WDE resistance of materials and mechanical properties such as hardness and fracture strain energy, but unfortunately all these attempts have failed. For example, at the same hardness level, the WDE resistance of stellite alloy is much higher than that of austenitic and martensitic stainless steels [19]. The fracture strain energy of alloy 718 was five times higher than that of stellite 6B alloy, but its WDE resistance was two times smaller than that of stellite 6B alloy [20]. According to the hardness test, the hardness of 12% Cr martensitic steel and stellite 6B alloy are 380 kg/mm 2 and 420 kg/mm 2 , respectively, which are basically at the same level. However, the water droplet erosion resistance of stellite 6B alloy was 6 times more than that of 12% Cr martensitic steel [21]. These results also fully indicate that hardness is not the decisive factor of the WDE resistance.
In view of this, the authors have carried out a metallographic analysis for the samples after the WDE tests. Figure 11 is the metallographic diagram of substrate surface under two different magnifications. It can be seen that the grain profile is clear and presents a lath martensite structure for martensite substrate. The sample after solid solution strengthening is shown in Figure 12. And a high-density area is formed in the solid solution zone, as shown in Figure 12a. It is preliminarily judged that the refined low-carbon martensite has a higher dislocation density, which improves the WDE resistance of the material to a certain extent. However, compared with the solid solution transition region and the substrate, it can be found that there is a honeycomb-like morphology in the solid solution strengthening transition region (see Figure 12b), which may limit the further improvement of erosion resistance. 10 low-carbon martensite has a higher dislocation density, which improves the WDE resistance of the material to a certain extent. However, compared with the solid solution transition region and the substrate, it can be found that there is a honeycomb-like morphology in the solid solution strengthening transition region (see Figure 12b), which may limit the further improvement of erosion resistance. Figure 13 shows the metallographic structure of the eroded surface of the laser cladding sample. It can be seen from Figure 13a that the surface grains after laser cladding are very dense. With the increasing distance from the cladding surface, the grains gradually become loose. And the grains are no longer visible at the substrate region far away from the surface, so the compact structure is just the key that allows materials to avoid excessive concentration of energy when impact by a waterjet, improving the toughness and impact resistance of materials. Figure 13d,e are micrographs of the no WDE area and eroded area, respectively. It can be seen that the grains structures on the surface of the eroded area are similar to those in Figure 13a. It is proved that the WDE damage is slightly in the waterjet process, only the superficial layer of the surface was destroyed. The grains remain dense. Figure 13 shows the metallographic structure of the eroded surface of the laser cladding sample. It can be seen from Figure 13a that the surface grains after laser cladding are very dense. With the increasing distance from the cladding surface, the grains gradually become loose. And the grains are no longer visible at the substrate region far away from the surface, so the compact structure is just the key that allows materials to avoid excessive concentration of energy when impact by a waterjet, improving the toughness and impact resistance of materials. Figure 13d,e are micrographs of the no WDE area and eroded area, respectively. It can be seen that the grains structures on the surface of the eroded area are similar to those in Figure 13a. It is proved that the WDE damage is slightly in the waterjet process, only the superficial layer of the surface was destroyed. The grains remain dense. Figure 14 shows the metallographic diagram of the eroded stellite alloy. Figure 14a shows the metallographic structure of stellite alloy. Figure 14b,c is the microstructure of eroded area and no WDE area on the 90° sample surface, respectively. Figure 14 shows the metallographic diagram of the eroded stellite alloy. Figure 14a shows the metallographic structure of stellite alloy. Figure 14b,c is the microstructure of eroded area and no WDE area on the 90 • sample surface, respectively. It can be seen from the figure that grain boundary is faintly visible on the stellite alloy surface and the grain structure is not obvious yet. A possible reason is that the superficial layer of the surface has not been completely destroyed.
In order to further analyze the deep reason for the excellent WDE resistance of the brazed stellite alloy, scanning electron microscope was used to focus on the microscopic observation of the darkening and blackening WDE area after 450 minutes water droplet erosion test on the brazed stellite alloy sample. The observation results are shown in Figures 15-17. From Figure 15, it can be seen that the stellite alloy consists of a cobalt-based solid solution main phase and the reinforced carbide precipitates secondary phase with shape edges. It can be seen from the figure that grain boundary is faintly visible on the stellite alloy surface and the grain structure is not obvious yet. A possible reason is that the superficial layer of the surface has not been completely destroyed.
In order to further analyze the deep reason for the excellent WDE resistance of the brazed stellite alloy, scanning electron microscope was used to focus on the microscopic observation of the darkening and blackening WDE area after 450 min water droplet erosion test on the brazed stellite alloy sample. The observation results are shown in Figures 15-17. From Figure 15, it can be seen that the stellite alloy consists of a cobalt-based solid solution main phase and the reinforced carbide precipitates secondary phase with shape edges. Combined with Figures 16 and 17, it can be seen that there is almost no plastic deformation on the stellite alloy surface and little material removal on the cobalt matrix. Thomas [17] also found that the cobalt alloy had the best WDE resistance in several metals and alloys. And the fracture strength and strain energy of the cobalt alloy had no advantage in WDE test, but the deformation mode was very unique. In the early stages of water droplet erosion, the target surface did not appear any type of pit. And the impact area formed a very uniform block type. It was not until 150 × 10 3 times of water droplet impact that the weight loss of the material was detected. The shear and fracture were presented in the most severely deformation area. Combined with Figures 16 and 17, it can be seen that there is almost no plastic deformation on the stellite alloy surface and little material removal on the cobalt matrix. Thomas [17] also found that the cobalt alloy had the best WDE resistance in several metals and alloys. And the fracture strength and strain energy of the cobalt alloy had no advantage in WDE test, but the deformation mode was very unique. In the early stages of water droplet erosion, the target surface did not appear any type of pit. And the impact area formed a very uniform block type. It was not until 150 × 10 3 times of water droplet impact that the weight loss of the material was detected. The shear and fracture were presented in the most severely deformation area. In addition, it can be clearly seen from Figures 16 and 17 that the visible darkened and blackened eroded zone of the stellite alloy forms some cracks, which are initiated at the hard carbides boundaries and propagated along the direction of carbides. Under the continuous water droplet erosion, cracks grow and connect into networks, resulting in the removal of carbide precipitates and WDE damage.
Under the impact of waterjet, the microstructure of cobalt matrix will undergo subtle changes. A prominent feature is the formation of mechanical twins. It has been demonstrated that the excellent erosion resistance of stellite alloy is due to the deformation of mechanical twins in the cobalt matrix [22][23][24][25][26]. When they are subjected to impacted, the deformation caused by the transition from a stable face-centered structure to a metastable hexagonal close-packed structure can absorb most of the energy. Preece et al. [27] reported that twins had the effect of dividing the original grains into smaller segments (the size in the range of 0.1 to 1.0 μm). As the number of impacts increases, the increasing twin density will break grains into sub-micron size, which will reduce the mean free path of dislocations, thereby limiting surface deformation and thus higher erosion resistance. A large number of studies reported in the literature also show that as the grain size decreases, the WDE resistance increases [22,25,28,29].
Through a large number of microscopic observations on the surface of the stellite alloy samples, it is found that the cracks in the erosion damage zone are initiated at the interface and the interior of the hard carbide precipitates, resulting the internal stress concentration under the cumulative impact. Then the internal stress concentration points cause the initiation and propagation of cracks along the interface and interior of the hard carbide precipitates. Due to the formation of crack network caused carbides removal and erosion damage. The material loss in stellite alloy is observed mainly at the carbide precipitation site, while the cobalt matrix that is formed mechanical twins was much less damaged than that of the carbides.
It has been reported [30] that the water droplet erosion resistance of cobalt alloy without carbides and stellite 6B alloy was experimentally investigated, and the results also confirmed that the Co alloy was slightly better than carbide-containing stellite 6B alloy. Combined with the above analysis, it is not difficult to determine that the stable face-center structure of cobalt matrix will transform to the metastable hexagonal close-packed structure under impact. And excellent WDE resistance of stellite alloy is mainly related to the deformation and energy absorption involved in this transformation. In addition, in the microscopic observation of the stellite alloy, it was found that the hard carbide in the stellite alloy is the starting point of crack formation and propagation. Under the continuous droplet impact, the formation of crack network causes carbides removal and erosion damage, which is the main reason for the mass loss of stellite alloy. It is proved that the properties of the Co-based material itself is the reason for its excellent WDE resistance, and the carbides have almost no positive contribution to its anti-erodibility. These new findings are of great significance to process methods and parameter selection of steam turbine blade materials and surface strengthened layers. Combined with Figures 16 and 17, it can be seen that there is almost no plastic deformation on the stellite alloy surface and little material removal on the cobalt matrix. Thomas [17] also found that the cobalt alloy had the best WDE resistance in several metals and alloys. And the fracture strength and strain energy of the cobalt alloy had no advantage in WDE test, but the deformation mode was very unique. In the early stages of water droplet erosion, the target surface did not appear any type of pit. And the impact area formed a very uniform block type. It was not until 150 × 10 3 times of water droplet impact that the weight loss of the material was detected. The shear and fracture were presented in the most severely deformation area.
In addition, it can be clearly seen from Figures 16 and 17 that the visible darkened and blackened eroded zone of the stellite alloy forms some cracks, which are initiated at the hard carbides boundaries and propagated along the direction of carbides. Under the continuous water droplet erosion, cracks grow and connect into networks, resulting in the removal of carbide precipitates and WDE damage.
Under the impact of waterjet, the microstructure of cobalt matrix will undergo subtle changes. A prominent feature is the formation of mechanical twins. It has been demonstrated that the excellent erosion resistance of stellite alloy is due to the deformation of mechanical twins in the cobalt matrix [22][23][24][25][26]. When they are subjected to impacted, the deformation caused by the transition from a stable face-centered structure to a metastable hexagonal close-packed structure can absorb most of the energy. Preece et al. [27] reported that twins had the effect of dividing the original grains into smaller segments (the size in the range of 0.1 to 1.0 µm). As the number of impacts increases, the increasing twin density will break grains into sub-micron size, which will reduce the mean free path of dislocations, thereby limiting surface deformation and thus higher erosion resistance. A large number of studies reported in the literature also show that as the grain size decreases, the WDE resistance increases [22,25,28,29].
Through a large number of microscopic observations on the surface of the stellite alloy samples, it is found that the cracks in the erosion damage zone are initiated at the interface and the interior of the hard carbide precipitates, resulting the internal stress concentration under the cumulative impact. Then the internal stress concentration points cause the initiation and propagation of cracks along the interface and interior of the hard carbide precipitates. Due to the formation of crack network caused carbides removal and erosion damage. The material loss in stellite alloy is observed mainly at the carbide precipitation site, while the cobalt matrix that is formed mechanical twins was much less damaged than that of the carbides.
It has been reported [30] that the water droplet erosion resistance of cobalt alloy without carbides and stellite 6B alloy was experimentally investigated, and the results also confirmed that the Co alloy was slightly better than carbide-containing stellite 6B alloy. Combined with the above analysis, it is not difficult to determine that the stable face-center structure of cobalt matrix will transform to the metastable hexagonal close-packed structure under impact. And excellent WDE resistance of stellite alloy is mainly related to the deformation and energy absorption involved in this transformation. In addition, in the microscopic observation of the stellite alloy, it was found that the hard carbide in the stellite alloy is the starting point of crack formation and propagation. Under the continuous droplet impact, the formation of crack network causes carbides removal and erosion damage, which is the main reason for the mass loss of stellite alloy. It is proved that the properties of the Co-based material itself is the reason for its excellent WDE resistance, and the carbides have almost no positive contribution to its anti-erodibility. These new findings are of great significance to process methods and parameter selection of steam turbine blade materials and surface strengthened layers.
Conclusions
Based on the high-speed waterjet test system, the WDE performances of the typical martensite blade substrate 0Cr17Ni4Cu4Nb and surface strengthening materials: laser solution strengthening samples, stellite laser cladding samples and brazed stellite alloy samples are investigated. The influence of the surface strengthened process on the water-erosion characteristics is investigated systematically. The main conclusions are as follows:
1)
After solid solution strengthening, the sample hardness has been increased and the WDE resistance has been significantly improved. However, according to the metallographic analysis, there is a suspected honeycomb defect in the strengthened region, which may limit the further improvement of the WDE resistance. 2) At the same impact angle, the WDE resistance characteristics of martensite substrate, solution strengthening, laser cladding and stellite alloy samples are in order from excellent to inferior: brazed stellite alloy, laser cladding, solid solution strengthening and martensite substrate. Laser cladding and stellite alloy materials have excellent WDE resistance. Combined with the micro Vickers hardness test, it is fully indicated that the hardness parameter is not the decisive factor of WDE resistance of brazed stellite alloy. 3) Furthermore, the WDE resistance mechanism and the failure mode of brazed stellite alloy have been revealed. It is found that the cracks in the erosion damage zone are initiated at the interior and boundary of the hard carbide precipitates. And the cracks expanded along the direction of the carbides. Under the continuous droplet impact, cracks grow and connect into networks, resulting in the removal of carbide precipitates and WDE damage. It is proved that the properties of the Co-based material itself is the reason for its excellent WDE resistance, and the carbides have almost no positive contribution to its anti-erodibility. | 9,751 | 2020-09-25T00:00:00.000 | [
"Materials Science"
] |
Intracellular Delivery of mRNA in Adherent and Suspension Cells by Vapor Nanobubble Photoporation
Vapor nanobubble (VNB) photoporation represents a promising physical technique for mRNA transfection of adherent and suspension cells. A multitude of parameters related to the VNB photoporation procedure were optimized to enable efficient mRNA transfection. VNB photoporation was found to yield five times more living, transfected Jurkat T cells as compared to electroporation, i.e., currently the standard nonviral transfection technique for T cells. Vapor nanobubble (VNB) photoporation represents a promising physical technique for mRNA transfection of adherent and suspension cells. A multitude of parameters related to the VNB photoporation procedure were optimized to enable efficient mRNA transfection. VNB photoporation was found to yield five times more living, transfected Jurkat T cells as compared to electroporation, i.e., currently the standard nonviral transfection technique for T cells. Efficient and safe cell engineering by transfection of nucleic acids remains one of the long-standing hurdles for fundamental biomedical research and many new therapeutic applications, such as CAR T cell-based therapies. mRNA has recently gained increasing attention as a more safe and versatile alternative tool over viral- or DNA transposon-based approaches for the generation of adoptive T cells. However, limitations associated with existing nonviral mRNA delivery approaches hamper progress on genetic engineering of these hard-to-transfect immune cells. In this study, we demonstrate that gold nanoparticle-mediated vapor nanobubble (VNB) photoporation is a promising upcoming physical transfection method capable of delivering mRNA in both adherent and suspension cells. Initial transfection experiments on HeLa cells showed the importance of transfection buffer and cargo concentration, while the technology was furthermore shown to be effective for mRNA delivery in Jurkat T cells with transfection efficiencies up to 45%. Importantly, compared to electroporation, which is the reference technology for nonviral transfection of T cells, a fivefold increase in the number of transfected viable Jurkat T cells was observed. Altogether, our results point toward the use of VNB photoporation as a more gentle and efficient technology for intracellular mRNA delivery in adherent and suspension cells, with promising potential for the future engineering of cells in therapeutic and fundamental research applications.
Introduction
In recent years, mRNA has gained immense interest as a novel class of nucleic acid therapeutics [1][2][3]. In contrast to DNA therapeutics, mRNA does not require nuclear entry to be functional, being translated instantly after reaching the cell cytoplasm and thus avoiding potential insertional mutagenesis. In addition, mRNA-based therapeutics have a reduced risk of long-term side effects as they are only transiently active inside the cell. The affordability and ease of production have furthermore advanced the development of mRNA as a versatile class of nucleic acid therapeutics, while inherent obstacles such as unfavorable immunogenicity and short half-life time were addressed [1,2,4,5]. Driven by these advances, mRNA has also emerged as a promising tool for ex vivo engineering of adoptive T cells [2]. For this, patient-derived T cells are expanded ex vivo and engineered for targeted cytotoxicity against cancer or viral-infected cells, prior to re-injection into the patient. Recently, the first two chimeric antigen receptor (CAR) T cell products, i.e., Kym-riah™ (tisagenlecleucel; Novartis) [6] and Yescarta™ (axicabtagene ciloleucel; Kite Pharma, Gilead) [7,8], have been approved by the US Food and Drug Administration (FDA) [9,10]. Genetic modification of the T cells is performed using engineered viruses carrying a vector with the tumor antigen-specific CAR. The use of these viral vectors, however, comes with the limitations of being costly, time-consuming and often having variable results [11][12][13]. In addition, persistent expression of the CAR construct and risk of insertional mutagenesis contributes to their unfavorable safety profile [11]. While DNA transposons, e.g., sleeping beauty transposon, are considered as a safer nonviral approach, the risk of persistent side effects and insertional mutagenesis remains. mRNA, with its inherent safety features and ease of use, has therefore been raised as a promising alternative over viral-or transposon-based methods for the generation of adoptive T cells [2,14]. Gene editing by transient Cas9 mRNA expression, for example, became of interest to facilitate highly efficient therapeutic T cell engineering, while reducing the risk of off-target effects and overcoming DNA-related cytotoxicity [15,16].
The success of mRNA in cell-based immunotherapy strongly relies on the ability to efficiently deliver the mRNA molecules to target immune cells. Of note, improving the efficiency of current transfection technologies is also expected to strongly impact the scalability and production cost of cell-based therapies [5]. Many different technologies have emerged over the years to address the ever recurring issue of intracellular delivery of mRNA, though each of them faces divergent limitations. mRNA is a large negatively charged, single-stranded nucleic acid that can be encapsulated in synthetic nanocarriers for protection against ubiquitous serum nucleases and enhancing endocytic uptake [17]. Gold nanoparticles, for example, have been extensively studied as drug and gene delivery carriers because of their favorable physicochemical properties [18][19][20][21][22][23][24]. However, carrier-induced cytotoxicity and low transfection efficiency are common disadvantages for T cells [3]. Physical delivery methods have recently gained attention when it comes to in vitro and ex vivo cell modification, featuring a broad applicability on different cell types and cargos [17,25]. Electroporation, which makes use of strong electric fields to deliver nucleic acids to the cell interior, is currently the preeminent tool for mRNA transfections of hard-to-transfect immune cells [2,26]. It should be noted, however, that electroporation was amply shown to come with significant loss of cell viability, induction of unwanted phenotypic changes or loss of cell functionality [17,[27][28][29][30]. Laser-assisted photoporation, sometimes also referred to as optoporation, recently came up as a promising gentler technique for intracellular delivery of biological macromolecules [23,24,31]. Wayteck et al., for instance, previously showed in a one-on-one comparison between photoporation and electroporation on murine T cells that a threefold higher percentage of siRNA-transfected viable cells was obtained by photoporation as it induced much less cytotoxicity compared to electroporation [32].
In its most straightforward form, photoporation is obtained by focusing high-intensity femtosecond laser pulses onto the cell membrane, thereby inducing very local membrane permeabilization and allowing extracellular molecules to enter the cell cytoplasm [31]. It has been shown to enable efficient mRNA transfection in primary rat neurons even on a subcellular level [33,34], as well in single neurons of zebrafish embryos [35]. Although proven effective for single-cell transfections, its general usability is limited by low-throughput and labor-intensive procedures. The former can, however, be substantially increased by making additional use of photothermal nanoparticles. After attaching to 1 3 the cell membrane and applying laser irradiation, they can very locally disturb cell membrane integrity. The advantage over traditional photoporation is that these nanoparticles substantially reduce the required light density to enhance membrane permeability, thus allowing to use broad laser beams and resulting in an immensely increased photoporation throughput [36]. In addition, photothermal effects can be efficiently achieved with much less expensive nanosecond pulsed lasers. A particularly effective photothermal phenomenon for creating transient pores in the cell membrane is the generation of vapor nanobubbles (VNBs). These VNBs nucleate from the nanoparticles, such as plasmonic gold nanoparticle (AuNPs), by the rapid evaporation of the immediate surrounding liquid upon pulsed laser irradiation, while heat diffusion to the environment is negligible [37][38][39]. In addition to AuNPs, graphene-based nanoparticles [40], carbon black nanoparticles [41] and different types of metal alloys [42] have also been suggested for the same purpose. By rapid expansion and subsequent collapse of the VNB after absorption of a laser pulse, high-pressure shockwaves and fluid shear stress can cause physical damage to the neighboring cell membrane structures. In turn, this results in the formation of very localized and transient membrane pores, allowing extracellular cargo to passively diffuse into the cell interior [32,38,43,44]. Conveniently, the technique can be applied to both adherent cells [38] and suspension cells [32,44], while it is compatible with any type of transparent cell recipient (e.g., culture flasks, multiwell plates). Furthermore, it offers the possibility to transfect even single cells in high throughput [45,46].
While VNB photoporation has been demonstrated to be suitable to transfect a broad variety of cell types with many different cargos like siRNA [32,38], nanobodies [40] and other proteins [43], we here report for the first time its suitability for the intracellular delivery of mRNA. Since mRNA is a considerably large (between 20-200 nm), highly negative charged macromolecule compared to smaller antisense oligonucleotides or proteins (between 1-20 nm), effective intracellular delivery of these molecules across the negatively charged cell membrane is particularly challenging [17]. We performed experiments on HeLa and Jurkat T cells as models for adherent and suspension cells. Jurkat T cells serve as a valid model for primary human T cells [47] and are routinely used for screening and optimization of CAR constructs [48][49][50][51][52]. We started by systematically optimizing several parameters related to the VNB photoporation procedure, including AuNP concentration, laser fluence and transfection buffer. We found that for HeLa cells transfection efficiencies up to 38% could be obtained while maintaining a high level of cell viability. In Jurkat T cells, transfection efficiencies up to 20% could be obtained, which could be further enhanced to 45% by applying the procedure up to three times. These results were compared to mRNA transfections by electroporation, which is currently the method of choice for nonviral genetic engineering of T cells. Electroporation appeared to be extremely toxic to Jurkat T cells leading to a reduction by ~ 95% of the metabolic activity of the treated cells, even though in the 5% viable cells very high transfection efficiencies were obtained. Hence, VNB photoporation yielded five times more transfected viable Jurkat T cells as compared to electroporation. Altogether, this study establishes VNB photoporation as a promising, more gentle approach for mRNA transfections of adherent and suspension cells, which is expected to be beneficial for both research and therapeutic purposes.
In vitro Transcription of MLKL-mRNA
Murine MLKL-encoding mRNA was produced using a pIVTstab-MLKL template, as designed by Van Hoecke et al. [53]. The plasmid was first linearized by a PstI restriction digest (Promega, Leiden, the Netherlands), following purification using a QIAquick PCR purification kit (Qiagen, Chatsworth, CA, USA). MLKL-mRNA was obtained by in vitro transcription with the mMESSAGE mMACHINE™ T7 ULTRA Transcription Kit (Life Technologies, Merelbeke, Belgium), according to the manufacturer's instructions. The in vitro transcribed MLKL-mRNA was eventually purified by LiCl precipitation and stored at − 80 °C until further use.
Analysis of mRNA Integrity by Agarose Gel Electrophoresis
mRNA integrity after incubation with HeLa cells, either with or without a prior washing step with Opti-MEM, was assessed by native agarose gel electrophoresis. Prior to addition of the mRNA, the cells were washed once with DPBS, followed by an Opti-MEM washing step of 10 min (only for specified samples). Next, eGFP-mRNA was diluted in Opti-MEM to a final concentration of 0.3 µM and incubated on the cells for the specified time (5, 10, 20, or 30 min). mRNA diluted in Opti-MEM, mRNA incubated with 10 µg/ml RNAseA (Ambion, Merelbeke, Belgium) and a 0.5-10 kb RNA millennium™ marker were taken along as controls. The samples were loaded on a 1% agarose gel, and gel electrophoresis was performed at 100 V for 30 min. For visualization of the mRNA integrity, a Bio-Rad UV transilluminator 2000 (Hercules, CA, USA) was used.
Visualization and Quantification of AuNP Attachment
Cells were washed once with DPBS (HeLa) or culture medium (Jurkat, 250 × 10 3 cells) and incubated with AuNP in culture medium for 30 min at 37 °C. Next, the cells were washed once with DPBS (HeLa) or culture medium (Jurkat) and supplemented with new culture medium. AuNP attachment to the cells was visualized by confocal reflection microscopy (C1si or C2, Nikon BeLux, Brussels, Belgium) using a 60× water immersion lens (Plan Apo, NA 1.2, Nikon BeLux, Brussels, Belgium). Jurkat cells were additionally incubated with CellMask deep red (1000×) and Hoechst33342 (1000×) for 10 min at 37 °C to stain the cell membrane and nucleus, respectively. HeLa cells were first incubated with CellTrace Far Red (500×) for 20 min at 37 °C to stain the cytoplasm, after which they were washed twice with culture medium and incubated with Hoechst33342 for 10 min at 37 °C. After staining, the cells were washed with culture medium and imaged using confocal microscopy. Image analysis was performed using the ImageJ software (FIJI, https ://Fiji.sc/), including merging the different fluorescent or reflection images into a composite and dilation of the AuNP scattering signal (HeLa), to visualize and quantify the number of cellattached AuNPs. For each AuNP concentration and each independent experiment, a minimum of 50 (HeLa) or 150 (Jurkat E6-1) cells were analyzed for AuNP attachment by combination of multiple confocal reflection microscopy images recorded for different AuNP incubation samples (≥ 2 wells).
Determination of the VNB Generation Threshold
A previously reported in-house developed optical setup was used to determine the laser pulse fluence threshold [24,38], which is defined as the laser fluence of a single laser pulse at which 90% of the irradiated AuNPs generate a VNB. In short, 60 nm AuNPs (stock: ~ 4 × 10 10 AuNPs mL −1 ) were first diluted 50× in ddH 2 O and transferred to a 50 mm γ-irradiated glass bottom dish (MatTek Corporation, Ashland, MA, USA). After sedimentation, the AuNPs sample was mounted on an inverted microscope (TE2000, Nikon BeLux, Brussels, Belgium) and irradiated with a pulsed laser (~ 7 ns) tuned at a wavelength of 561 nm (Opolette™ HE 355 LD, OPOTEK Inc, Carlsbad, CA, USA). The laser beam diameter at the sample was 150 µm. The laser pulse energy was monitored using an energy meter (LE, Energy Max-USB/RS sensors, Coherent). An electronic pulse generator (BNC575, Berkeley Nucleonics Corporation) triggered individual laser pulses and synchronized an EMCCD camera (Cascade II: 512, Photometrics) to record dark-field microscopy images before, during and after VNB formation. VNB can be seen distinctly in dark-field microscopy images as brief bright localized flashes of light, due to the increase in light scattering during their lifetime. By quantifying the number of visible VNBs within the laser pulse area (150 µm diameter) for increasing laser pulse fluences, the VNB generation threshold was determined.
mRNA Transfection by VNB Photoporation
Cells were incubated with AuNPs at different concentrations, as described above. After washing away unbound AuNPs, cells were incubated with Opti-MEM for 10 min as it proved to be beneficial to minimize mRNA degradation. Next, cells were photoporated in the presence of mRNA diluted in the indicated transfection buffer. Opti-MEM, DMEM/F-12, DPBS+ and DPBS were all used as transfection buffer in various experiments. After laser treatment, the cells were supplemented with fresh cell culture medium and allowed to settle for 6 h (RLuc) or 24 h (eGFP) prior to analysis of mRNA expression or cell viability.
Transfection of Jurkat Cells by Nucleofection
Jurkat cells were transfected with eGFP-mRNA using a 4D-Nucleofector™ according to the manufacturer's recommendations with the SE Cell line 4D-Nucleofector kit (V4XC-1032) (Lonza, Breda, the Netherlands). First, 2 × 10 5 Jurkat cells together with 2 µg eGFP-mRNA were resuspended in 20 µL SE cell line solution and transferred to a 16-well Nucleocuvette™ strip. The cells were transfected using the pulse program CL-120 and immediately after supplemented with 80 µL preheated culture medium. Finally, 50 µL of that cell suspension was transferred to a 96-well plate already containing 150 µL preheated culture medium and incubated at 37 °C, 5% CO 2 for 24 h prior to analysis by confocal microscopy, flow cytometry and viability assays.
Analysis of eGFP Expression by Confocal Microscopy and Flow Cytometry
Efficiency of eGFP-mRNA transfection was visualized by confocal microscopy (C1si, Nikon BeLux, Brussels, Belgium) using a 10 × objective lens (Plan Apo, NA 0.45) or 60× water immersion lens (Plan Apo, NA 1.2, Nikon BeLux, Brussels, Belgium). Quantification of the percentage eGFPpositive cells was performed by flow cytometry using a CytoFLEX flow cytometer (Beckman Coulter, Suarlée, Belgium). The resulting flow cytometry data were analyzed using FlowJo (Treestar Inc, Ashland, USA) software.
Analysis of RLuc mRNA Expression
Efficiency of RLuc mRNA expression was determined 6 h after VNB photoporation using the Renilla-Glo™ Luciferase assay system (Promega, Leiden, the Netherlands). In short, 50 × 10 3 Jurkat cells in 50 µL culture medium were combined with an equal volume of Renilla-Glo™ Luciferase Assay Reagent. After 10 min, the luminescent signal was measured using a GloMax™ luminometer (Promega, Leiden, the Netherlands). The luminescent signal of each condition was background subtracted (wells with reagent but no cells) and normalized relative to the untreated control.
CellTiter-Glo® Viability Assay
Viability of HeLa, B16F10 cells or Jurkat cells was assessed for 18 h (B16F10) or 24 h (HeLa, Jurkat) after VNB photoporation or nucleofection using the CellTiter-Glo® luminescent cell viability assay, as recommended by the manufacturer (Promega, Leiden, the Netherlands). Briefly, HeLa, B16F10 and Jurkat cells were supplemented with an equal volume of CellTiter-Glo® reagent for each well, mixed for 5-10 min using an orbital shaker (120 rpm) and transferred to an opaque 96-well plate. After allowing the plate to stabilize for 10 min, the luminescent signal of each well was measured using a GloMax™ luminometer (Promega, Leiden, the Netherlands).
Evaluation of Cell Viability and Cell Proliferation by Trypan Blue Cell Counting
HeLa cells were first harvested by trypsinization (0.25% trypsin/EDTA), following neutralization with cell culture medium. HeLa or Jurkat cell density in each condition was assessed using a Bürker counting chamber (Brand GMBH + CO KG, Wertheim, Germany) and trypan blue exclusion (0.4%, Sigma-Aldrich, Overijse, Belgium). Cell viability of the different samples was calculated relatively to their respective untreated control. Cell growth was normalized against the untreated control at day 0 and followed for up to 5 days.
Statistical Analysis
All data are shown as mean ± standard deviation, unless stated differently. Statistical differences were analyzed using the GraphPad Prism 8 software (La Jolla, CA, USA). The statistical tests used in each figure are mentioned in the figure caption. Statistical differences with a p value < 0.05 were considered significant.
VNB Photoporation Procedure for mRNA Transfection
In this study, we investigated the applicability of VNB photoporation using 60 nm cationic PDDAC-coated AuNPs as photothermal nanoparticles for mRNA transfection. AuNPs with a diameter of 60 nm were previously described to be ideal photothermal sensitizers, requiring a minimum bubble nucleation threshold laser fluence [54]. The experimental procedure for mRNA transfection by VNB photoporation is illustrated in Fig. 1a. For transfection by VNB photoporation, cells are first incubated with cationic AuNPs that will be adsorbed to the cell surface.
After washing away unbound AuNPs, irradiation with a single laser pulse (7 ns) leads to the generation of VNBs arising from the cell-bound AuNPs. The inevitable collapse of the VNBs when the thermal energy is consumed causes local pore formation in the cell membrane, allowing extracellular mRNA molecules to diffuse through these membrane pores directly into the cytoplasm. Effective generation of these VNBs can be visualized using dark-field microscopy as a result of an increased amount of light scattering during their lifetime (Fig. S1a). Upon laser irradiation and subsequent VNB generation, the AuNPs are known to fragment into smaller pieces that scatter less light. These AuNP fragments are therefore not visible anymore in the laser-irradiated region [23,40]. By quantification of the number of generated VNBs within a defined laser irradiated area as a function of the laser fluence (i.e., energy per unit area), the so-called VNB generation threshold fluence was assessed (Fig. S1b). This value is defined as the fluence at which VNBs are formed with 90% certainty and was, in good agreement with previously reported work [44], determined to be 0.9 J cm −2 .
Considering the inherent labile nature of naked mRNA, premature degradation of these nucleic acids prior to transfection can easily take place. To test this, a native agarose gel electrophoresis assay was performed to qualitatively evaluate the physical integrity of the mRNA after incubation on the cells (Fig. 1b). As we observed that five minutes of incubation of the mRNA solution on HeLa cells already resulted in complete mRNA degradation, an Opti-MEM washing step for 10 min prior to the addition of mRNA to the cells was included to wash away any remaining RNAses as much as possible. After that, the mRNA remained intact for at least 10 min, which is sufficient as the photoporation procedure only lasts ~ 3 min. This washing step was, therefore, included in all further experiments before performing the photoporation procedure.
mRNA Transfection of Adherent Cells by VNB Photoporation
To date, the applicability of nanoparticle-sensitized photoporation for transfection of mRNA has not yet been investigated. The HeLa human epithelial adenocarcinoma cell line served here as a reference cell type for initial optimization, as it has already previously been used extensively to quantify intracellular delivery of a wide range of molecules (e.g., siRNA and nanobodies) by VNB photoporation [38,40,55]. Different parameters related to the VNB photoporation procedure were optimized to reach maximum transfection efficiency with acceptable cytotoxicity, including AuNP concentration, laser fluence, transfection buffer and mRNA concentration. In concordance with the vast majority of scientific studies on adherent cell lines, a cytotoxicity threshold level of 80% was chosen for HeLa cell experiments.
First, different AuNP concentrations and laser fluences were screened for transfection efficiency and cell viability. Cells were incubated for 30 min with AuNP concentrations of 4, 8, and 16 × 10 7 AuNPs mL −1 (Fig. 2a). After washing, this led to ~ 3 ± 1 AuNPs, 5 ± 1 AuNPs and 10 ± 2 AuNPs per cell on average (mean ± SD), as determined by confocal reflection microscopy (Fig. S2). Next, laser irradiation (561 nm) was applied such that every cell in the sample essentially received a single laser pulse of 1.8 J cm −2 , which is about twice the VNB generation threshold for these gold nanoparticles and therefore ensures effective VNB generation [44]. Using 0.3 µM eGFP-mRNA, up to 21% eGFP-positive cells were obtained depending on the AuNP concentration. At the same time, a slight drop in cell viability was seen 24 h after photoporation, as measured by the CellTiter-Glo assay and further confirmed by a trypan blue cell counting assay (Fig. S3). When considering 20% loss of metabolic activity as a commonly chosen acceptable level of cytotoxicity, 8 × 10 7 AuNPs mL −1 (~ 5 AuNPs/cell) was selected as the most optimal concentration, yielding about 16% eGFP-positive cells. Higher laser fluences were previously suggested to result in bigger VNBs and membrane pores [38] and could therefore further enhance the delivery efficiency of these high molecular weight mRNA molecules. Using the previously optimized AuNP concentration, three different laser fluences were Next, we investigated the influence of transfection buffer on eGFP-mRNA transfection efficiency, which reportedly can greatly influence cell viability and transfection efficiency of physical transfection methods [17]. Therefore, we performed photoporation experiments on HeLa cells in different commercially available buffers or media, including Opti-MEM, Dulbecco's phosphate buffered saline with (DPBS+) or without Ca 2+ /Mg 2+ (DPBS−) or DMEM/F-12. Confocal microscopy images showed that transfection efficiency was highest for DPBS+ (Fig. 3a). This could be quantitatively confirmed by flow cytometry, with a 1.55-fold increase in the number of transfected cells as compared to Opti-MEM (Fig. 3b), while cell viability remained > 80% (Fig. 3c). Based on these results, DPBS + was selected as transfection buffer for all further transfections of HeLa cells.
Next, we evaluated the effect of increasing the mRNA concentration (0.3, 0.9, and 1.5 µM). As can be seen from the flow cytometry data in Fig. 4a, the percentage of eGFPpositive cells increased for higher mRNA concentrations, reaching up to 38% eGFP-positive cells for 1.5 µM mRNA. This trend is furthermore illustrated in Fig. 4b, showing contour plots that display eGFP expression 24 h after a representative mRNA transfection experiment. Taken together, the results above provide a first proof-of-concept on the applicability of VNB photoporation for intracellular delivery of mRNA. Moreover, extensive optimization of different parameters related to the photoporation procedure allowed to obtain favorable mRNA transfection efficiencies of up to 38%.
Finally, to provide further proof that successful mRNA transfections are not limited to the eGFP-mRNA used so far, we proceeded with the transfection of murine MLKL (mixed lineage kinase domain-like)-encoding mRNA in B16F10 murine melanoma cells. MLKL is a known necroptosis executioner, i.e., a type of immunogenic cell death, so that MLKL-mRNA transfection is expected to cause decreased cell viability [53]. Optimized VNB photoporation conditions for B16F10 cells were previously determined by our group, as reported by Van Hoecke & Raes et al. [43]. The results in Fig. S5 show that a significant drop in cell viability of 17% is obtained after transfection of MLKL-encoding mRNA in comparison with the VNB photoporation control without MLKL-mRNA. This level of mRNA transfection is in line with our expectations, given that eGFP-mRNA transfection efficiencies of ~ 16% are obtained in HeLa cells using similar VNB photoporation conditions (0.3 µM, 8 × 10 7 AuNPs mL −1 , 1.8 J cm −2 ). With these results, we demonstrated the applicability of the VNB photoporation technology for intracellular delivery of functional mRNA molecules as well. . Cell viability values were determined with the CellTiter-Glo assay and expressed relatively to the untreated control (n = 3). One-way ANOVAs with Dunnett's multiple comparison test were performed to determine statistical differences (ns = nonsignificant; *p < .05; **p < .01; ***p < .001) 1 3
Transfection of Jurkat T Cells with eGFP-mRNA by VNB Photoporation
The Jurkat E6-1 human leukemic T cell line was used here as a model for primary human T cells [47]. Analogous to the HeLa cell transfection experiments, different key parameters in the VNB photoporation procedure were first optimized for Jurkat cells (Fig. 5), i.e., (1) AuNP concentration, (2) laser fluence and (3) transfection buffer. Jurkat cells were first incubated for 30 min with increasing AuNP concentrations, ranging from 1 to 16 × 10 7 AuNPs mL −1 . Using confocal reflection microscopy, it was found that the corresponding number of cell-attached AuNPs ranged from ~ 1 to ~ 5 AuNP/cell (Fig. S6). After transfection by VNB photoporation with a laser fluence of 1.8 J cm −2 , again an increasing percentage of eGFP-positive cells was obtained for increasing AuNP concentrations, with a concomitant decrease in cell viability (Fig. 5a). Eventually aimed at producing therapeutic engineered patient-derived T cells, in this case it is of interest to re-express these data as the percentage of transfected living cells. Indeed, limited T cell numbers are typically collected from profoundly lymphopenic patients owing to multiple previous rounds of cancer treatment, highlighting the need to maximize the Fig. 5b, an optimum is found for 4 × 10 7 AuNPs mL −1 (~ 2 AuNPs/cell) at which ~ 13% of the initial cell population is viable and transfected. In the next section, we will put these results in perspective against transfection by electroporation. As a next step, different laser fluences were tested using a fixed AuNP concentration of 4 × 10 7 AuNPs mL −1 . eGFP expression was evaluated both qualitatively by confocal microscopy (Fig. S7) and quantitatively by flow cytometry (Fig. 5c, d). The percentage of positive cells did not increase with the laser fluence, but cell viability did decrease slightly (Fig. 5c). As a result, the best yield of living and transfected cells (~ 14%) was obtained for the lowest laser fluence of 0.9 J cm −2 (Fig. 5d). When evaluating the effect of different transfection buffers, contrary to HeLa cells, DPBS+ did not enhance eGFP-mRNA transfection of Jurkat cells (Fig. 5e, f). Therefore, we chose to continue further experiments on Jurkat cells using Opti-MEM as transfection buffer. These optimized conditions were furthermore shown to enable effective Luc mRNA transfection of Jurkat cells (Fig. S8).
Until this point, the most optimal conditions has led to 75% viable Jurkat cells of which 20% are transfected with eGFP-mRNA. This means that after applying one time the photoporation procedure, there remain still 60% Jurkat cells that are alive but untransfected. As such, it is of interest to try to repeat the photoporation procedure to see whether that can further enhance the final yield of living transfected cells. Figure 6 shows the results for 1× , 2× and 3× photoporation of Jurkat cells. Between each of the procedures, cells were allowed to recover for 30 min, after which cells were again incubated with AuNP for 30 min and washed with Opti-MEM before photoporating again with eGFP-mRNA. After 2× photoporation, there remained 61% viable cells, of which now 33% are positive for eGFP. Repeating photoporation for a third time led to 45% viable cells, of which 45% was positive for eGFP. These results are summarized in Fig. 6b, showing for each repetition the fraction of nonviable (grey), viable untransfected (blue) and viable transfected (green) cells. Repeating photoporation two times increased the transfected cell yield significantly to 20% (p < 0.05). Repeating photoporation a third time did not produce a net beneficial effect as the increase in the number of transfected living cells is compensated for by an increase in cell death as well.
VNB Photoporation Produces More Living mRNA-Transfected Jurkat Cells than Nucleofection
Electroporation is currently the most common nonviral technique for transfection of nucleic acids and ex vivo modification of T cells [9]. Having previously optimized the VNB photoporation procedure for mRNA transfection of Jurkat cells, we here compared our technology with nucleofection as a stateof-the-art commercial electroporation system (Fig. 7). Nucleofection of Jurkat cells was performed using the optimized protocol from the manufacturer (Pulse code: CL-120; SE cell line solution); 24 h after transfection, a drastic impact on cell viability was observed with only 4% viable cells (Fig. 7a), which is in concordance with several other studies on transfection of lymphocytes by electroporation [27,28]. Nearly all of those (98%) were transfected with eGFP-mRNA, leading to final yield of 4% living transfected cells after electroporation (Fig. 7b). This is about 5 × less as what we obtained with the two times repeated photoporation procedure.
In addition to acute cytotoxicity shortly after transfection, nucleofection was previously shown to significantly impact on the long-term behavior of T cells [28]. For this, we here extended the comparison between VNB photoporation and nucleofection by follow-up of the cell viability (Fig. 7c, d) and cell proliferation (Fig. 7e, f) up to 5 days after Jurkat T cell transfection. While cell viability remained favorable (> 60%) for VNB photoporation 5 days post-transfection, no improvement in cell viability was observed for cells transfected by nucleofection. We furthermore found no significant difference in cell growth between the untreated cells and photoporated cells, whereas no sign of recovery was observable even 5 days post-nucleofection. Altogether, these data put VNB photoporation forward as a more gentle approach for mRNA transfection of T cells.
Discussion
In the last few years, cell-based therapeutics such as CAR T cells have emerged as a very promising approach for the treatment for hematological malignancies [13]. In 2020, over 500 clinical trials employing CARs have been reported worldwide, which clearly highlights the enthusiasm for adoptive cell therapies [56]. The success of T cell-based therapies, however, strongly depends on the ability to engineer these immune cells [9,57]. Viral vectors are currently the clinical and commercial standard for this purpose, but they face multiple issues such as immunogenicity, high cost and variable outcomes. Indeed, transduction efficiencies typically range from a few percentages to over 80% in reported clinical trials [58][59][60]. As a consequence, mRNA-based cell therapies have come up as a safer and cheaper alternative to viral transductions [2]. In this work, we report for the first time on the use of VNB photoporation as a promising physical technique for gentle but efficient mRNA transfections. In its most common implementation, VNB photoporation harnesses a combination of plasmonic gold nanoparticles attached to the cell membrane and laser irradiation to transiently generate membrane pores and enable intracellular delivery of macromolecules. An incubation step of 30 min was previously found convenient to get the AuNPs well positioned for VNB photoporation, being either endocytosed but still in close proximity of the cell membrane (e.g., HeLa) or adsorbed to the cell membrane (e.g., Jurkat) [38,44]. This AuNP incubation alone did not cause any significant cytotoxicity, which is in line with previously reported work on comparable AuNPs showing no impairment of cell viability or long-term cell homeostasis [23,38]. At first, we evaluated and systematically optimized the VNB photoporation procedure for transfection of mRNA in the adherent HeLa cell line as a proof-of-concept. Several characteristics of mRNA make its intracellular delivery challenging, including their relatively large size, strong negative charge and susceptibility to degradation by nucleases [17]. The latter was indeed something we encountered in our study as well. Even though nucleotide-modified mRNA was used, gel electrophoresis clearly showed rapid degradation of mRNA within a few minutes after addition to the cultured cells [61]. This prompted us to include an extra washing step to remove remaining serum nucleases, which could prevent mRNA degradation for at least 10 min. While this is still quite short, it is sufficient to carry out the photoporation procedure which only took ~ 3 min. Based on earlier reports in the literature on the influence of the transfection buffer [17,62], we also tried out different buffers for photoporation. We found that the percentage of transfected HeLa cells could be increased by a factor of ~ 1.5 using DPBS+ (containing Ca 2+ and Mg 2+ ) as transfection buffer instead of Opti-MEM. While supplementation with Ca 2+ has been suggested to influence membrane repair kinetics [17,63], we rather hypothesize that Ca 2+ and Mg 2+ may bind to mRNA, resulting in a reduced electrostatic repulsion between the mRNA molecules and the cell membrane. This is because the same enhanced effect of DPBS + was not observed for mRNA transfection of Jurkat cells which indeed have a lower density of negatively charged glycosaminoglycans on their cell membrane [44,64]. Analogous to other physical transfection approaches, VNB photoporation locally disturbs the integrity of the plasma membrane and allows direct access to the cell cytoplasm. Once membrane pores are formed, mRNA molecules have only a short period of time (seconds to minutes) to reach the cell cytoplasm before membrane integrity is restored. Translocation of mRNA molecules to the cell cytoplasm is mainly thought to occur by passive diffusion during the pore lifetime [31]. For this, higher concentrations of the mRNA molecules were thought to increase the probability of mRNA molecules reaching the cytoplasm. Indeed, the percentage eGFP-positive cells reached up to 38% when using an mRNA concentration of 1.5 µM.
In the field of T cell-based therapeutics, the Jurkat T cell line is a frequently used model for primary human T cells [47]. Jurkat cells are, for instance, routinely used for initial in vitro screenings of novel CAR or engineered T cell receptor designs. A method that enables efficient and quick screening of different CAR constructs, without the need for designing a new dedicated viral vector for each construct, is therefore highly desirable [48][49][50][51][52]65]. As a consequence, we selected the Jurkat T cell line to deliver the proof-of-concept that photoporation holds promise for the production of engineered T cells by mRNA transfections. Considering that high levels of transfection efficiency and cell recovery are both essential in the manufacturing of clinical-grade adoptive T cells [9], we expressed our transfection data in terms of the percentage of transfected, living cells. We demonstrated that photoporation could produce 14% transfected, living Jurkat cells. In addition, we showed that repeating the photoporation procedure a second time increased the transfected, living cell yield further to 20%. Most notably, this was fivefold more than what was obtained with nucleofection as a stateof-the-art electroporation technology. This is primarily due to the vast difference in the level of cytotoxicity induced by both techniques. Indeed, 24 h after treatment the metabolic activity of electroporated Jurkat cells had dropped dramatically to only ~ 4%, while this remained over 60% after two consecutive photoporation treatments. These results are in line with previous work on siRNA transfection of murine T cells [32], where photoporation yielded three times more living transfected cells as compared to electroporation.
Although electroporation was previously proven successful for mRNA transfection of T cells with efficiencies > 90% [66][67][68][69], more recent studies have raised the striking issue of extremely high acute cytotoxicity [27,28], as we also showed here. Moreover, we demonstrated that Jurkat cells did not recover even 5 days post-nucleofection, whereas the cells treated by VNB photoporation maintained their proliferative potential. Apart from acute cytotoxicity, loss of functionality and nonspecific and unintentional changes in the cellular phenotype have been reported before as disadvantages of electroporation [27,28]. These unfavorable effects were also shown to negatively influence the survival and in vivo potency of T cells to suppress tumor growth [27,70]. At the same time, injection of nonviable T cells upon adoptive cell transfer can elicit immune responses and promote toxicity in vivo [71]. Spurred by the positive findings in our study, it will therefore be of interest to investigate the use of VNB photoporation for mRNA transfection of primary human T cells and its influence on T cell homeostasis and therapeutic functionality.
Conclusion
Gold nanoparticle-mediated VNB photoporation proves to be a promising approach for safe and efficient intracellular mRNA delivery in both adherent and suspension cells. After rigorous optimization of different parameters, a good balance between mRNA transfection efficiency and cell survival was obtained. Most importantly, comparison of VNB photoporation and electroporation for mRNA transfection of Jurkat T cells indicated a marked fivefold increase in the percentage of transfected living cells for photoporation. These results position the VNB photoporation technology as a promising, more gentle approach toward safe and efficient engineering of T cells.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creat iveco mmons .org/licen ses/by/4.0/. | 8,936.2 | 2020-09-27T00:00:00.000 | [
"Biology",
"Medicine"
] |
Review on Machine Learning Techniques to predict Bipolar Disorder
: Bipolar disorder, a complex disorder in brain has affected many millions of people around the world. This brain disorder is identified by the occurrence of the oscillations of the patient’s changing mood. The mood swing between two states i.e. depression and mania. This is a result of different psychological and physical features. A set of psycholinguistic features like behavioral changes, mood swings and mental illness are observed to provide feedback on health and wellness. The study is an objective measure of identifying the stress level of human brain that could improve the harmful effects associated with it considerably. In the paper, we present the study prediction of symptoms and behavior of a commonly known mental health illness, bipolar disorder using Machine Learning Techniques. Therefore, we extracted data from articles and research papers were studied and analyzed by using statistical analysis tools and machine learning (ML) techniques. Data is visualized to extract and communicate meaningful information from complex datasets on predicting and optimizing various day to day analyses. The study also includes the various research papers having machine Learning algorithms and different classifiers like Decision Trees, Random Forest, Support Vector Machine, Naïve Bayes, Logistic Regression and K-Nearest Neighbor are studied and analyzed for identifying the mental state in a target group. The purpose of the paper is mainly to explore the challenges, adequacy and limitations in detecting the mental health condition using Machine Learning Techniques.
Introduction
World Health Organization WHO implies, a person having healthy mind and physical fitness is a healthy person. Any changes in thought process and mental health are some of the age related process and changes across the world. Depression and anxiety are mental health disorder associated with unhealthy mind. As the age increases, the consequences and vulnerability associated with depression and anxiety also increases [3]. The development and advances in big data Analytics and Technology results in more attention in prediction of disease. Various studies and researches on large number of dataset has been conducted automatically to improve the accuracy of risk classification instead of selected characteristics previously [1]. Patients having bipolar disorder significantly experiences day-today and week-to-week swings in mood. This instability in mood increases the relapse of disease and reoccurrence of risk with time, this indicates that the disease is still active. The purpose of monitoring and symptom prediction of disorder is to investigate and correlate the symptoms of disorder [8].
Machine Learning Techniques are increasingly present in almost all systems that Process and gather bulk amounts of data. The field of medicine is largely benefited by Machine Learning. Machine Learning algorithms design the regression and classification models that help in different disease diagnosis, drug recommendation, drug administration and so on. ML is the process of creating certain models and algorithms to predict values based on different features [13]. This review paper analysis ML technique for bipolar disorder and clinical procedures. The Data of healthy and unhealthy person are reviewed from a certain survey to apply prediction algorithm.
Categories of Bipolar Disorder
Bipolar disorder is a lifelong mental illness that occurs due to episodes of Mania and depression. Even after getting treatment for the illness people continues to have symptoms for it. Each type of disorder is identified and treated differently depending upon the type of disorder as shown in
1). Planning and Background Analysis:
In this phase all the information regarding the individual and his illness is gathered which includes various qualitative factors like personal information, demographic information, sociodemographic characteristics, their Symptoms, and disease. Age, gender, their past history, chronic medical conditions, family status and environment, marital status, job security etc. are analyzed for detecting mental condition like depression and anxiety in older people [3]. These attributes are used as predictors in automated system for disease prediction [3].
(3.3). Data Preprocessing-Feature Selection:
The raw data collected in data collection phase is preprocessed in an understandable format by two methods, namely data cleaning and data transformation. This is used to differentiate the various behaviors of the patients and to select the feature on the basis of which medical assistance is given. For describing and demonstrating depressive and non-depressive comments and posts, different features were extracted from user's post having psycholinguistic features.
.1) Random Forest
This classifier is a supervised classification method also known as ensemble classification method. The result of individual prediction is averaged out using predictor attributes. They are trained by bagging method that consists of random sample subsets. By sampling the training data, the method fits a decision for each tree and aggregate the result. Many base classifiers are provided by Random Forest. Values are inputted randomly in each tree and their distribution is equal in each tree. [20].
(3.4.2) Support Vector Machines (SVM)
This classifier is a linear and non-probabilistic binary classifier used to classify data for anomaly detection. Among other ML Techniques this modal is very popular as well as costly to compute [21]. SVM is used for reducing noisy data for good results and is able to make decisions [22]. This classifier is a Statistical model used for regression challenges and classification. SVM is a supervised Machine Learning Algorithm in which for n number of parameters, each n parameters are plotted in n-dimensional space and assigned a specific coordinate. The classifier built a hyper plane into high dimensional feature space to isolates data in two classes. This finds a hyper plane for the purpose to separate close training datasets. SVM creates hyper plane by calculating the best possible margin (which is the difference between hyper plane and support vectors). Support vector divides dataset in higher n dimensional space. approach. This makes the hierarchical tree from the training dataset. The states of decision tree are to divide the hierarchy of data having different characteristics. For example, in text documents classification, roots are mainly identified in terms and internal individual nodes are subdivided into its children in view of the yes or no of a term in Ensemble. Ensemble methods use multiple learning algorithms of decision tree for better predictive performance.
Investigating ML techniques for the prediction of Bipolar disorder
The review aims to study and give a clear and concise literature investigating Machine Learning ML techniques for the prediction of Bipolar disorder. The literature review aims to reduce the occurrence and prevalence of the anxiety disorders by effective early prediction. This results in significant minimization of hospitalization, improving their quality of life and reduces their health care bills to a large extent. The Literature review has three stages as shown in [10] in this a AD novel prediction model is proposed for prediction of anxious depression in real time tweets. This studies the mixed disorder od anxiety and depression associated with thought process, lack of sleep and restlessness. Ezekiel Victor et al. [11] suggested a methodology which evaluates ML techniques for detecting depression that require minimal human intervention.in terms of data collection and data labeling. Emmanuel G et al. [12], Conducted a comparative literature search using ML Techniques to predict specific types of stress and anxiety disorder and to develop certain tools that assists doctors in prediction of mental health and support in caring patients. Md. Rafiqul Islam et al [13] implements ML techniques to identify and implement a quality solution for mental disorder problem by studying Social media users comments and posts specially Facebook users. For this they monitor their attributes, feeling and behavior. Their mood swing patterns while they communicate with other user in online communications. Adrian B. R. Shatte et al. [14] according to this paper there is scope of Machine Learning in the area of Psychology and Mental Health, and evidently focus on prediction and diagnosis of mental health condition. Liana C.L.et al. [15] Their study and findings gives an early neuroimaging techniques for clinical assessment in young adults irrespective of objective and qualitative estimation of Psychopathology. Alicia Martinez et al. [16] Abd Rehman et al. [22] This paper aims at providing a guideline for further research in the direction of health care prediction system using Machine Learning Techniques. The electronic dataset on health records Provides a valuable information about the health risk and its predictions. The ML applications and methods has provided benefits in treatment, support and diagnosis of research and clinical administration. A. Khatter et al. [23], how to deal with pandemic situations and save lives in Lockdown. Students adjusted them well at their homes with restrictions and with a that one day their life will again be normal as before. They cope up with online teaching Learning method, online exam patterns, admissions to higher studies and summer internships etc. MS. Purude Vaishali Narayanrao et al. [25], include different approaches to predict heart attack, peer pressure and depression. The data collection mechanism includes questionnaires and surveys with different people, their social media posts, text messages and verbal communication and facial expressions. Ela Gore et al. [26] aim of the paper is to present commonly used algorithms and methods to describe the performance that act as a guide in selection of appropriate model. The alternates and possibilities of ML helps to bridge the gap between psychiatrist and patients to reveal the embarrassment of patient in critical shortfalls. Norah Saleh Alghamdi [28] study the use and benefits of Artificial intelligent application which uses text analytical tool for mental health support. This app uses different technologies and innovative sensors built in smart devices. By using camera sensors and performing self testing scales it detects anxiety and depression. U Srinivas ulu Reddy et al. [29] by applying ML Techniques, a stress analysis pattern is studied in working employees and to narrow down the stress levels. For the study they used 2017 mental health survey that includes responses of technologies working employees. Vidhi Mody Pruthi Mody [30] facilitates a specialized care and emotional support for mental health people with the help of Machine Learning algorithms and Advanced Artificial Intelligence Techniques. Shahidul Islam Khan et al. [31] The study examines a classification algorithm that is used to predict mental health disorder.
Future Challenges in Mental Health Detection
The mental health disorder is difficult to categories because various feature selection processes are implemented by Researchers which is a major challenge in this study. The quality of dataset and its interpretation is a challenge. The data collected from the various devices should be very accurate and precise. Imprecise data from devices will lead to failure of the proposed system. The security, Privacy and ethical issues are important challenges in this field. To avoid issues related to privacy, safety precautions need to be taken, such as user authentication mechanisms and encryption of data. The information available in Online Social Networks [22] provides a huge or bulk of data having immense potential that is to be explored in modern research. For this we extract millions of data to understand the phenomenon selected for a study. The researchers focused on detection of mental health problem through several findings to be referred by researchers for future studies.
i) Initially few studies on mental health are found very informative like people with this disorder isolates themselves do not communicates with other peoples. They are quite simple and do not interact easily. Their social life is not normal as compared with non-stressed people. [28]. It's a challenges to make them feel that they are also normal humans.
ii) People with depression are involved in negative emotions and religious judgements [14]. They are involved in their own self. iii) The major challenge is of language barriers as they use different languages in mental health problem detection. During data analytics, in Online Social Networks it was diagnosed that people with depression behaves differently on various situations [22]. iv) Detection of Mental health involves several challenges in non-face-to-face communication and human computer interaction [22]. v) The use of machine learning(ML) can help them to understand and determine the possibility of an existing mental health state behind the words and languages in [14]. vi) The Privacy and security policies are the challenges faced by many researchers during their data preparation as due to the collection of public user data, such as those collecting from Twitter. Some of these Challenges are summarized as follows: - The Quality and interpretation of data set and modal. Detection of mental health with time. Multiple categories of mental health problems. Preprocessing of data. Data quantity and generalizability. Data sparsity and ethical code.
Conclusion:
This review paper concludes that if the clinical heterogeneity of the samples of patient's data having bipolar disorder is given then by using machine learning techniques will provide researchers and clinicians with great insights in the fields such as diagnosis of diseases, their personalized treatment and prognosis orientation. Machine learning techniques for the prediction of stress and mental health condition will gives significant response and this can be studied and explored for further research objectives. Over the time, if we do not control the emotional conditions, anxiety will become worse day by day and turns into a pathological situations and that is quite challenging to treat. These mental health disorders result in harm to the human body as it leads to suppression of the immune system, which then increases the chance of susceptibility to various infectious diseases, increase in blood pressure, and diabetes. | 3,099.6 | 2021-04-05T00:00:00.000 | [
"Computer Science",
"Psychology"
] |
Morpho-molecular diversity of Linocarpaceae (Chaetosphaeriales): Claviformispora gen. nov. from decaying branches of Phyllostachys heteroclada
Abstract In this paper, Claviformisporagen. nov. in Linocarpaceae is introduced from Phyllostachys heteroclada in Sichuan Province, China. The new genus is characterised by its distinct morphological characters, such as ostiole with periphyses, asci with a thick doughnut-shaped, J- apical ring and clavate ascospore without septum-like band and appendage. Maximum Likelihood and Bayesian Inference phylogenetic analyses, based on DNA sequence data from ITS, LSU, SSU and TEF-1α regions, provide further evidence that the fungus is a distinct genus within this family. The new genus is compared with similar genera, such as Linocarpon and Neolinocarpon. Descriptions, illustrations and notes are provided for the new taxon.
The present research is a part of our investigations on the taxonomic and phylogenetic circumscriptions of pathogenic and saprobic micro-fungi associated with bamboo in Sichuan Province, China. In this paper, we introduce a new genus Claviformispora in Linocarpaceae, typified by C. phyllostachydis from Phyllostachys heteroclada Oliv., 1894 (Poaceae). The morphological differences and analyses of a combined ITS, LSU, SSU and TEF-1α sequence dataset support the validity of the new genus and its placement in Linocarpaceae. The new genus is compared with other genera in the family. The comprehensive descriptions and micrographs of new taxa are provided.
Specimen collection and morphological study
Bamboo materials were collected from Ya'an City, China. Single ascospore isolations were carried out following the method described by Chomnunti et al. (2014) and the germinating spores were transferred to PDA, incubated at 25 °C in the dark and cultural characteristics were determined. Ascomata were observed and photographed using a dissecting microscope NVT-GG (Shanghai Advanced Photoelectric Technology Co. Ltd, China) matched to a VS-800C micro-digital camera (Shenzhen Weishen Times Technology Co. Ltd., China). The anatomical details were visualised using Nikon ECLIPSE Ni compound microscope fitted to a Canon 600D digital camera and an OPTEC BK-DM320 microscope matched to a VS-800C micro-digital camera (Shenzhen Weishen Times Technology Co. Ltd., China). Iodine reaction of the ascus wall was tested in Melzer's reagent (MLZ). Lactate cotton blue reagent was used to observe the number of septa. The gelatinous appendage was observed in Black Indian ink. Type specimens were deposited at the Herbarium of Sichuan Agricultural University, Chengdu, China (SICAU) and Mae Fah Luang University Herbarium (MFLU). The ex-type living cultures are deposited at the Culture Collection in Sichuan Agricultural University (SICAUCC) and the Culture Collection at Mae Fah Luang University (MFLUCC). Index Fungorum numbers (http://www.indexfungorum.org/Names/ Names.asp) are registered and provided.
DNA extraction, PCR amplification and DNA sequencing
Total genomic DNA was extracted from mycelium that were grown on PDA at 25 °C for two weeks using a Plant Genomic DNA extraction kit (Tiangen, China) following the manufacturer's instructions. The primers pairs LR0R and LR5 (Vilgalys and Hester 1990), NS1 and NS4, ITS5 and ITS4 (White et al. 1990), EF1-983F and EF1-2218R (Rehner 2001) were used for the amplification of the partial large subunit nuclear rDNA (LSU), the partial small subunit nuclear rDNA (SSU), internal transcribed spacers (ITS) and translation elongation factor 1-alpha (TEF-1α), respectively.
Polymerase chain reaction (PCR) was performed in 25 μl final volumes containing 22 μl of Master Mix (Beijing TsingKe Biotech Co. Ltd.), 1 μl of DNA template, 1 μl of each forward and reverse primers (10 μM). The PCR thermal cycle programmes for LSU, SSU, ITS and TEF1-α gene were amplified as: initial denaturation 94 °C for 3 minutes, followed by 35 cycles of denaturation at 94 °C for 30 seconds, annealing at 55 °C for 50 seconds, elongation at 72 °C for 1 minute and final extension at 72 °C for 10 minutes. PCR products were sequenced with the above-mentioned primers at TsingKe Biological Technology Co. Ltd, Chengdu, China. The newly-generated sequences from the LSU, SSU, TEF-1α and ITS regions were deposited in GenBank (Table 1).
Notes. New species in this study is in bold. "-" means that the sequence is missing or unavailable.
Phylogenetic analyses
Taxa to be used for phylogenetic analyses were selected, based on results generated from nucleotide BLAST searches online in GenBank and recent publications (Lu et al. 2016;Konta et al. 2017;Senwanna et al. 2018;Wei et al. 2018;Lin et al. 2019). Gelasinospora tetrasperma (CBS 178.33) and Sordaria fimicola (CBS 508.50) were selected as the outgroup taxa. The sequences were downloaded from GenBank (http:// www.ncbi.nlm.nih.gov/) and the accession numbers are listed in Table 1. A combined ITS, LSU, SSU and TEF-1α sequence dataset was used to construct the phylogenetic tree. DNA alignments were performed by using MAFFT v.7.429 online service (Katoh et al. 2019) and ambiguous regions were excluded with BioEdit version 7.0.5.3 (Hall 1999). Multigene sequences were concatenated by using Mesquite software (Maddison and Maddison 2019). Maximum Likelihood (ML) and Bayesian Inference (BI) analyses were performed. The best nucleotide substitution model was determined by MrModeltest v. 2.2 (Nylander 2004). Maximum Likelihood analysis and Bayesian Inference analysis were generated by using the CIPRES Science Gateway web server (Miller 2010). RAxML-HPC2 on XSEDE (8.2.10) (Stamatakis 2014) with GTR+GAMMA substitution model with 1000 bootstrap iterations was chosen for Maximum Likelihood analysis. For BI analyses, the best-fit model GTR+I+G for ITS, LSU and SSU was selected in MrModeltest 2.2 and GTR+G for TEF. The analyses were computed with six simultaneous Markov Chain Monte Carlo (MCMC) Chains with 8,000,000 generations and a sampling frequency of 100 generations. The burn-in fraction was set to 0.25 and the run automatically ended when the average standard deviation of split frequencies reached below 0.01.
Phylogenetic trees were visualised with FigTree v.1.4.3 (Rambaut and Drummond 2016) and edited using Adobe Illustrator CS6 (Adobe Systems Inc., United States). Maximum Likelihood bootstrap values (MLBP) equal to or greater than 70% and Bayesian Posterior Probabilities (BYPP) equal to or greater than 0.95 were accepted. The finalised alignment and tree were deposited in TreeBASE (http://www.treebase. org), submission ID: 25996. The new taxa introduced follow the recommendations of Jeewon and Hyde (2016).
Phylogenetic analyses
Phylogenetic analyses of a combined dataset (ITS, LSU, SSU, TEF-1α) comprises 51 taxa within the order Chaetosphaeriales (Table 1), including 24 taxa in family Chaetosphaeriaceae, nine taxa in Helminthosphaeriaceae, ten taxa in Linocarpaceae, six taxa in Leptosporellaceae and two outgroup taxa in Sordariales. The dataset consisted of 5,849 characters including gaps (LSU = 1,571, ITS = 736, SSU = 2,522, TEF = 1,020). The best scoring tree of RAxML analysis is shown in Fig. 1, with the support values of ML and BI analyses. greater than 70% and Bayesian posterior probabilities (PP, right) equal to or greater than 0.95 are indicated at the nodes. The tree is rooted to Gelasinospora tetrasperma (CBS 178.33) and Sordaria fimicola (CBS 508.50). All sequences from ex-type strains are in bold. The newly-generated sequence is in red.
The best scoring RAxML tree with the final optimisation had a likelihood value of -26,415.700648. The matrix had 1,751 distinct alignment patterns and 64.64% in this alignment is the gaps and completely undetermined characters. Estimated base frequencies were as follows: A = 0.236065, C = 0.261532, G = 0.295313, T = 0.207091, with substitution rates AC = 1.062535, AG = 1.855434, AT = 0.940219, CG = 1.052604, CT = 4.590285, GT = 1.000000. The gamma distribution shape parameter α = 0.311923 and the Tree-Length = 2.281738. The Bayesian analysis resulted in 20,502 trees after 8,000,000 generations. The first 25% of trees (1,624 trees), which represent the burnin phase of the analyses were discarded, while the remaining 4,878 trees were used for calculating posterior probabilities. Bayesian posterior probabilities were evaluated by MCMC with a final average standard deviation of split frequencies = 0.009877.
Phylogenetic trees generated from Maximum Likelihood (ML) and Bayesian Inference analyses were similar in overall topologies. Phylogeny from the combined sequence data analysis indicates that all families were monophyletic with strong bootstrap support values (Fig. 1). Phylogenetic results show that our novel species Claviformispora phyllostachydis (SICAUCC 16-0004) belongs to family Linocarpaceae with 91% ML and 1.00 BYPP support and close to genera Neolinocarpon and Linocarpon (Fig. 1). The new genus Claviformispora constituted a distinct lineage in the family Linocarpaceae (Fig. 1). Description. Saprobic and endophytic fungi on monocotyledons and rarely dicotyledons. Sexual morph: Ascomata solitary or aggregated, superficial or immersed comprising black, dome-shaped or subglobose, slightly raised blistering areas with a central ostiole or immersed with a black shiny papilla. Peridium composed of dark brown to black cells of textura angularis. Hamathecium comprising septate paraphyses that are longer than asci, wider at the base, tapering towards the apex. Asci 8-spored, unitunicate, cylindrical, with a J-, apical ring, developing from the base and periphery of the ascomata. Ascospores parallel or spiral in asci, hyaline or pale yellowish in mass, filiform or claviform, straight or curved, unicellular with or without refringent bands, with or without polar appendages. Asexual morph: Phialophora-like spp. were found in Linocarpon appendiculatum and L. elaeidis cultures (Hyde 1992b), but no records are available for other species.
Taxonomy
Notes. Linocarpaceae was introduced as a new family to accommodate Linocarpon and Neolinocarpon species, based on morphology and phylogeny (Konta et al. 2017). Appressoria were first recorded from Neolinocarpon rachidis . The new genus Claviformispora, which is well-supported within Linocarpaceae suggests that there is a need to amend the morphological circumscriptions of the family given that the ascomata (subglobose) and ascospore (claviform) characters are so different from the other two genera. Etymology. Name reflects the claviform ascospores. Description. Saprobic on dead branches. Sexual morph: Stromata solitary or gregarious, black, erumpent. Ascomata solitary or aggregated, immersed, subglobose, slightly raised blistering areas with a central ostiole with periphyses. Peridium outer cells merging with the host tissues, composed of pale to dark brown cells of textura angularis. Hamathecium comprising hyaline, septate paraphyses, longer than asci, wider at the base, tapering towards the apex. Asci 8-spored, cylindrical to cylindric-clavate, unitunicate, short pedicellate, apically rounded, with a doughnut-shaped, refractive, J-apical ring. Ascospores overlapping uniseriate or 2-seriate, clavated with a thin pedicellate, 1-celled, hyaline, without appendage and refringent bands, smooth-walled. Asexual morph: Undetermined.
Culture characters. Ascospores germinated on PDA within 12 hours at both ends. Colonies on PDA reaching 5 cm diameter after 7 days at 25 °C, white to grey with strong radiations outwards on forward side. Colonies became dark brown and black on the reverse after a long time of cultivation. The hyphae are septate, branched, smooth.
Discussion
This study establishes a new genus and also provides further insights into the phylogeny of members associated with Linocarpaceae. Morphologically-based examinations of Claviformispora (as discussed above) clearly show that the morphological circumscriptions (familial concept) of species should be broadened and possibly indicate that this family is much more diverse than expected. Our collection can be clearly distinguished from other groups of similar fungi in Linocarpaceae with its interesting ascospore morphology. In addition, we also noted some peculiarities in the DNA sequences we analysed. A comparison of ITS sequences, based on BLAST reveals 34%, 26% and 30% base pair differences with L. cocois , N. arengae and N. rachidis (MFLUCC 15-0814a), respectively. There are more than 9% and 5% sequence differences with the three taxa when the LSU and SSU rDNA sequences were compared respectively. Following the guidelines recommended by Jeewon and Hyde (2016), there are therefore sufficient grounds to establish a new species at the genus rank. Species of Linocarpaceae have been found on Arecaceae, Poaceae, Euphorbiaceae, Zingiberaceae, Pandanaceae, Fagaceae, Fabaceae and Smilacaceae, including Arenga, Attalea, Calamus, Trachycarpus, Acrocomia, Archontophoenix, Cocos, Daemonorops, Licuala, Livistona, Plectocomia, Phoenix, Raphia, Sabal, Mauritia, Nypa, Elaeis, Pinanga, Eugeissona, Pennisetum, Gramineae, Stipa, unidentified bamboo, Hevea, Manihot, Alpinia, Pandanus, Quenrcus, Cajanus andSmilax (Hyde 1988, 1992a, b;Dulymamode et al. 1998;Hyde et al. 1998;Hyde and Alias 1999;Thongkantha et al. 2003;Cai et al. 2004;Bhilabutra et al. 2006;Vitoria et al. 2013;Konta et al. 2017;Senwanna et al. 2018). More than 50% of the species were recorded from hosts of the Arecaceae. Species in Linocarpaceae are mostly saprobic, except Linocarpon palmetto which was discovered as a pathogen of Sabal palmetto in Florida (Barr 1978). Four species in Linocarpaceae from Poaceae have been reported so far, including Neolinocarpon penniseti on Pennisetum purpureum (Bhilabutra et al. 2006), Linocarpon williamsii on Gramineae sp. (Hansford 1954), L. stipae on Stipa sp. (Hansford 1954) and L. bambusicola on unidentified bamboo submerged in a river (Cai et al. 2004).
Phyllostachys heteroclada, mainly a food source and use as a material in the weaving industry, is distributed along the Yellow River Valley and the southern Provinces in China. It is common in the mountainous areas of Sichuan Province with distribution up to 1,500 m above sea level (Yi 1997;Yi et al. 2008). There is a large area of pure forest in Yibin, Leshan and Ya'an Cities and sporadic distribution in other areas. According to preliminary statistics, bambusicolous fungi from seven orders (excluding fungi referred to as Sordariomycetes incertae sedis) have been recorded on P. heteroclada, including Hypocreales, Ostropales, Pleosporales, Phyllachorales, Pucciniales, Ustilaginales and Xylariales, of which Pleosporales is the largest one. Most bambusicolous fungi in China were recorded with inadequate morphological descriptions or molecular data. The early known fungi on P. heteroclada are documented as Aciculosporium take, Ellisembia pseudoseptata, Fusarium oxysporum, F. semitectum, Phyllachora gracilis, Ph. orbicular, Shiraia bambusicola, Stereostratum corticioides and Ustilago shiraiana (Zhou et al. 2001;Xu et al. 2006). In recent years, some new records and taxa, viz. Bambusicola subthailandica, B. sichuanensis, Neostagonosporella sichuanensis, Parakarstenia phyllostachydis, Phyllachora heterocladae, Podonectria sichuanensis, Arthrinium yunnanum and A. phyllostachium have been reported (Yang et al. 2019a, b, c, d, e, f ). Here, we introduce a new genus in order Chaetosphaeriales, which is a contribution to fungal diversity on P. heteroclada. | 3,148.6 | 2020-07-15T00:00:00.000 | [
"Biology",
"Environmental Science"
] |
Low-Cost and Energy-Efficient Alternatives for Home Automation using IoT
— It is known that a large sector of the population keeps their electronic equipment connected to the power supply for prolonged periods of time. Even in many households, devices such as wireless routers and/or voice assistants are kept switched on every day at all hours even though they are not in use or at nighttime. As a solution to this problem, this paper introduces automation alternatives based on IoT (Internet of Things) making use of the NodeMCU board, a relay module, the Sinric Pro application, Google Home, or Amazon Alexa mobile applications, and smart speakers from Google or Amazon companies. As a result, it was obtained that the 4 proposals are efficient in terms of easy implementation and reduction of electricity consumption around 30% annually. This research helps families to improve their energy efficiency and daily produc - tivity through IoT.
Introduction
One of the concepts that have become increasingly popular in recent years is the Internet of Things (IoT) [1]. This concept refers to the digital interconnection of objects through the Internet making them "smart", which gives us the opportunity to offer solutions that positively impact the quality of life of people achieving efficiency [2]. Some of the components that are part of IoT are low-power and low-cost boards such as the Arduino [3], NodeMCU [4], and/or Raspberry Pi [5].
There are several IoT applications, such as: knowing the status and evolution of a patient remotely [6], applications to monitor environmental pollution [7], monitoring of domestic electrical energy [8], [9], the implementation of smart homes [10], [11] or in smart buildings [12]. All these projects help people by generating a positive impact on their quality of life. Also, there is the possibility of integrating intelligent voice assistants such as Amazon Alexa and Google Home [13], in order to generate automation solutions in the home or office, and optimally improve daily personal activities [14].
The present work aims to analyze and offer low energy consumption and cost automation alternatives using IoT using NodeMCU and Sinric Pro. Also, the proposal includes the integration of mobile applications of Google Home and Amazon Alexa virtual assistants, as well as smart speakers. All this looking for a positive impact on improving productivity and energy saving.
Literature review
Despite the increase in popularity the IoT has had in recent years, especially in home automation [11], the opportunity to implement this type of technology has not been taken advantage of [12]. We know that to maintain a garden it is essential to water every so often, thus the different plants that are in a garden continue to survive. For this reason, at this time it would be helpful for families to have their gardens automated to work when necessary. There are several projects using boards such as Arduino [13], Raspberry Pi [14], and NodeMCU [15], as well as different IoT platforms such as Adafruit IO [16], [17], Bylnk [18], and Home Assistant [19].
It must be also pointed out that voice assistants have become popular nowadays through smartphones or through smart speakers. These devices help to generate lists, reminders, alarms, routines among other functions that offer according to the model of the speaker or application that is being used. In addition, with the integration of voice assistants, an infinity of features is added, such as the linking of intelligent equipment like plugs, lights, and other sorts of equipment that are used in a smart home. Likewise, these technologies can be used by any developer to create their applications to improve a specific area such as security [20] or to improve quality of life by avoiding a sedentary lifestyle [21] and generating energy efficiency [2], [22].
Home automation system
Home automation solutions can be found with various budgets as presented in [9] and [23]. However, this research proposes more efficient alternatives, easy to implement, with low electricity consumption and low cost. In addition, these alternatives improve people's productivity on a day-to-day basis, as mentioned in the literature review. Four alternatives of low electricity consumption and cost are proposed for home automation using the NodeMCU, Sinric Pro as well as two additional alternatives that use the most popular smart speakers, Google Home and Amazon Alexa. Any of the proposals aim at generating savings at home. Proposals will work with the following components: • NodeMCU, also called Lolin. It is a board based on the ESP8266 chip, it has Wi-Fi connectivity (802.11 b / g / n), an analog pin, and 17 digital pins (See Figure 1). • Sinric Pro is a home automation platform (See Figure 1) that allows controlling Raspberry Pi, ESP8266, ESP32, or Arduino boards. In this manner, they can be linked to Amazon Alexa or Google Home for free [24]. • Mini Google Home is a smart speaker that allows voice recognition, for integration with Google's personal assistant services. The device is integrated with a mobile App (Google Home) that allows configuring, managing, and controlling all Google Home devices, as well as various compatible products using voice (See Figure 2). • Echo Dot is a smart speaker that uses Amazon technology. It has its mobile application called Amazon Alexa that allows configuring, managing, and controlling devices of various products compatible with voice (See Figure 2).
Fig. 2. Mini Google Home and Amazon Echo dot
• A Relay Module is an electromechanical device used to switch circuits. It works as a switch and is controlled by an electromagnet. This device can be used to turn on or off any electronic equipment that is at home. • Electronic equipment. Devices with different voltage and amperage (from 5v to 220v) can be turned on or off using the Sinric Pro platform and through voice commands using Google Home or Amazon Alexa. Some examples of devices that can be controlled with the platform are light bulbs, switches, fans, or solenoid valves, to watering a garden for example [24].
The four alternatives presented in this article are for families that have a wireless router and/or voice assistants (Google Home or Alexa) turned on 24 hours a day. The proposals feature the use of the Sinric Pro platform, the NodeMCU board, relay modules, and the Google Home and Amazon Alexa mobile apps, which can be used with Google or Amazon smart speakers (See Figure 3).
Fig. 3. Home automation applications
The proposals presented have the following sequence of installation, configuration, and coding. There are slight differences among the alternatives. (See Figure 4).
Proposal 1: Sinric Pro, NodeMCU, and App Amazon Alexa
For the first alternative for home automation, it will be required a NodeMCU board, two Relays, the solenoid valve, the Sinric Pro platform, and the Alexa App.
First, an account must be configured in Sinric Pro [24], within the platform two Switch-type devices are added and then notifications are activated. Afterward, the Arduino IDE must include the Sinricpro libraries [25], with their dependencies on Arduino JSON [26] and Arduino Web sockets [27], finally the libraries to generate a time system with NTP Client [28] must be also included. Then the codes established by the Sinricpro of MultiSwitch_advance and the NTPClient of Advanced must be joined. Next, the WIFI SSID and password must be modified according to the home wireless network. Then the credentials of the APP_KEY, APP_SECRET, and the deviceID must be placed, these are located on the Sinric Pro platform.
Finally, the two relay modules and the electronic components must be connected to the NodeMCU to be controlled. In our example case the solenoid valve to water the garden (See Figure 5). Then to work with the Amazon Alexa application, the Sinric Pro Skill must be installed; in this way, it will recognize a new device. Then, within the Amazon Alexa App, routines should be generated. In our example case, watering the garden by turning on or off the solenoid valve can be set at an established hour or executed by voice command at any time. The same would happen with controlling the wireless router (See Figure 6).
Proposal 2: Sinric Pro, NodeMCU, and App Google Home
In this second option, the same process of creating an account in Sinric Pro and NodeMCU board is done. As well as the installation of the libraries and the configuration of the NodeMCU from the Arduino IDE. But, now instead of using the Amazon Alexa App, the Google Home App is used. After running the application, we should go to the home control option and search for Sinric Pro to link the account created on the platform. Finally, we have the option of creating a routine and setting a time as needed (See Figure 7).
Proposal 3: Sinric Pro, NodeMCU with Amazon Alexa App linked with an Amazon smart speaker
As a third option, the Amazon Alexa smart speaker will be used. Also, the same process of creating the Sinric Pro account and configuring the NodeMCU board is also performed. However, these speakers must be turned on all day causing high consumption of electricity. For this, it is proposed to provide a system to turn on or off the devices when all members of a family are awake or resting. The only thing that was added is a new relay that can control the on or off, to generate independence. It would be necessary to add and configure the new Switch in the Sinric Pro platform and add a new instruction within the code of the new digital pin to use. It is important to remember that the equipment shutdown routine should be as follows: first the electronic equipment, then the smart speaker, and finally the wireless router. In the case of turning on, the sequence is in reverse of the power-off sequence (See Figure 8).
Proposal 4: Sinric Pro, NodeMCU with Google Home App linked with a Google smart speaker
This option is very similar to proposal 3, but this time Google technology is used. It is necessary to install the Google Home application and make the connection as shown in Figure 8. All four proposals have the NTP Client implemented, which by adding a conditional at the end of the code will allow the wireless router to be turned off and on, depending on what time the family members go to bed and what time they get up. Figure 9 shows an example of pseudocode that at 23 hours and 30 minutes the wireless router is turned off and at 5 hours and 30 minutes the wireless device is turned on. In case a member of the family needs to work later, he or she will only have to wait one minute and press the RST button of the NodeMCU, which will activate everything off.
Result and discussion
The tests were carried out for 1 year and the results of monthly and annual electricity consumption were analyzed. A comparative table of the costs required for implementation was also generated. The results are described below:
Proposal 1: Sinric Pro, NodeMCU and App Amazon Alexa
By using the Sinric Pro platform and the NodeMCU board, it was possible to successfully link the Amazon Alexa mobile application, which made it possible to control the on and/or off electronic equipment from a robust application such as Amazon Alexa. Also, create routines that allow generating an on and off automatically. It should be mentioned that it did not require the purchase of an intelligent horn to carry out the automation, being one of the cheapest proposals (energy used and cost of equipment) to be implemented in families with low economic resources. One of the advantages of using the Amazon Alexa App is that it allows us to know how much the electrical energy expenditure of the equipment is turned on in a certain time.
Proposal 2: Sinric Pro, NodeMCU and App Google Home
The second proposal also using Sinric Pro and NodeMCU, was integrated with the Google Home mobile app. This allowed the use of Google technology to turn on and off through the voice assistant, just like option 1. This allows the generation of routines through the Google mobile application without the need to purchase a smart speaker. This Google Home mobile app does not have power consumption calculations, as with Proposal 1.
Proposal 3: Sinric Pro, NodeMCU with Amazon Alexa App linked with an Amazon smart speaker
Unlike the first two, in this proposal they considered Amazon's smart speakers (Echo dot), to turn on or turn off an electronic component, a new relay has been added, so that it also turns off or on the speaker, just like the router wireless.
Proposal 4: Sinric Pro, NodeMCU with Google Home App linked with a Google smart speaker
Finally, this last option is like the previous proposal, but uses Google's smart speaker (Mini Google Home), to turn on or off an electronic component.
When analyzing the last two proposals (option 3 and 4), they have a higher electricity consumption when working 24 hours. Google's technology is the one that consumes less electricity compared to Amazon (difference in consumption = 4320W/month). However, If the automation of the latest proposals is applied, electricity savings can be seen with Google (6408 W/month) and Amazon (7848W/month) (See Figure 10).
In a first impression, it turns out that the last two options are the least popular because it requires the purchase of smart speakers, but in reality more and more families choose to acquire them in recent years [29], due to the great help that voice assistants give in different areas such as in-vehicle assistance, home automation, among others [12]. Through this research, we can improve the energy efficiency of homes by automating the switching on and off home devices. By experimenting with the four options turned on 24 hours a day for one year, results were obtained to analyze the electricity consumption of each one (See Table 1): • Proposal 1: It reached a consumption of 171720W/annual, but by performing the automation, a consumption of 120744W/annual was achieved, resulting in a saving of 29.7%. • Proposal 2: Had the same monthly consumption and efficient consumption as option 1. • Proposal 3: The monthly consumption with Amazon technology was 301320 W/ annual and its efficient monthly consumption is 207144W/annual, representing a saving of 31.3%. • Proposal 4: The monthly consumption with Google technology was 249480 W/ annual and its efficient monthly consumption is 172584W/annual, generating a saving of 30.8%. Figure 11 shows the analysis of the four proposals regarding energy consumption. The first two alternatives are the cheapest. For families that have smart speakers, the cheapest is Google's (Mini Google Home).
NodeMCU hardware features
In the world of IoT, there are several devices that can be used to carry out the proposals that have been presented in this research. Nonetheless, there are few with little money investment. The automation of houses in a stable and reliable way, it is only necessary to have basic knowledge of programming and computer networks for them to be implemented. Some reasons why NodeMCU board is used in home automation are as follows: • It has more RAM (4MB) compared to an Arduino, it also has more flash memory, providing much more space for more complex applications (source code). • The price is much lower than an Arduino.
• Finally, the power consumption is almost like that of the Arduino, but as it natively has a wireless network, it has become the favorite in IoT solutions.
Cost comparison of the proposals
According to Table 2: • Option 1 and option 2: Both are proposed with the same cost, because it is being considered that families already have a Smartphone, they will only have to install the Google Home App or Amazon Alexa, which are free. • Option 3 and option 4: they are for families who have an Amazon or Google smart speakers such as Echo Dot 3 or the Mini Google Home.
All the mentioned proposals are scalable to automate more electronic equipment in a home, you just must remember that by adding more components (relay module) the electrical consumption and the cost of the solution increase. Compared with other automation projects, it has been seen that there is a great variety of proposals that use mobile applications developed by themselves using the Arduino board [30], the NodeMCU board [31], or the Raspberry Pi [32]. These proposals have some limitations because there is not a platform that allows integration with other brands. Instead, our proposals can be linked to Google or Amazon Alexa Apps where there will be no problems regarding compatibility. Likewise, other platforms have been found for home automation such as the Home Assistant [19], which would help to carry out automation with different commercial IoT devices, but it requires a monthly payment of 5 dollars if it is required to control from outside the house, or as it happens with the Adafruit IO [17]. This platform is a cloud service that works as an intermediary with the help of the IFTTT in all communications with Google or Alexa technology, but the free version is not 100% functional. On the other hand, our proposals are designed to generate an electrical saving of 30%, by turning off the equipment that is not used when the members of a family are resting, also if in a home it is required to automate more than 3 electronic equipment, a payment of 3 dollars should be done per year for each device to the Sinric Pro platform, being the cheapest proposal on the market to automate a home. In the world, the demand for smart speakers from Google and Amazon continues to grow, which are compatible with various IoT companies, which allow home automation at high prices in different parts of a home. However, the proposals presented in this article require a basic knowledge of programming and computer networks to personalize the needs of the home without spending a lot of money, as shown in Table 2.
Conclusion
The four proposals are low-electricity consumption and low cost in their implementation, being accessible to any family. In addition to having the possibility of generating routines with Google Home or Amazon Alexa technology for turning on and off household appliances, allowing to improve productivity.
It is very common that people who have a wireless connection at home, leave the Wi-Fi and their smart speakers 24 hours a day, this bad habit generates a waste of electricity in which using the IoT, could be corrected using automation to turn off the devices every day at least 6 to 8 hours a day, generating savings in monthly electricity consumption. Considering this scenario, the savings calculated in this research are around 30% depending on the alternative chosen.
The first two alternatives of this research are the most economical solutions because with a small budget you can automate a house. Just by having a smartphone, you can turn on or off any electronic equipment and/or create routines that allow you to activate or deactivate the electrical flow. The last two alternatives have a higher price compared to the previous ones, since they have a smart speaker, proposing can work with the use of the Smartphone or through the smart speaker. This will allow having an advantage since any member of the family can perform the same processes in contrast to the first two alternatives mentioned above.
Through the Sinric Pro platform, it is easy and fast to link it with the Amazon Alexa or Google Home mobile applications, which makes it possible to have a large number of low-cost components at home. The four proposed alternatives are highly scalable since there is the option of adding more relay modules. This takes into consideration a maximum of 8 modules, because if you want to add more, you must add more Noce-MCU boards. Also, consider that the free version of Sinric Pro only allows a maximum of 3 devices. If you want to include more a payment of $3 per year for each one is required. | 4,664.8 | 2022-03-08T00:00:00.000 | [
"Computer Science"
] |
Smart management of combined sewer overflows: From an ancient technology to artificial intelligence
Sewer systems are an essential part of sanitation infrastructure for protecting human and ecosystem health. Initially, they were used to solely convey stormwater, but over time municipal sewage was discharged to these conduits and transformed them into combined sewer systems (CSS). Due to climate change and rapid urbanization, these systems are no longer sufficient and overflow in wet weather conditions. Mechanistic and data‐driven models have been frequently used in research on combined sewer overflow (CSO) management integrating low‐impact development and gray‐green infrastructures. Recent advances in measurement, communication, and computation technologies have simplified data collection methods. As a result, technologies such as artificial intelligence (AI), geographic information system, and remote sensing can be integrated into CSO and stormwater management as a part of the smart city and digital twin concepts to build climate‐resilient infrastructures and services. Therefore, smart management of CSS is now both technically and economically feasible to tackle the challenges ahead. This review article explores CSO characteristics and associated impact on receiving waterbodies, evaluates suitable models for CSO management, and presents studies including above‐mentioned technologies in the context of smart CSO and stormwater management. Although integration of all these technologies has a big potential, further research is required to achieve AI‐controlled CSS for robust and agile CSO mitigation.
and 107,900 km of combined sewer. Thus, combined sewers account for 21% of the total system . In Ireland, the estimated length of the sewer system is 25,000 km, with the majority of combined pipes being built in metropolitan areas where stormwater comes into the sewers (IrishWater & Ervia, 2015). This can result in flooding incidents during heavy rains, overflows into rivers and streams, and an increased load and capacity conveyed to wastewater treatment plants and infrastructure.
In Chicago, 100% of the 8000 km sewer network is reported to be combined sewer (Lossouarn et al., 2016). The CSS ratio in New York is reported to be 60% (in a city with 12,070 km of sewerage infrastructure) and in Paris, this ratio is 66% (of 3200 km of sewerage infrastructure; Lossouarn et al., 2016). The length of the sewer network in Beijing, Buenos Aires, London,Los Angeles,and Tokyo are 14,290,11,000,21,720,10,780,and 16,000 km, respectively (Lossouarn et al., 2016). In Istanbul, both combined and separate systems are being used (Samsunlu, 2020). Currently, a total of 30,609 km of sewer network and 4621 km of stormwater system are being used in Istanbul for urban water management and an additional 4649 km of stormwater pipes is required. The data available in the literature of some countries and megacities supports the fact that combined sewers represent a significant proportion of the network in developed countries.
CSO increases the concentration of pathogens, toxic substances, bacteria, solids, and debris in the receiving water bodies (McGinnis et al., 2022;Miller Alyssa et al., 2022;Whelan et al., 2022). Moreover, as a result of the decrease in the oxygen level due to the degradable organic matter, it creates important public health, aquatic organism stress, and water quality concerns (Bohannon & Lin, 2005;Field, 1985;House et al., 1993;Mailhot et al., 2015). The water quality concern brought the requirement of legislations such as the urban wastewater treatment directive (UWWTD; EC, 1991) and WFD (EC, 2000) to control CSO pollution in Europe. CSO characteristics were discussed in several studies (Botturi et al., 2020;García et al., 2017;Li et al., 2010). CSO characteristics are primarily reported as total suspended solid, organic matter, total nitrogen and phosphorous, heavy metals, and microbial pollution in literature (Table 1). Sandoval et al. (2013) indicated that CSO quantity is mostly determined by the maximum rainfall intensity, whereas CSO pollutant concentrations are primarily influenced by the rainfall duration. Additionally, CSO pollutant loads are mainly affected by the dry weather duration preceding the rainfall (Sandoval et al., 2013).
In terms of impact on the water quality of a receiving stream, untreated overflows from combined sewers were proven to be a significant pollution source, particularly during wet weather as runoff collects several pollutants F I G U R E 1 Combined and separate sewer system. 1-Wastewater; 2-stormwater; 3-CSS; 4-WWTP; 5-WWTP discharge; 6-WW bypass; 7-manhole; 8-polluted receiving waterbody; 9-urban runoff; 10-receiving waterbody T A B L E 1 Combined sewer system overflow characteristics Abbreviations: BOD 5 , 5-day biochemical oxygen demand; Cd, cadmium; COD, chemical oxygen demand; Cu, copper; E. coli, Escherichia coli; Pb, lead; TN, total nitrogen; TP, total phosphorous; TSS, total suspended solid; Zn, zinc. generated via different urban pollution sources (Bohannon & Lin, 2005;Burm et al., 1968;Field & Struzeski, 1972;Li et al., 2010). Mean concentrations of micropollutants such as heavy metals, polycyclic aromatic hydrocarbons (PAH), pesticides, pharmaceuticals, benzotriazoles, sweeteners, and phthalates were measured for more than 110 overflow events at 10 CSO facilities across Bavaria, Germany (Nickel & Fuchs, 2019). The results indicated that CSOs must be incorporated in discussions on micropollutant emissions, and that knowledge of their concentrations at a regional level must be strengthened. CSOs, in comparison to wastewater treatment plants, are a significant source of pollution and cause failure to achieve good chemical status of surface waters. CSO management has become a challenge. Many researchers have sought to address this through proactive actions comprising problem prevention at its source, problem quantification and risk analysis, and reactive actions including solving the problem prior to further deterioration. Much of the literature on CSO and stormwater (Ahammed, 2017;Botturi et al., 2020;Imran et al., 2013;Shishegar et al., 2018) concentrates on a single point of view, for example, stormwater models or real-time control (RTC) applications in sewer system management (Creaco et al., 2019;Lund et al., 2018;van Daal et al., 2017;van der Werf et al., 2022;Wang & Xie, 2018). Amidst such studies, Garz on et al. (2022) reviewed recent articles on machine learning-based surrogate modeling for urban water networks (Garz on et al., 2022). Rizzo et al. (2020) surveyed constructed wetlands (CWs) for CSO treatment.
Their survey described the current treatment schemes through a literature analysis, discussed the treatment performance of standard pollutants, micropollutants, and microbial contamination, presented a summary of modeling studies, and emphasized additional ecosystem services that can be ensured by CSO-CWs (Rizzo et al., 2020). A comprehensive systematic review on impact of sewer overflow on public health by Sojobi and Zayed (2022) aimed to identify the most significant studies and researchers involved in the study of sewer overflows and public health, and mark significant and emerging research gaps (Sojobi & Zayed, 2022). However, environmental issues are often interrelated, so CSO management should be contemplated as a multi-perspective problem. Therefore, researchers from different disciplines need to work together to tackle the challenge with a holistic approach. Within the context of this approach, building collaboration between key disciplines to identify issues of concern and research gaps plays an important role. Problem characterization and proposing solutions require significant knowledge of management science and engineering which includes a multidisciplinary understanding of hydrology, hydraulics, geographical information systems, meteorology, computer science, and so forth. Therefore, the adopted holistic approach which contemplates the problem from different angles will address the existing gaps in the literature for more sufficient and effective management of CSO.
The primary objective of this study is to present a comprehensive review of the role of mechanistic and data-driven modeling in CSO management. We present a brief historical outlook of sewer systems evolving with the needs of mankind and technological advances starting with simple conduits and extending to artificial intelligence (AI), geographic information systems (GIS), and remote sensing (RS). This review: • acknowledges CSO characteristics and associated impact on receiving waterbodies; • evaluates suitable models for CSO management; • presents GIS and RS applications studies and identifies gaps that could be fulfilled through the use of these cuttingedge technologies; and • presents studies about CSO and stormwater smart management applications in the context of sustainable urban water management.
| MECHANISTIC MODELING
There are many models used for CSO and stormwater modeling including open-source and commercial packages. Commercial models are generally more user-friendly and easier to apply. The main disadvantages of these models are the cost and the lack of flexibility in accommodating users' specific needs. Open-source models can be used in many studies at no cost and the users can access the functions programmatically (being more flexible) and can have a better insight into the processes happening in the background. The advancement in technology over time improved measurements and computational techniques. However, these models still only represent an approximation of physical situations due to the involved complex processes. Within this study, widely used and comprehensive open-source and commercial models are briefly presented. Table 2 provides a summary and comparison of research integrating modeling for CSO and stormwater management. • Developing and testing passive ultrahigh-frequency radiofrequency identification (UHF-RFID) based sensors for monitoring sewer blockages and illicit connections.
Not Mentioned --RTC Sensors (Tatiparthi et al., 2021) • Integrating gray wolf optimizer (GWO) and adaptive neuro-fuzzy inference system (ANFIS) in order to predict multi-ahead influent flow rate. It is an open-source model which is widely used in research related to combined and separate sewer system planning, analysis, and design applications for stormwater runoff and wastewater management (Niazi et al., 2017). SWMM was used in several studies for runoff and hydrological modeling (Knighton & Walter, 2016;Niemi et al., 2017;Ouyang et al., 2012;Ress et al., 2020;Samouei & Özger, 2020). Runoff simulation is important for CSO management (Field & Cibik, 1980), flood management (Ouyang et al., 2012;Rabori & Ghazavi, 2018), impact of climate change on urban drainage systems (Kovacs & Clement, 2009), sponge city (Jia et al., 2018) and sponge airport (J. applications, and low-impact development (LID) applications (Chui et al., 2016). SWMM is frequently applied in CSO modeling and control studies (Barone et al., 2019;Crocetti et al., 2021;García et al., 2017;Jean et al., 2018;Liao et al., 2015). In Liao et al. (2015), CSO control scenarios has been designed by using SWMM (Liao et al., 2015). SWMM has also been linked with nondominated sorting genetic algorithm II (NSGA-II) multi-objective optimization module for CSO management (Rathnayake, 2015). This study has evaluated the importance of nonstructural measures combined with structural measures to control CSO. Sun et al. (2014) employed SWMM to evaluate the effect of catchment discretization on model outputs and examined their response and parameter values to different model scales. This work showed the applicability of the parameters calibrated from finer delineation in one catchment to another for better model performance in CSO management. As computational load and simulation time are important factors in modeling studies, application of calibrated parameters of a small catchment with high resolution in a large and complex catchment will be economically reasonable and buys time. However, this methodology might only be efficient for catchments with similar characteristics (Sun et al., 2014). In another study, SWMM was used to investigate the suitability of the distributed storage option over the concentrated storage option for CSO volume control (Piro et al., 2010). The results confirmed that from a sustainable development perspective, a distributed system of storage tanks in series is the preferred method for mitigating the impacts of CSOs from the Liguori Channel Catchment on receiving waters, compared to traditional interventions employing large storage tanks. While applying this methodology, the impact of several factors including the economical factors and land use/land cover (LULC) restrictions should not be ignored. Hence, sometimes traditional interventions may be more feasible. Variation of meteorological and hydrological regimes due to climate change has become a major challenge for the operation of urban wastewater infrastructures which have been often designed and built decades ago. Extensive research indicates that stormwater runoff and flooding have increased due to the increase in rainfall magnitude, intensity, and frequency (Hamouz et al., 2020;Yazdanfar & Sharma, 2015;Zahmatkesh et al., 2015). Changes in climate and land cover significantly increased stormwater runoff in tropical and sub-tropical coastal-urban environments resulting in escalated flooding risks (Huq & Abdul-Aziz, 2021). The adverse effects of climate change on the hydrological cycle need to be accounted for the design and operation of urban drainage systems. In this context, models that incorporate the impact of climate change can play a key role in building resilient infrastructures. SWMM was used in studies on the impacts of climate change on urban drainage systems (Rosenberger et al., 2021;Yazdanfar & Sharma, 2015). Lu and Qin (2020) applied SWMM to evaluate the impact of different general circulation models, namely, MIROC5, EC-EARTH, HadGEM2-ES, GFDL-CM3, and MPI-ESM-MR, and climate model selection on future runoff simulation (Lu & Qin, 2020). Using a variety of statistical and modeling tools, Lu and Qin (2020) developed an integrated framework for assessing climate change impact on excessive rainfall and urban drainage systems. For rainfall disaggregation and design, the simple scaling method and the Huff rainfall design were used, starting with synthetic future climate data generated by the stochastic weather generator. The proposed framework was demonstrated through a case study in a tropical city (Hohhot, Inner Mongolia, China). The approach is applied to a relatively small catchment. However, the stochastic weather generator can be impacted by the size of the study area. Moreover, they did not include prospective LULC changes for hydrological simulations. Climate change also impacts CSO occurrence, duration, and frequency (Tavakol-Davani et al., 2016). SWMM results indicated that climate change will increase CSO occurrence, duration, and frequency by 12%-18% in the City of Toledo, Ohio in the future (2030-2034) under maximum impact scenario. The impact of climate change can vary depending on the subwatershed characteristics such as area, width, slope, and imperviousness (Zahmatkesh et al., 2015). A stormwater climate sensitivity factor (SCSF) was used to further analyze the sensitivity of runoff to climate change in relation to the subwatershed characteristics. SCSF is a positive dimensionless factor with larger values suggesting greater climate change sensitivity of storm water. The SCSF was created by combining subwatershed features by trial-and-error. This factor can be used to analyze sub-watersheds based on their features and anticipate their reaction to climate change in a simple and quick method. SWMM outcomes revealed that climate change resulted in a 40% increase in runoff volume in sub-watersheds with SCSF larger than 0.1. In these sub-watersheds, runoff is sensitive to the slope rather than other characteristics.
| Urban flooding
Hydrology of urban areas has been changing as a result of rapid urbanization in conjunction with climate change (Hamouz & Muthanna, 2019;Leopold, 1968). Consequently, infiltration has decreased due to the transformation of pervious surfaces to impervious surfaces resulting in an increase in runoff. Therefore, flooding may occur in some regions during extreme precipitation. SWMM is among the favored urban models in urban flood management studies (Rabori & Ghazavi, 2018;Sin et al., 2014). The model was integrated with different models and approaches to overcome flooding problems in urban areas. SWMM model outcomes were linked to a proportional integral derivative controller to develop an urban drainage model for flood control in Delhi, India . SWMM was linked with a 2D hydrodynamic model named LISFLOOD-FP for modeling urban flooding (Chen et al., 2018). Within this integration, SWMM was used to simulate dynamic flows in 1D sewers, while LISFLOOD-FP was used to simulate 1D river channel flows and 2D overland flow propagation. This model integration was employed to investigate the interaction between sewer flow and surcharge-induced flooding. It was tested against four major historical floods in the Shiqiao Creek District of Dongguan City, South China. The results showed that the integrated model was capable of forecasting urban flooding. The output of the integration is in raster format, an advantage for a GIS integration. However, the approach has not investigated uncertainties and therefore further verification is required prior to application. SWMM was also coupled with a recently built noninertia 2D model to simulate the dynamic and complex bidirectional interaction between sewer system and the urban floodplain (Seyoum et al., 2012). In this study, the water level variations between the sewer network and aboveground flows were used to measure the interacting discharges. The feasibility of SWMM as a river flood simulator was tested by developing a GIS-based SWMM model for Brahmani Delta, which is prone to large-scale flooding . The results showed that, in addition to its use in urban catchments, SWMM could also be used to simulate the response of catchments to flood events in natural systems like rivers. Therefore, rather than investing heavily in flood monitoring stations, SWMM could be used as a low-cost early warning flood prediction tool.
| Sponge city and sponge airport
Sponge city program (SCP) was first announced by the Chinese government in 2013 as a new urban drainage infrastructure building paradigm in order to endorse a sustainable urbanization strategy (Jia et al., 2018). SCP encourages the use of natural systems including soil and vegetation as a part of the urban runoff control strategy. Within the scope of SCP, challenging volume capture rate (VCRa) for annual rainfall was set based on the region. For example, VCRa target in the southernmost regions, where precipitation is high, is 60%-85%. However, this target is 80%-85% in the Beijing area where the region is relatively dry. Given this significant Chinese investment in Sponge Cities, it is important to conduct modeling-based studies to help with planning, while also improving LID representation in hydrologic models and collecting additional data from existing LID installations (Randall et al., 2019). SWMM has been applied in research to promote sound decision-making in the development of sponge cities in urbanized watersheds (Mei et al., 2018). SWMM outcomes have shown that a 75% annual total runoff can be captured using a scenario containing 34.5% of bio-retention facilities and 46% of sunken green spaces for a sports center project in Guangxi, China (Li et al., 2019). According to the continuous SWMM modeling results, the VCRa target of 80%-85% in Beijing can be met with a LID scenario comprising 35% paved areas transformed to the permeable pavement, 30% roofs transformed to green roofs, and 10% green areas transformed to rain gardens (Randall et al., 2019).
Studies have integrated SWMM with other models, applications, and optimization algorithms in sponge city applications (He et al., 2019;She et al., 2021;Zhang et al., 2021). For example, SWMM was coupled with an Isochrone model to create an analytic framework (Yang et al., 2021). This framework was used in simulation and comparison of rainfallrunoff process before and after sponge city construction. An integrated stormwater system called Uwater was designed based on SWMM integrated with computer-aided design (CAD) and GIS applications in the context of the SCP (He et al., 2019). In this study, GIS functions were used to build tools for visual design of LID facilities, visual evaluation of the stormwater pipe system drainage capability, and inundation limits for further optimization of the design plans. Big Data, Internet of Things, and Cloud Computing technologies were used to build the operation and maintenance monitoring information management service system. In various stages of SCP, this integrated framework could be used for simulation, analysis, and decision-making.
Airports are severely impacted by extreme weather conditions including flooding, due to extreme rainfalls which adversely impact operations (J. Peng, Zhong, et al., 2020). Therefore, airport stormwater management has evolved into a significant task requiring careful planning and the application of sophisticated modeling methods. Recently, sponge airport concept, which not only relieves airport flooding but also considers rainwater as a resource, has been considered as a solution through the use of LID facilities (Peng et al., 2021). SWMM has been widely used in simulating rainfallrunoff, flooding, and the effect of LID facilities in sponge airports (Peng et al., 2021;Peng, Ouyang, et al., 2020;J. Peng, Zhong, et al., 2020). SWMM can conceptualize the stormwater drainage performance of an airport and compare the impact of different LID control strategies. However, the lack of monitoring data in airports is adversely impacting model setup, calibration, and validation. Moreover, research into the sensitivity of the parameters is required for more accurate and reliable results (Peng et al., 2021;J. Peng, Zhong, et al., 2020).
| LID hydrologic effectiveness assessment
There is a wide application of SWMM in the assessment of the hydrologic effectiveness of LID (Chui et al., 2016;Joksimovic & Alam, 2014;Zanandrea & de Silveira, 2018;Z. Zhu, Chen, et al., 2019). Zanandrea and de Silveira (2018) studied the effects of LID application on hydrological processes for a consolidated case study in Brazil. In this study, consolidated catchments referred to regions which, despite being heavily occupied, did not have all the necessary urban infrastructure. Through the provision of basic sanitary facilities among other services, these areas began to be incorporated into city planning. Vegetative swales and permeable pavements were chosen as LIDs for the study. The results illustrated that the LID performance was satisfactory and the runoff volume generated by urban area was reduced by 10% (Zanandrea & de Silveira, 2018). SWMM was applied in a study investigating the combined effects of different LIDs including permeable paved surfaces on a parking lot, green roof, infiltration trench, and permeable soil layer in a shopping mall site within the Suewiecki Stream sub-catchment in Warsaw (Barszcz, 2015). A significant reduction in the surface runoff depth (28.0%-29.6%) and maximum flow rate ($20%) was achieved. The scenario analysis revealed that infiltration trenches and permeable soil layers only took in surface runoff from main highways and parking lots within the catchment. Moreover, infiltration trenches resulted in the greatest increase (23%) in the infiltration depth (Barszcz, 2015). Figure 2 presents a generic graphical illustration of an infiltration trench. The figure presents details about the materials used in the construction of the LID and the mechanism which LID uses to manage runoff and inflow. SWMM was also implemented in a study evaluating the behavior of rain gardens and rain barrels under steadystate and unsteady-state situations (Abi Aad et al., 2010). The rain garden had the best response in terms of peak flow and volume mitigation. The volume reduction was as high as 38% despite the fact its area was only 3.9% of the total rooftop area. According to Dussaillant Alejandro et al. (2004), the size of the rain garden must be between 10% and 20% of the impervious surface to achieve some level of groundwater recharge (Dussaillant Alejandro et al., 2004). Therefore, if the surface area of the rain garden was three times larger in the study by Abi Aad et al. (2010), it would not only remove the impact on sewer system but also some level of groundwater recharge could be anticipated.
Permeable pavements are resilient structures which can help reduce runoff and peak flows while also increasing landscape perviousness (Monrose & Tota-Maharaj, 2018). Permeable pavements have been successfully employed in many studies all over the world largely in the United States, the United Kingdom, China, Japan, and Australia (Ahiablame et al., 2013;Chen et al., 2021;Imran et al., 2013;Monrose & Tota-Maharaj, 2018;Takahashi, 2013;H. Zhu, Yu, et al., 2019). SWMM was implemented to model the effect of different pavement structures under varying rainfall conditions on reducing surface runoff and urban stormwater on a two-way, six-lane road in Nanjing (H. . The findings showed that the permeable road had a greater impact on lowering the runoff coefficient and peak flood flow. Moreover, the permeable pavement could reduce surface runoff by more than 50% and reduced flood peak and hysteresis of flood peak. SWMM was applied to evaluate the performance of porous pavements and bioretention cells in a high-density urban catchment in response to anticipated climatic changes for stormwater management (M. Wang, Zhang, et al., 2019). Porous pavements and bioretention cells were relatively successful at controlling runoff and peak flow volume. Moreover, the storms with a quick return period and shorter length had greater impacts than storms that were less frequent and lasted longer on both LIDs. Bioretention cells improved the hydrologic and water quality performance of urban impermeable areas by reducing runoff volumes, flow rates, and durations (Olszewski & Davis, 2013). In northeast Ohio, the hydrologic performance of three bioretention cells (UC, HA South, and HA North) built on low-conductivity soils was evaluated using SWMM (Winston et al., 2016). The results of the study demonstrated that the UC, HA South, and HA North cells had a 59%, 42%, and 36% reduction in runoff, respectively.
Green roofs allow storm runoff to be delayed and attenuated at the source, resulting in fewer CSO discharges and flooding problems in urban areas (Akther et al., 2018;Burszta-Adamiak & Mrowiec, 2013;Cipolla et al., 2016). However, because of the impact of the layer materials, vegetation, physical features of the substrate, design specification, and climate conditions, their level of performance is site specific. Three different types of green roofs were tested between June and November 2009 and 2010 in Poland employing SWMM. The outcomes confirmed that they had positive impact on volume reduction, peak intensity values, and the occurrence of runoff (Burszta-Adamiak & Mrowiec, 2013). Another study in Colle Ometti, in Genoa (Italy), created a methodological approach for estimating the actual evapotranspiration as climate input data in SWMM (Palla et al., 2018). The suggested methodology was calibrated on a single green roof installation based on one-minute continuous simulations over 26 years of climatic records. Next, a continuous simulation of a small urban catchment, retrofitted with green roofs, was performed. The average peak and volume reduction rate for 1433 rainfall events was 0.3 (with maximum values of 0.96 for peak and 0.86 for volume).
| Mike Urban
Mike Urban is a flexible system developed by the Danish Hydraulic Institute for independent design and modeling of water supply, wastewater, and stormwater. It is a commercial model which combines 1D sewer modeling with 2D overland-flow modeling and it is integrated with aeronautical reconnaissance coverage geographic information system (ArcGIS) using "geo-database" concept (Locatelli et al., 2015). Mike Urban was implemented in several studies relating to CSO and stormwater management ( It was used in assessing the effect of sustainable drainage systems (SUDS) scenarios on reducing CSOs volume and duration in a catchment in Norway (Hernes et al., 2020). This study assessed the hydrological performance of green roof and rain garden SUDS control modules of the model and the effect of SUDS scenarios on CSOs employing both F I G U R E 2 Top and side views of infiltration trench event-based and continuous simulations. The results of event-based analyses revealed the superior performance of rain garden in lowering CSOs for large precipitation events, while green roofs benefited smaller incidents. The software was used in a study that predicts infiltration impact on CSO in a 3 km 2 urban catchment in Copenhagen (Roldin et al., 2012). This was done in a three-step scheme including a baseline scenario, a potential infiltration scenario, and a realistic infiltration scenario. The potential infiltration scenario in which soakaways were connected to 65% of the total impervious area, resulted in a 68% reduction in annual CSO volume. The third step included groundwater restraints, resulting in a more realistic scenario in which only 8% of the impervious area was linked to soakaways and CSO volume was reduced by 24%. Locatelli et al. (2015) used Mike Urban to model a retention-detention system in a small catchment in Copenhagen. The retention-detention system prevented flooding for a 10-year rainfall event. For a 22-year period, annual stormwater runoff was reduced by 68%-87%, and the retention volume averaged 53% full at the start of rain events (Locatelli et al., 2015). The authors extended their research to hydrologic impact of urbanization with extensive stormwater infiltration (Locatelli et al., 2017). They used a coupled MIKE SHE-MIKE URBAN groundwater model to explore the impact of urbanization with stormwater infiltration on groundwater height and water balance of a watershed. The hydrologic influence of urbanization with stormwater infiltration was investigated using different land use scenarios. This research found that an increase in urbanization with stormwater infiltration resulted in a rise in groundwater levels as a result of the changes in the water balance, specifically with impervious area construction decreased evapotranspiration, and increased recharge from stormwater infiltration systems.
| InfoWorks
InfoWorks is also a commercial model and there are different versions of it available for urban water management. For example, Wallingford Software UK developed InfoWorks CS (InfoWorks Combined Sewer), which integrates a relational database with spatial analysis to give a unified platform for asset and network modeling (Koudelak & West, 2008). It allows for accurate and consistent modeling of major components of combined sewer systems, such as backwater effects and reverse flow, open channels, trunk sewers, complex pipe connections, and ancillary structures. Many researchers adopted InfoWorks in a wide variety of studies such as integrated urban hydrologic and hydraulic modeling (Zhu et al., 2016), adaptation of urban drainage networks to climate (Kourtis & Tsihrintzis, 2021), RTC application for CSO impact mitigation (Dirckx et al., 2011), and LID applications for CSO management (Benisi Ghadim et al., 2016). InfoWorks was coupled with TETIS in the CSO impact assessment on a river system in Spain. TETIS is a hydrological model with physically based parameters spread in space that allows for the results to be obtained at any point in the basin while also taking into account the spatial variability of the water cycle (Andrés-Doménech et al., 2010). The results of these two models were combined to determine the final concentration of some pollutants in the river after CSOs. Mantegazza et al. (2010) applied InfoWorks in a study comparing the use of dynamic modeling with "normative criteria" to analyze the impacts of CSOs using a case study on the Lambro River and the Stream Bevera. The normative standards are the current approach in use in Italy for designing first-flush water tanks in order to decrease pollutant discharges. The results showed the existing regulation, which was based on effluent discharge standards, underestimated the size of CSO storage tanks. The authors suggested that the regulation needs to be based on stream standard approach and the CSO tanks need to be designed based on the pollutant concentration and volume of the CSO using a dynamic model (Mantegazza et al., 2010). Another study implemented InfoWorks in cluster analysis for the characterization of rainfalls and CSO behavior in an urban drainage area in Tokyo (Yu et al., 2013). InfoWorks CS was used to simulate all 117 rainfall occurrences recorded in 2007. The rainfall incidents were categorized using two sets of rainfall pattern criteria as well as CSO behavior. Similarity analysis was adopted to link clustered rainfall and CSO groups. The results showed that while small and intense events indicated significant correlations with CSO behavior, moderate events had a weak correlation. This means one can clearly identify patterns of important and negligible rainfalls in CSOs, whereas influences from the drainage area and network had to be included when evaluating moderate rainfall-induced CSO.
InfoWorks ICM (Innovyze Ltd, Oxfordshire) is another version of InfoWorks that was developed to combine natural watersheds and unnatural environment hydrology and hydraulics into a single integrated model (Gong et al., 2017). Peng et al. (2015) adopted InfoWorks ICM in two CSS case studies in Yangpu District, Shanghai. Model calibration and validation were performed using water level measured in the pumping station. They found that to decrease overload in pipelines, prevent manhole overflow, and minimize waterlogging period, conduit diameter, and the green area should be increased (Peng et al., 2015). InfoWorks ICM was integrated with SIMDEUM to establish a stochastic sewer model for hydraulic flow prediction (Bailey et al., 2019). Calibration of the model was performed against metered consumption data. The flow data acquired at the outfall of catchment was used to validate this model. The model was applied in several catchments in the Wessex Water area of the UK. Results showed that the model was more accurate than the classic continuous sewer models in terms of flow, depth, and velocity predictions. Moreover, a low water consumption scenario decreased overnight and daytime flows by up to 80%, but evening flows remained relatively unchanged. Stagnation times in household laterals remained the same while street-scale pipes had longer stagnation times than in the "current" water use scenario. (Edmondson et al., 2018), and optimization techniques (Shishegar et al., 2018) have been widely used in combined sewer systems management. Smart storage tanks (Troutman et al., 2020), smart pipelines (Stoianov et al., 2007), wireless sensors (Montestruque, 2008), smart metering (Lund et al., 2021), smart sensors (Tatiparthi et al., 2021), cloud computing (Troutman et al., 2017), supervisory control, and data acquisition (SCADA; Larry, 2000) have also been used in data collection, control, and operation of sewer systems.
Sensors play an important role in smart management of combined sewer systems (Pu & Lemmon, 2007;Ruggaber & Talley, 2005;See et al., 2021). Montestruque (2008) described a metropolitan scale wireless sensor actuator network that was developed to control the frequency of CSO events in South Bend Indiana (Montestruque, 2008). The system was known as CSOnet comprising 150 wireless sensor nodes that monitored 111 locations in the South Bend sewer system. Figure 3 illustrates the smart elements used in CSS management.
| Model predictive control
MPC is an adaptive control approach for combined sewer systems that recalculates the optimal control iteratively whenever new information about the state of the sewer system and new rainfall forecasts become available (Lund et al., 2018). Pedersen et al. (2017) used MPC to control the Barcelona sewer network system based on the benchmark model developed by Ocampo-Martinez (2010). The results demonstrated the importance of estimating rainfall inflows in order to optimize sewer network control. The MPC approach allowed the network capacity to reduce floods and the flow of wastewater directly into the sea (Pedersen et al., 2017). Lund et al. (2020) used MPC to dynamically control stormwater inlets and a pump to evacuate a retention basin. The MPC decides whether stormwater should be maintained in the gray-green infrastructure or permitted to enter the underground sewer system. A simulated proof-ofconcept study was performed using a small-scale watershed in Copenhagen with a cloudburst road and a retention space in an area served by a combined sewer system with one CSO structure. The results showed that when the prediction horizon was longer than the transport period (18.3 min) in the pipe system, MPC of stormwater inflows greatly reduced the number and amount of CSOs. The annual CSO reduction increased from 9.9% to 12.4% when the horizon was increased from 30 to 120 min (Lund et al., 2020). Ocampo-Martinez et al. (2005) compared active fault tolerant model predictive control (AFTMPC) to passive fault tolerant model predictive control (PFTMPC) in combined sewer system under realistic rain and fault scenarios (Ocampo-Martinez et al., 2005). AFTMPC minimized CSO flooding in all cases and in rain scenarios where the sewer network reached its design capacity; therefore, AFTMPC could prevent or significantly decrease flooding. Zimmer et al. (2015) developed a set of MPC genetic algorithms and tested them offline to see their efficacy in reducing CSO levels in a deep-tunnel sewer system during real-time operation. The GA methods used were micro-GA, probability-based compact GA, and domain-specific GA approaches. These methods limited the number of decision variable values analyzed within the sewage hydraulic model, thus lowering algorithm search space. Of these, the GA approaches that started with a coarse decision variable discretization and then switched to a finer resolution after initial convergence, produced the best control solution with the lowest computational demand. However, more testing on additional applications is required to confirm performance. Further research was needed to see if these results apply to other types of nonlinear optimization algorithms (Zimmer et al., 2015) such as differential evolution and particle swarm optimization. They expanded their research (Zimmer et al., 2018) to investigate the effects of long-term capital investments on CSO frequency. Replacing small-diameter pipes across the network that was creating high hydraulic grade lines could be an alternative strategy to mitigating CSOs in real-time with sluice gates. Conduit replacement was effective but costly, and optimization over a greater spatial extent (without conduit replacement) was demonstrated to minimize CSOs by 14%.
| Real-time control
RTC is associated with interventions on a combined sewer system in the events of stormwater inflow (Weyand, 2002). This can be accomplished by either modifying the stormwater flow direction or controlling the available storage capacity of existing detention facilities. The discharge (e.g., basin outflow, CSO, etc.) in terms of quantity and quality, the storage capacity, and the amount of precipitation are all useful information for the control action. RTC was applied in a wide range of studies related to combined sewer systems (Meng et al., 2017;van Daal et al., 2017;Weyand, 2002). Garofalo et al. (2017) developed a decentralized real-time control (DRTC) system based on a multi-agent paradigm and specifically a gossip-based algorithm and integrated it with the SWMM hydrodynamic simulation model to be applied to the UDS in Cosenza, Italy. Multi-agent systems make it feasible to obtain sophisticated emergent behaviors based on interactions among simple-behaving agents. On the other hand, there are numerous nodes connected through a network in a Gossip-Based Algorithm. Each node has a set of numerical values and can only communicate with a restricted number of peer nodes (i.e., its neighborhood). Despite only being able to communicate locally, the purpose of this type of algorithm is to estimate global aggregate values such as average, variance, maximum, and so on. The UDS in Cosenza has a set of adjustable gates that operate as actuators, as well as sensors that monitor the water level in each conduit. The DRTC algorithm successfully controlled the water level within the UDS, ensuring that the system real storage capacity was fully utilized. The findings showed that the DRTC had a positive impact on the management of the UDS by significantly reducing the risk of flooding and CSO (Garofalo et al., 2017). Seggelke et al. (2013) presented an integrated RTC for an urban drainage system in Wilhelmshaven, Germany. This fuzzy-based RTC approach was used to control both the sewer system and inflow to the WWTP. The primary goal was to decrease the number and volume of CSO located beside a bathing beach. The monitoring of the integrated RTC during 2011 demonstrated that CSO frequencies decreased by 23% and CSO volumes were lowered by 25%. Moreover, CSO volumes could be reduced by 40% on average in the case of single events (Seggelke et al., 2013). Jafari et al. (2018) coupled RTC applications with a PSO algorithm to compare the performance of two approaches including multi-period and single-period simulation-optimization that were used for regulating controllable elements of an urban drainage system. During heavy rains, the proposed models were used to discover the best pump and gate operating policies at each decision time. The multi-period optimization technique was effective in reducing or eliminating peak water level deviation from the allowable range in front pools of pump stations. It also resulted in a 59% reduction in the pumping station maintenance costs. However, doing multi-period optimization at each decision time adds to the computational load, which could restrict its applicability in larger, more complex urban drainage systems (Jafari et al., 2018).
| Optimization techniques
Optimization techniques were extensively applied within the context of smart management and real-time control of CSS (Shishegar et al., 2018). Wang, Lei, et al. (2019), developed an optimization method based on PSO to achieve a logical pump start-up depth while reducing the number of start-ups/shutoffs (H. Wang, Lei, et al., 2019). SWMM was used to calculate the objective function in assessing different trial solutions, and the PSO iterative computations were used to direct the search and find the optimal solution. The method was adopted in a case study in Beijing comprising nine pumping stations for both multistage pumping station optimization and single pumping station optimization. The multistage method yielded a small number of start-up/shutoff times (from 8 to 114 in different rainfall scenarios) and less pump operating time. However, the developed optimization method did not take pump efficiency into account, which was an important parameter in its operation optimization.
Bachmann-Machnik et al. (2021) optimized static outflow settings of CSO tanks using highly resolved online flow and quality monitoring data and real-time control strategies. The method was tested on two CSO tanks in a conceptual drainage system before six tanks in a case study in Southern Germany. In both the conceptual catchment and the case study, a 6-month measured time series was sufficient for reliable optimization results. An average reduction potential of 2% for the overflow volume and 6% for the overflow duration could be expected for the study area (Bachmann-Machnik et al., 2021).
Optimization techniques were also adopted in smart management of gray-green structures (Boulos, 2017;Oberascher et al., 2021;Paul & Andrew, 2015). A simulation-optimization framework for optimizing urban drainage systems employing hybrid green-blue-gray infrastructures (HGBGIs) and various degrees of centralization was developed and evaluated using a real case study in Ahvaz, Iran (Bakhshipour et al., 2019). HGBGIs of stormwater management systems could challenge the traditional gray-only pipe networks in terms of cost. In centralized networks, green-blue infrastructures (GBIs) were more effective, however, HGBGIs were more costly for decentralized usage compared to traditional solutions. Despite the fact that GBIs were more environmentally friendly and sustainable, system resilience was compromised in the process. Therefore, the optimal DC was determined by the objectives, and it varied in terms of cost, resilience, and sustainability. This suggested that optimal decisions could only be made using a multiobjective optimization framework. For example, a small increase in pipe widths could result in large benefits in resilience at a reasonable cost increase and a minor reduction in sustainability. A multi-objective optimization approach was built by integrating an updated NSGA-II with SWMM to undertake drainage network rehabilitation using pipe substitutions and storage tank placement (Ngamalieu-Nengoue et al., 2019). Flood damages were calculated in monetary terms based on the water level of flood. A collection of Pareto fronts linking both investment and damage costs was obtained. Network managers could use these solutions to make decisions about rehabilitation plans and investments while working within budget limitations of a project.
| Artificial intelligence
AI can be applied as model-based AI or data-driven-based AI depending on the case study. Model-centric AI refers to the development of an AI system that incrementally improves an existing model (algorithm/code), while maintaining the amount and form of the collected data fixed. On the other hand, developers of data-centric AI maintain a fixed model while continually enhancing the quality of the data (Hamid, 2022). Advanced data-driven methods including machine learning (ML; Hadjimichael et al., 2016) and deep learning (DL; Kazaz et al., 2021) were adopted in urban water resources management (Xiang et al., 2021). Artificial network generator (ANN), ANFIS, Gaussian process regression, Wavelet Transform-AI integrated models are among several AI models that have been used in sewer system management (Zhu & Piotrowski, 2020). Although artificial intelligence is applied in many urban water cycle-related studies, limited number of studies related to CSO control and mitigation is reported and further research is required to fill this gap. Furthermore, Additional AI studies are required for the management of stormwater drainage sewers. The following studies present the AI applications on urban waters.
ANN was used to predict CSO performance in a catchment in north of England (Mounce et al., 2014). The depth of flow in the CSO chamber and rainfall radar records were utilized as training data to establish the relationship between the parameters. The outcomes demonstrated that the technique could predict CSO depth for unseen data five-time steps in advance with less than 5% error. The method eliminates the need for manual modeling overheads and calibration data needs, making it a very helpful alternative to creating a full physical-based model of a catchment. Sousa et al. (2014) applied ANN and support vector machines (SVMs) to evaluate the performance of AI in the prediction of sewer systems structural condition (Sousa et al., 2014). This methodology can be useful to estimate probable structural problems and faults within combined sewer systems due to the fact that CSSs have been used for a long time. Therefore, these types of innovative applications would prevent enormous maintenance costs. They compared the performance of these methods with logistic regression in a case study. The uncertainty associated with ANNs and SVMs was defined, as well as the comparative results of a trial-and-error technique versus optimization algorithms to construct SVMs. Halfawy and Hengmeechai (2014) implemented ML to assess municipal sewer pipes. This study provided a pattern recognition algorithm to automatically detect and classify pipe problems in images generated from traditional closedcircuit television (CCTV) inspection video. To identify pipe faults, the algorithm adopted histograms of oriented gradients (HOG) and SVM. The algorithm used image segmentation to derive suspicious regions of interest which indicated candidate defect areas. These regions of interest were classified using an SVM classifier that was trained using HOG features taken from both positive and negative samples of the defect. The proposed approach was exercised to solve tree root intrusion detection. The performance of SVM classifiers with linear and radial basis functions was tested. The algorithm tested on actual CCTV videos from the Canadian cities of Regina and Calgary. The results demonstrated the algorithm feasibility and resilience (Halfawy & Hengmeechai, 2014). Another study applied ML methodologies to categorize flood versus nonflood events using a rainfall threshold in Shenzhen, China (Ke et al., 2020). ML projected numerous rainfall threshold lines in a plane spanned by two principal components, yielding a binary outcome (flood or no flood). The proposed models, particularly the subspace discriminant analysis, could classify flooding and nonflooding by combinations of multiple-resolution rainfall intensities, increasing the accuracy to 96.5% and lowering the false alert rate to 25% compared to the conventional critical rainfall curve. The crucial indices of accuracy and true positive rate obtained in ML models were 5%-15% higher than in conventional models.
Deep learning is an approach to AI that is specifically a form of machine learning, which use multiple hidden layers to extract features from raw data (Goodfellow et al., 2016). Dong et al. (2020) developed and tested a hybrid deep learning model for urban flood prediction and situation awareness using channel network sensor data called fast, accurate, stable, and compact gated recurrent neural network-fully convolutional network (FastGRNN-FCN). They used the data of three historical flood events from 2016 and 2017, which were collected from the channel sensor in the Harris County, Texas to train and validate the hybrid DL model. The hybrid DL model was used to forecast a flood event in 2019 in Houston and the results are comparable with flood modeling using empirical methods. The results showed that the model could accurately forecast spatial-temporal flood propagation and recession, and that it might be used by emergency responders to prioritize flood response and resource allocation strategies (Dong et al., 2020). Additional research proposed a method for automatically detecting and localizing manhole covers using convolutional neural network (CNN) DL approach in very high-resolution aerial and remotely sensed photos (Commandre et al., 2017). This is more extensive than current small object detection/localization approaches because the full image was processed without prior segmentation. More than 49% of the ground truth database was detected with a precision of 75% in the initial experiments using the Prades-Le-Lez and Gigean datasets. Another paper used a DL technique called faster regionbased convolutional neural network (faster R-CNN) to develop an automated approach for detecting sewer pipe defects (Cheng & Wang, 2018). 3000 CCTV inspection photos of sewer pipes were used to train the detection model. Using mean average precision, missing rate, detection speed, and training time, the model was evaluated in terms of detection accuracy and computing cost after training. The proposed approach was effective in detecting sewer pipe defects with high accuracy and speed.
| GIS and RS applications in sewer system management
The integration of GIS and RS data can be a powerful tool for generation of input data in water resources management and specifically CSS management. The first GIS systems and basic spatial data handling techniques made their way into computer technology in 1960s. However, the use of GIS in the field of water management started in 1980s and it is on the rise (Tsihrintzis et al., 1996). Since then GIS has been used as a powerful tool for storing, organizing, and visualizing geographical data, which is common in water management (Wilson et al., 2000). In a pioneering study on GIS applications in urban stormwater management, it was demonstrated that GIS could help with issues like data precision, accuracy, resolution, and degree of aggregation (Meyer et al., 1993). When compared to previous methods, GIS provided a more accurate assessment of the reliability of calculated parameters. In contrast to traditional methodologies, GIS provided a consistent and reliable method of estimating model parameters and input data for stormwater modeling. Sample et al. (2001) reviewed the application of GIS in urban stormwater modeling. A GIS application in urban stormwater management was described. A GIS, a database, a stormwater system design template, and an optimization capability for screening alternatives were included in the neighborhood scale program. Runoff from GIS data was calculated using the soil conservation service (SCS) approach, which is based on area and soil type (Sample et al., 2001). GIS was selected in a preliminary infiltration rating (PIR) calculation that evaluates the possibility of a future surface infiltration based-stormwater control measure at a given location (Tecca et al., 2021). The surface saturated hydraulic conductivity, depth to the water table, slope, and relative elevation were the input variables. Maintenance inspections from 104 rain gardens, done by Minnesota Anoka Conservation District, were used to calibrate and validate the PIR. In 85% of rain gardens, the PIR predicted an accurate or reasonable performance estimation. The PIR can assist in the site-specific investigations and act as an excellent planning tool for future surface infiltration based-stormwater control measures in the land development process.
Likewise, RS technology is also a powerful integration in urban water management. RS technology has been applied in studies such as land use/land cover and impervious surface determination, and other hydrologic parameters like rainfall, temperature, snow cover, elevation, and sewer system physical properties (Abellera & Stenstrom, 2005;Ravagnani et al., 2008;Slonecker et al., 2001). Cermak et al. (1979) used Landsat multispectral scanner (MSS) data in their own developed classification technique and tested it in the Crow Creek and Walnut Creek watersheds near Davenport, Iowa, and Austin, Texas, respectively. The land uses generated as a result of the classification were applied to the stormwater model Hydrologic Engineering Centre (HEC) developed by the US Army Corps of Engineers. Discharge frequency curves based on Landsat MSS were similar to those based on traditional land uses. The flood monitoring and damage estimation relied heavily on these curves (Cermak et al., 1979). In another paper, radial-basis-function neural network (RBF-NN) and the ANN artificial intelligence techniques were applied on panchromatic imageries of Landsat thematic mapper (Landsat TM) and Korea multi-purpose satellite (KOMPSAT) in land use/cover classification in an area in Korea (Ha et al., 2003). The outcome was exerted as input for SWMM to predict stormwater runoff quantity and biological oxygen demand (BOD) loading. Classification accuracy and percentile unit load significantly affected runoff, peak time, and pollutant emissions. Park and Stenstrom (2006) implemented RS to predict stormwater pollutant loadings using Landsat-enhanced thematic mapper Plus (ETM+) images. They presented a Bayesian network to classify RS images of the Marina del Rey area in the Santa Monica Bay watershed. TSS, chemical oxygen demand (COD), nutrients, heavy metals, and oil and grease were among the eight water quality metrics studied. The findings provided thematic maps with spatial predictions of each pollutant load, allowing the highest pollutant loading regions to be identified. These results could be significant in defining the optimal stormwater pollution management techniques at regional and global scales, and determining total maximum daily loads in the watershed (Park & Stenstrom, 2006).
GIS and RS applications have been combined in studies on managing urban water (Svejkovsky & Jones, 2001;Tsihrintzis et al., 1996). In one of the early research in managing stormwater, RS technology was used in automatic inference of elevation and drainage models from a satellite image (Haralick et al., 1985). Thanapura et al. (2007) integrated GIS and RS to determine the runoff coefficient. The goal of this study was to apply an unsupervised classification and the iterative self-organizing data analysis techniques (ISODATA) technique to map impervious area and open space for the determination of runoff coefficient in GIS spatial modeling using 8-bit and 16-bit QuickBird normalized difference vegetation index (NDVI) satellite images. The impervious area and open space were mapped using high spatial resolution NDVI satellite images created with the ISODATA algorithm. This was an efficient and successful information extraction method for reliably predicting spatial representative C values. The six QuickBird NDVI thematic maps produced had similar classification accuracies, averaging around 92%. The C values were generated in GIS spatial modeling and compared to the industry standard C to investigate high spatial resolution satellite data and to validate the composite runoff index geographic model created by Thanapura in 2005 and. Finally, it was agreed that the greater resolution image and mapping approach improved land cover discrimination and resulted in more accurate C estimation (Thanapura et al., 2007). Aerial images and height data have been used to determine the coefficient of imperviousness (Paul et al., 2018). In this work, random forest (RF) and conditional random fields supervised classification techniques were compared. The outcomes of land cover classification demonstrated that none of the classifiers had an obvious advantage, with both having an overall accuracy of 85.5%. The results required modification to account for the occlusion of the ground surface by trees to calculate the coefficient of imperviousness. This was accomplished using a heuristic approach that employed data from a GIS. The best coefficient of imperviousness result was obtained using the RF classifier with a root mean square error of 3.8%.
GIS and RS have also been integrated into stormwater and flood modeling research (Sytsma et al., 2020;Wang & Xie, 2018). Hong et al. (2017) combined the 2D-surface two-dimensional runoff, erosion, and export (TREX) model and the 1D-sewer CANOE model in the TRENOE platform at a small urban catchment near Paris. The detailed land-use data generated from various information sources was a critical feature for reliable simulations. Khin et al. (2015) used a high-resolution WorldView-2 image in a two-stage classification process and implemented the outcome in hydrologic modeling and performance evaluation of several LIDs. In classifying the same urban region into six land cover classes, the suggested two-stage classification method achieved an overall accuracy of 80.6%, compared to 68.4% for a traditional pixel-based method. The hydrologic parameters of micro-sub-catchments were fed into the SWMM to analyze the performance of LIDs based on the classification results. In a typical low-rise residential area in San Clemente, California, the use of porous pavement and bioretention reduced run-off volume by 18.2% and 37.1%, respectively (Khin et al., 2015). Sytsma et al., (2020) coupled GIS with python stormwater management model (PySWMM) and regression tree analysis to predict hydrological connectivity of impervious surfaces. Impervious areas were separated into directly or physically connected and variably connected including impervious area that drained into pervious area categories. GIS was used via an ArcGIS tool to enable the application of these methods in real life. This was used to, delineate subcatchments, extract the impervious area categories, apply the regression tree algorithm to predict incident rainfall fraction, and summarize the resulting hydrologically connected impervious areas by sub-catchment. The connectivity of the impervious areas were mainly sensitive to the soil type, rainfall depth, area fraction, and antecedent soil moisture conditions of the downslope pervious area parameters (Sytsma et al., 2020).
| CHALLENGES AND FUTURE PERSPECTIVES
The management of sewer systems and mitigation of CSOs is a very complex field and there is a need to adopt a casespecific holistic approach. Within the scope of this approach, building partnerships between key stakeholders to develop preliminary goals and identify issues of concern plays an important role. Problem characterization, setting goals, and proposing solutions require significant knowledge of management science and engineering which includes a multidisciplinary understanding of hydrology, hydraulics, geographical information systems, meteorology, computer science, and so forth, to develop holistic solutions. This interdisciplinary approach is needed to build, operate, and optimize climate-resilient and robust wastewater infrastructure that can protect human and ecosystem health under the most challenging conditions. Modeling that integrates catchment properties, climate, and receiving water bodies can help to develop holistic solutions which enable better management and integration of sewer systems and end-of-pipe WWTPs.
The mechanistic stormwater and CSO models comprise the mathematical representation of the relevant real-case situations and generate results based on the principles of physics and chemistry (Al-Amin & Abdul-Aziz, 2013). These models require moderate or extensive input data and most of the time the input data comes with several uncertainties. For instance, digital elevation models (DEMs) are used to delineate the catchment and flow routing. Moreover, satellite imagery is used to generate LULC and soil maps. However, there are several sources of error associated with remote sensing-derived information (Jensen, 2015). Data-driven models analyze data about CSO and stormwater by providing impulse-response type relationships between variables. These models try to correlate and find the connection between variables without considering the principles of physics and chemistry. Whereas, selecting the optimum relationship between variables is a challenge due to complex interactions among different parameters and requires the user to have good knowledge of the physical and chemical processes happening within the system. Another limitation of the datadriven models is that they are mostly constructed based on correlations which are site-specific and can significantly vary from site to site. Data-driven models also suffer from uncertainties related to data. The question of which models should be preferred has always been under the spotlight. Rather than being comparable modeling approaches both mechanistic and data-driven models can complement each other. Mechanistic models can be used to learn the physiochemical correlations of different parameters and variables within a system while data-driven models will complete the missing elements.
Different CSO and stormwater mechanistic models have their own advantages and disadvantages. Some models such as SWMM has kept its application as simple as possible to be able to save time and manage the computational load. Moreover, this simplified representation of the model has made parametrization, sensitivity analysis, calibration, and validation manually doable. However, this simplification sometimes could be a disadvantage due to neglecting some processes happening within the catchment. All these factors bring out the necessity of an automated sensitivity analysis, calibration, and validation method for the model. Moreover, the rapid urbanization has let to expansion in catchment areas and increase in the interaction of rural and urban hydrology while SWMM is being mostly preferred for small catchments (Niazi et al., 2017). This interaction increases the complexity of the processes happening within the urban catchments and the simplified representation of SWMM may become a disadvantage pending on modeling objectives and spatial scale magnitude. Moreover, there is regularly input to urban catchments either in terms of surface runoff or subsurface runoff. Therefore, a simple portrayal would not be enough to effectively model stormwater and CSO. Based on the objectives of the modeling, the aforementioned issues could be dealt through advancing SWMM to deal with moderate to big complex catchments or integrate with a hydrological model such as soil and water assessment tool (SWAT) which can be used for modeling large watersheds.
An urban environment is a set of systems either interconnected to each other or separated from each other. The bigger the systems the more the probability of interaction with other systems. Sewer systems are also a part of this set with a different magnitude based on the urban environment. The larger the magnitude the more need for a tool that is useful for better analyzing, organizing, and decision making. Moreover, most of the time the problems happening within these systems are location based. GIS and RS are the location-based sources of information that could be used to express the reality and connections between different elements of the sewer system. However, integrating GIS and remote sensing with CSO and stormwater models is a big gap identified within this survey. This integration is mostly seen within the commercial models. The integration will not only promote multidisciplinary research but also save time for researchers to focus on other research gaps as well. For example, the open-access SWMM does not have a GIS integration, whereas most of the input data required for model setup are geospatial data. Moreover, one of the factors that may have prevented the SWMM to be preferred for larger catchments is the lack of GIS integration because dealing with that amount of data manually is tiresome. Therefore, GIS and RS integration will make research easier and will open doors for addressing other research gaps such as AI adoption in urban water management. Thanks to the advancements in satellite technology, GIS and RS could be used to create data with high resolution and accuracy which will be used as input for modeling studies.
Adopting data-driven models for CSO and stormwater management is not something recent since RTC, MPC, and several optimization techniques have already been used in different research. However, AI integration is a big gap that has a long way to be closed. Even though this survey mentioned the AI applications in urban water cycle, very limited research about CSO management were encountered. AI can be applied for smart management of urban water network, stormwater and CSO modeling, and automated calibration of the mechanistic models. Despite AI models such as ANN, ANFIS, Gaussian process regression, Wavelet Transform-AI integrated models have been used in sewer system management, applications of reinforcement learning are rare, and it is expected to be a future trend. Reinforcement learning (RL) has evolved as a state-of-the-art methodology for autonomous control and planning systems throughout the AI research areas (Mullapudi et al., 2020). The RL problem is defined as a framework in which an agent seeks to maximize a reward function supplied by an environment by selecting actions from a pool of possible interactions known as a policy (Ochoa et al., 2019). RL specifically can be applied in smart management of sewer network and gray and green infrastructures. Lastly, AI is already a fundamental part of remote sensing technology in terms of DL, and smart city, and digital twin concepts.
Finally, yet importantly, concepts like smart city and digital twin alias Metaverse application in CSO and stormwater management is a cutting-edge of today but expected to be the norm of the near future. In terms of CSO and stormwater management, Metaverse will not only be used for autonomous control of sewer networks but also in tracking the past of the physical environment and predicting the future state to prevent flooding and other disasters related to sewer systems. In Metaverse, the digital copy of the sewer network can be created and synchronized with the physical network. Metaverse can be used to monitor the physical environment and receive operational insights. Moreover, using AI different simulations can be applied to the copies of this digital counterpart to obtain the optimum and costeffective operational scenarios.
Although calibration and validation are used to verify the proposed approach in both mechanistic and data-driven modeling studies, their accuracy is related to several factors including data and modeling uncertainties, unavailability of critical data for calibration, and validation for both wet and drought periods. However, as metaverse will create a virtual representation of the physical environment and monitor its physical counterpart itself, uncertainties related to data, and unavailability of calibration and validation data for the required period are anticipated to be less effective, therefore more accurate results might be obtained.
| CONCLUSION
This study presents a comprehensive survey of combined sewer overflow management research. At first, the history of urban water supply and sanitation, CSO definition in terms of characteristics, problems, and its impact on receiving waterbodies were introduced. Subsequently, the studies utilizing mechanistic and data-driven modeling, and operational controls including RTC, MPC, optimization techniques, and AI for CSO and stormwater management were discussed. Finally, a review of GIS and RS applications in urban water management is given. The following conclusions can be drawn from this work: • Combined sewer overflow adversely affects human health and aquatic ecosystems, and it is proven to be a significant pollution source. • Due to the population increase and climate change, sewer systems capacity is not adequate in most cases, and enlarging the infrastructure is not feasible due to economic and land restriction factors. As a result, the need for dynamic and smart management of CSO has increased. • Mechanistic modeling applications such as SWMM integrated with LIDs can be used to statically control CSO. However, these applications are more effective when urbanization rate and climate change phenomena are considered. Moreover, LID applications not only aid in CSO control but also promote hydrological cycle. • RTC, MPC, and optimization techniques are significantly used in the context of CSO management. However, only a limited number of studies related to AI and ML applications in CSO control and mitigation is encountered in the literature and further research is required to fill this gap. • AI and ML can be used in the intelligent management of gray-green infrastructures and sewer networks, and/or applied as data-driven models for CSO modeling and site specification of LIDs. Further research is needed to evaluate the potential of these methods in these two applications. • GIS and RS applications have the potential to capture, manage, and analyze data related to CSO, but may also provide opportunities for the agile management of CSO. RELATED WIREs ARTICLES Testing the impact of at-source stormwater management on urban flooding through a coupling of network and overland flow models Integrated modeling in urban hydrology: Reviewing the role of monitoring technology in overcoming the issue of 'big data' requirements | 15,269.8 | 2023-01-21T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
Quasi-Probability Husimi-Distribution Information and Squeezing in a Qubit System Interacting with a Two-Mode Parametric Amplifier Cavity
: Squeezing and phase space coherence are investigated for a bimodal cavity accommodating a two-level atom. The two modes of the cavity are initially in the Barut–Girardello coherent states. This system is studied with the SU(1,1)-algebraic model. Quantum effects are analyzed with the Husimi function under the effect of the intrinsic decoherence. Squeezing, quantum mixedness, and the phase information, which are affected by the system parameters, exalt a richer structure dynamic in the presence of the intrinsic decoherence.
Introduction
Quantum coherence and correlations are the main resources for more quantum applications [1,2], such as teleportation, cryptography [3][4][5] and quantum memory [6,7]. Quantum coherence arises from superposition and is a prerequisite for many types of quantum entanglement [8], such as discord, nonlocality and steering [9]. von Neumann entropy [10] and linear entropy [11] are utilized to estimate the amount of entanglement generated by a pure quantum state [1] and a mixed state [12].
Quantum phase information and quantum coherence are quantified by Wehrl density and entropy. These two measures are based on Husimi distribution function (HF) [13]. One of the main advantages of Husimi distribution is the positivity of the distribution [14]. The associated Wehrl entropy of HE gives quantitative and qualitative phase space information about a pure/mixed qubit state [15][16][17].
The interaction of quantum systems addresses an attractive topic for its multiple applications in quantum optics and quantum computing. In particular, three types of interactions have been extensively studied: field-field [31], field-qubit [32], and qubit-qubit couplings [33,34]. These interactions participated to various phenomena observed in experiments [35].
The two-photon field in quantum systems contains a large amount of entanglement between the photons emanating from the interaction cavity. Several models that achieve two-photon transition have been explored experimentally for two-photon microscopic maser [36].
More attention has been paid to nonlinear interactions between electromagnetic fields and other quantum systems. From these interactions arise phenomena related to physical applications-for example, stimulated and spontaneous emissions of radiation, Raman and Brillouin scattering. The nonlinear interactions are divided into two main types, the first one is a multi-mode type of frequency amplifier while the second is a multi-mode type of frequency converter [37,38]. Two-photon transitions are another resource for non-classicality, presenting high quantum correlations between emitted photons. They were experimentally realized via two-photon micromaser [36].
The decoherence in quantum qubit-cavity systems destroys the quantum effects, which are generated due the unitary qubit-cavity interactions [39][40][41][42]. The decoherence have several origins, such as the interaction of the system with the environment without energy relaxation and the intrinsic decoherence (ID) that occurs without the interaction with the surrounding environment. In the ID models, the quantum effects deteriorate when the closed qubit-cavity system evolves [43][44][45].
In this manuscript, we propose a model containing a two-or four-photon field coupled to a qubit system. Through transformations between the modes, the model is generalized to the SU(1, 1) algebraic system. Then, we analyze the dynamics of quantum coherence (based on the quasi-probability Husimi distribution), and the quantum squeezing when the two-mode parametric amplifier cavity fields start with a Barut-Girardello coherent state.
The rest of this manuscript displays the ID model and its dynamics in Section 2. The study of the phase information via the Husimi function and Quantum coherence via Wehrl entropy is presented in Section 3. The squeezing phenomenon is analyzed in Section 4. Finally, Section 5 is dedicated to the conclusion.
Physical Model
Here, the Hamiltonian describes a two-level atom (qubit with upper |1 A and |0 A states), interacts with two cavities involving a two parametric amplifier fields. the cavity-qubit interactions are through a two-photon transition [31,46]. The cavity-qubit Hamiltonian of the system iŝ where ω 1 and ω 2 design the frequencies of the bimodal cavity fields, which have the annihilation operatorsψ 1 andψ 2 , respectively. The constant λ represents the coupling between the qubit system and the two-mode parametric amplifier.
Here, we consider the case where we set ω = ω 1 = ω 2 with the representation of SU(1, 1) Lie algebra generators (K ± ,K z ): whereK 2 = k(k − 1)Î is the Casimir operator and k is the Bargmann number, Therefore, the Hamiltonian of Equation (1) is then rewritten as: whereK + ,K − andK z applied on the eigenstates |l, k givê It is found that the decoherence [39][40][41] has crucial effect on quantum squeezing and quantum coherence. Here, we adopt an important type of the decoherence known as "intrinsic decoherence" (ID). The master equation of the intrinsic decoherence can be written as [43] d whereρ(t) represents the time-dependent SU(1, 1)-SU(2) system density operator and γ is the parameter of the decoherence.
We focus on the case where the atom described by the SU(2)-system starts with the upper state-i.e., ρ A (0) = |1 A 1 A |, while the two-mode parametric amplifier cavity fields start with a Barut-Girardello coherent state (GB-GS) [47], described by where I ν (x) represents the modified Bessel function. Based on the considered initial states, and the Hamiltonian eigenstates of Equation (4), the analytical solution of the Equation (6) in the space state {|1 A , i, k , |0 A , i + 2, k } can be expressed as with while V ± i represent the Hamiltonian eigenvalues of Equation (4), where δ = (ω 0 − 2ω)/2 is the detuning between the frequencies of the qubits and the two-mode parametric amplifier cavity fields. Before exploring the quantum effects generated in this system, let us define the atomic reduced density matrix where Tr F represents the operation of the tracing out the state of the two-mode parametric amplifier cavity.
Husimi Distribution (HD)
Here, we consider the quasi-probability Husimi distributions that depend on the reduce density matrix elements of the qubit system. Quantum effects of the HD and their associated as Wehrl entropy, phase space information and the mixedness, will be analyzed.
Husimi Function
For, angular momentum j, the j-spin coherent states |θ, φ [48,49], can be written as For a qubit state, spin-1 2 (j = 1 2 ), the Bloch coherent state, |µ , is defined by The SU(2)-system is identified in the phase space by θ, and φ, where dµ = sin θ dθ dφ. Therefore, the H-function (HF) is defined by [13] H Due to the fact that the H(µ, t) depends on the angles of the phase space distribution, it is used as a measure of the information loss for the SU(2)-system. Figure 1 illustrates the effects of the detuning and the intrinsic decoherence on the behavior of the Husimi distribution H(µ, t) = H(θ, φ, t). From Figure 1a, the H-function has distributed regularity with 2π-period with respect to the angles θ and φ. In general the H-function has periodical-peaks with a Gaussian distribution, and the distribution is centered at ( θ π , φ π ) = ( 2n+1 2 , 2n+1 2 ), (n = 0, 1, 2, 3, ...). In the absence of ID and detuning, the maximum values of the peaks increase with enhancement of φ as can be observed in Figure 1a. The location of the peaks does not change but reduces with the increase in φ after considering the detuning. Both peaks and bottoms are squeezed and almost vanish after adding the decays, as seen in Figure 1c.
In the resonance case, and neglecting the intrinsic decoherence, the H(t) function oscillates chaotically; the amplitude of the oscillations range from 0.02 to 0.13. We also note that the function reaches its smallest values at nπ 2 (n = 1, 2, 3, ...) as can be observed in Figure 2. The chaotic oscillations become regular for the off-resonant case and the maximum values are enhanced. Note that the maximum values are periodically achieved and are following the pattern (0, 0.4, 0.8, 0.12, ...) as shown in Figure 2. After considering the ID, the previous fluctuations will completely disappear over time, and the Husimi function tends to the value 0.8, as shown in Figure 2.
Wehrl Entropy
For the case where the intrinsic decoherence is existing, the atomic Wehrl entropy [50] is used to quantify qubit mixedness, which measures the entanglement in closed systems γ = 0 [51,52]. It is defined by where the Wehrl density of the Husimi function D(µ, t) is represented by When the the SU(2)-system is initially in the excited state |1 A , the Wehrl entropy is given by Therefore, it satisfies [53], The initial value of E(0) is for a pure state, and its maximal value ln(4π) indicates that the qubit is a maximally mixed state. The Wehrl entropy is a good quantifier that provides a measure about the degree of the mixedness of the considered qubit state.
In the absence of the decoherence, the dynamics of the Wehrl entropy, E(t) is shown (in Figure 3a) to investigate the generated partial and maximal entanglement between the atomic SU(2)-system and the SU(1, 1)-system of the two-mode cavity. We note that the generated entanglement grows with a quasi-regular oscillatory dynamic. In resonance and by neglecting the decoherence, we see that the Wehrl entropy function E(t) oscillates regularly. We also find that the function reaches the smallest values periodically at points nπ 2 (n = 1, 2, 3, ...), which is completely consistent with the observations mentioned in the previous section, see the Figures 2 and 3a).
By considering the intrinsic decoherence, the Wherl entropy E(t) starts from its initial value E(0) of the pure qubit state |1 A and quickly reaches the maximum value, without oscillations, as can be observed in the Figure 3a).
For the non-resonant case, without ID, the function E(t) fluctuates more than the previous case. When the intrinsic decohernce enters into play, gradually the oscillations are eliminated and the maximum values are quickly attained.
Squeezing Phenomenon
Based on the Pauli operatorsσ j r of a qubit system, the atomic SU(2)-system information entropy (AIE) is defined by [10,11] The AIE is used as a general criterion for the the atomic SU(2)-system squeezing. For δS r ≡ exp[S r ], the entropy uncertainty relation is therefore, these information entropies satisfy the entropy uncertainty relation [54] S x + S y + S z ≥ 2 ln 2.
Based on the AIEs, the entropy squeezing is given by The fluctuations in the atomic SU(2)-system componentsσ r (r = x, y, z) are indicators of "squeezing phenomenon" if the condition E r (t) < 0 is occurred. It is found that the component σ x is not squeezed-i.e., E x (t) > 0. Consequently, the dynamics of the entropy squeezing of the componentσ y is investigated only.
In Figure 4, the intrinsic decoherence effect on the dynamics of the entropy squeezing E y (t) is shown for the resonant and non-resonant cases. Solid curve of Figure 4a, representing the case where δ = 0, illustrates that the squeezing phenomenon of the entropy squeezing appears during several time windows, periodically with the π-period.
In the case where zero-detuning takes place and in the absence of the intrinsic decoherence, we notice that the squeezing intervals appears periodically around the points λt = nπ(n = 0, 1, 2, ..., due to the unitary interaction. These squeezing intervals are decreased and vanished after we consider the ID decay, as seen in Figure 4a. Figure 4b illustrates that the non-resonant case leads to the destruction of the squeezing hefting the minima E y (t). The squeezing disappears completely after the intrinsic decoherence is considered.
Conclusions
We have explored here, the dynamics of bimode cavity fields incorporating a qubit system by applying the SU(1, 1)-algebraic representation. The dynamics of the Husimi distribution and its associated Wehrl entropy, as well as the entropy squeezing are discussed. The non-resonance amplifies the fluctuations of those quantities. The intrinsic decoherence reduces the squeezing intervals. The detuning leads to the enhancement of the generated Wehrl mixedness entropy, and delays the appearance of the stationary mixedness. For the off-resonant case, the squeezing decreases significantly. It is found that the phase space Husimi distribution information, the quantum coherence (qubit-cavity entanglement and atomic mixedness) and the quantum atomic squeezing are very sensitive to the nonlinear qubit-cavity couplings. | 2,851.8 | 2020-10-19T00:00:00.000 | [
"Physics"
] |
Corrigendum: Climate change in sub-Saharan Africa: Nature restoration as an ethical issue
Copyright: © 2020. The Authors. Licensee: AOSIS. This work is licensed under the Creative Commons Attribution License. In the version of the article initially published, Buwani, D.N. & Dolamo, R.T.H., 2019, ‘Climate change in sub-Saharan Africa: Nature restoration as an ethical issue’, Theologia Viatorum 43(1), a4. https://doi.org/10.4102/TV.v43i1.4, on page 4, the acronym ‘REDD’ was incorrectly defined. The correct definition for the acronym ‘REDD’ should be ‘reducing emission from deforestation and forest degradation’ instead of ‘deforestation and forest degradation’. The correct definition of the term is updated in the sentence as follows:
Introduction
Today, climate change in sub-Saharan Africa remains a big threat. Climate change is the result of human activities, the burning of fossil fuel and the clearing of forests (Arnold 2011:1). Therefore, mankind is facing consequences of climate change such as floods, sea level rise and extreme weather and deviations in rainfall (Levy & Sidel 2014:33).
In this article, the authors will analyse the consequences of climate change threats. Climate change will affect every continent and particularly the African continent will be the most affected as it depends on natural resources (Peach Brown 2011:164). Ethical virtues will play a key role by preserving future generations. To avoid the negative impact of climate change, the authors recommend mitigation and adaptation measures to reduce greenhouse gases in the atmosphere in order to protect our lives and biodiversity. Also, the implementation of renewable energy will be required to avoid pollution in sub-Saharan areas. To achieve a sustainable development, the contribution of all will be required, especially of politicians and economists.
Climate change
Climate change is the variability of the temperature, 'a permanent change in weather conditions' (Longman 2003:276). Many scientific reports suggest that humanity is causing environmental change at an unprecedented rate (Gardiner 2012:241). If current trends continue, massive devastation will be inflicted on non-human life, future humans and the current poor will be in danger (Gardiner 2012:241). Greenhouse gases from any part on the Earth's surface enter the atmosphere and affect the climate globally (Gardiner 2010:88).
Current climate change is driven by increasing atmospheric concentrations of greenhouse gases in the atmosphere, for instance, carbon dioxide (CO 2 ), methane (CH 4 ) and nitrous oxide (N 2 O). The reason for the build-up of these gases is the burning of fossil fuels, the clearing of forests, and other activities of mankind (Union of Concerned Scientists 2003:7). A similar viewpoint is expressed by Arnold (2011) who states that: It is well understood that the Earth's climate is changing as a result of human activity. More specifically, the climate is changing because of the inefficient consumption of fossil fuels and rapid deforestation. A changing climate will place present and future human populations in jeopardy and the poor will be most adversely impacted. (p. 1) Some of the many consequences of climate change are global warming, deviations in rainfall, sea level rise, extreme weather events, droughts, and floods (International Panel on climate change [IPCC] 2001;Levy & Sidel 2014:33). Climate change threatens human health and well-being, for instance, infectious diseases, food insecurity and malnutrition and mental disorders (Levy & Sidel 2014:33). Collective violence because of climate change threatens basic human rights as it is written in the Universal Declaration of Human Rights (UDHR, art. 25). For example, it threatens the right to standard of living adequate for health and well-being, also rights to food, housing and https://theologiaviatorum.org Open Access social services, as well as the right to security (Levy & Sidel 2014:33).
The Intergovernmental Panel on Climate Change suggests that greenhouse gas emissions are changing the global climate, and that (IPCC 2001(IPCC , 2007(IPCC , 2012): Africa will experience increased water stress, decreased yields from rain-fed agriculture, increased food insecurity and malnutrition, sea level rise, and an increase in arid and semi-arid land as a result of this process. (n.p.) The IPCC 2007 report highlights that 'extreme weather events, notably flood, drought and tropical storms are also expected to increase in frequency and intensity across the continent' (IPCC 2007:n.p.). Climate change will affect human populations across the world. However, not every continent will be affected in the same way. Many reports showed that it will be the global poor who will face the effects of global climate change. It will be the poorest regions of the world with the least amount of resources to mitigate those negative effects (Mastaler 2011:66-67 Simultaneously, other critics observe that the rising temperatures increase diseases such as malaria. Mosquitoes reproduce rapidly with the increase in temperatures. This led to an increase of malaria in highland areas such as Nairobi, Kenya that have not experienced the disease. Today, malaria accounts for more than 80% of climate-related diseases in Africa (Anderko 2014:33).
Climate change will bring many consequences in the poorest regions of Africa. In his research, McMichael highlights that climate change will reduce food security, agricultural and fishery yields, especially in the sub-Saharan Africa region. The flooding and drying cycles will also increase the risks to agricultural productivity in sub-Saharan regions (McMichael, Barnett & McMichael 2012:647). This point is emphasised by other researchers who pointed out that these projections are consistent with recent climatic trends in southern Africa (Brown et al. 2012:1). The changes in climate are exacerbated by the high levels of sensitivity of the social and ecological systems in the region. There is also a limited capacity of civil society, private sector and government actors to respond to these threats (Brown et al. 2012:1). This situation is visible in the Congo Basin region, where the rainforest is threatened and governments are unable to provide an accurate answer to the threat that the Basin is facing.
The Congo Basin is the second largest tropical rainforest in the world after the Amazonian forests, covering 228 million ha (FAO 2011) which, represents approximately 20% of the world's remaining tropical forest (Nkem et al. 2012:514). These forests cover about 60% of the total land area of six countries of the central African region: Cameroon, Central African Republic, Gabon, Equatorial Guinea, Republic of Congo and the Democratic Republic of Congo (DRC) (Nkem et al. 2012:514). The Congo Basin forest covers about 30 million forest dwelling indigenous people, representing over 150 ethnic groups that are concentrated around forest margins (CBFP 2006). The cultural isolation of the indigenous communities poses a challenge of national integration. They are excluded in decisions concerning national development and their communities are most vulnerable to the global challenges of climate change (Nkem et al. 2012:514).
The exclusion of local communities remains an error, for these communities are people of the forest. Their livelihoods depend on the forest, they should be involved in all decisions concerning their lives. Their lives are linked to the forest like the human body is linked to the soul. Ignoring this fact may result in the loss of lives.
Lack of information represents a huge gap regarding the climate change threat in sub-Saharan Africa. The IPCC states that by 2020, about 75 million Africans will experience water stress, also agricultural production of cropland could drop by as much as 50% (Mathews 2017). Another aspect that the African continent will face because of climate change is the migration phenomenon. Streamleau (2016) points out that: If Trump forsakes support for the 2015 Paris climate Accord, endorsed by 193 members of the United Nations (UN), as well as Obama's bilateral climate agreement with China, the resultant rise of global warming and extreme weather events will wreak havoc throughout Africa. Global social media will amplify the human dramas and dangers of forced migrations, viral epidemics and related deadly conflicts as credible evidence of global warming's impact continue to accumulate. (n.p.) American researchers demonstrate that Zimbabwe is beginning to experience the effects of climate change, especially rainfall variability and extreme events (Brown et al. 2012:ii). The effects of global warming are expected to rending land marginal for agriculture, which poses a major threat to the economy and the livelihoods of the poor because of Zimbabwe's heavy dependence on rain-fed agriculture and climate sensitive resources (Brown et al. 2012:ii).
Climate change represents a huge danger to the environment. Therefore, if nothing is done to end the negative impact of climate change, the African continent will probably assist in the loss of its biodiversity. To secure lives and the future of this continent, the conservation of biodiversity should be implemented.
Biodiversity conservation
In Pearson's view, conservation is for people and nature. Throughout human history the conservation of nature is important because of the services provided for their benefit and because of intrinsic values of nature. For this reason, our immediate environment should be well managed.
Consequently, the management of protected areas and biodiversity conservation are achieved through the protection of ecosystem processes (Hopkins et al. 2015:526). From this perspective, we can say that biodiversity conservation means the protection or preservation of nature. If nature is well managed, mankind will be in peace.
One of the most important goals of biodiversity exploration is to help conserve the vast diversity of languages, cultures, peoples and other organisms that inhabit this Earth (Moran, King & Carlson 2001:520-521). The management of nature requires some techniques. According to Sonwa et al. (2005) biotechnology can contribute to the management and conservation of forest resources in the Congo Basin, for example. It is therefore a complementary tool to the traditional management programme, not a substitute. There is a pressing need to support human and material capacitybuilding linked to the application of biotechnology for forestry resource management in the Basin (Sonwa et al. 2005:62).
According to Paterson (2006), the ethical challenge is to find a reason for why nature should be protected from human actions. It is the conservationist's responsibility to prove that such value exists. Mankind and nature are seen as being in profound conflict with each other, and if we wish to describe the mutual relations that exist between human beings and the environment in these terms we would say that the living self depends upon the environment for its existence (Paterson 2006:149). Therefore, mankind depends on the workings of the environment or natural ecological conditions for their growth and development. And, conversely, as indicated by the statement above, 'without life there is no environment', the environment must wait for the activities of human beings in order to take on a particular shape or undergo changes. Mankind thus plays a key role in the creation of a particular environment, and must bear the responsibility for such creation (Paterson 2006:149).
As mentioned above, the conservation of nature needs some techniques; one of the techniques is the implementation of solar energy by mankind. Solar energy using direct sunlight is potentially the most powerful renewable energy source for electricity and heat. Researchers claim that renewable energy sources will provide up to 35% of the global energy supply and nearly half of the electricity production by 2050 (Destouni & Frank 2010:19). To conserve nature or the environment, mankind has the responsibility to reduce, reuse and recycle items where possible. Palliser (2011) states that: An effort to reduce, reuse, and recycle and considering your overall environmental footprint in your everyday actions will reduce waste, prevent pollution, use less resources, save money, and work toward a cleaner, healthier Earth for the next generation. (p. 17) The conservation of nature also implies the restoration of nature. This process has two aspects, which is mitigation and adaptation.
Nature restoration: Mitigation and adaptation
According to IPCC, 'mitigation is an anthropogenic intervention to reduce the anthropogenic forcing of the climate system: it includes strategies to reduce greenhouse gas sources and emissions and enhancing greenhouse gas sinks' (Hopkins et al. 2015:502;IPCC 2007:878). This can be defined as a human intervention to reduce the sources of greenhouse gases (IPCC 2014). Protected areas can help mitigate climate change by storing carbon and capturing carbon by sequestering carbon dioxide from the atmosphere in natural ecosystems. Adaptation is the process of adjustment to actual climate and its effects. In human systems, adaptation seeks to moderate harm or exploit beneficial opportunities (IPCC 2014:1) (Hopkins et al. 2015:502).
The researchers have found that to limit the impacts of climate change on economies, countries should mitigate emissions or adapt to climate change consequences (Shalizi & Lecocq 2010:298). Mitigation consists of reducing emissions or removing greenhouse gas (GHG) from the atmosphere at the beginning of the chain to minimise climate change. By contrast, adaptation consists of responding to climate change impacts at the end of the chain. For example, shifting from coal-to gasfired power plants, developing renewable energy, or reducing deforestation and associated emissions of carbon dioxide are mitigation actions. (Shalizi & Lecocq 2010:298-299).
Today, nature restoration has become a competing issue for the protection of nature. Restorationists claim that preserving nature won't save it; instead, we must restore nature if we need it. Among other factors, they point to damage to nature caused by global climate change as indicating the necessity of restoration for nature protection (Hettinger 2012:27).
Restorationists reject the conception of humanity as separate from nature and argue that restoration is a virtuous way for humans to be part of nature. In their view, restoration is relevant for nature and for human development (Hettinger 2012:27-28 Africa is one of the regions that remains vulnerable to climate change. Global warming because of increased atmospheric concentrations of greenhouse gases is inevitable. Therefore, it is urgent that policy makers in regions such as sub-Saharan Africa begin to consider what measures they should take to adapt to the consequences of climate change (Smith & Lenhart 1996:193). For instance, Smith and Lenhart suggest that the planning and management along water shed and ecosystem lines reduce the institutional fragmentation in the management of natural areas and focus on protecting a variety of species and natural systems. The impacts of climate change are difficult to predict; the preservation of a variety of species in a healthy ecosystem may be the most effective way to protect those species that will be able to adapt to climate change (Smith & Lenhart 1996:198).
Climate change is intensifying problems and creating new risks, particularly in Africa, where there is poverty and dependence on the natural environment. There is a growing need for proactive adaptation to climate change risks (Ziervogel & Zermoglio 2009:133). Therefore, society needs researchers to find real solutions to the risk that communities are facing. However, research must be supported within institutions if they are going to keep their role in the development of knowledge (Bardsley 2015:45). A good example of the above statement is the case of South Africa. To address the issue of climate change (Odeku & Meyer 2010): South Africa has establishing new focal points within government, developing partnerships with other governments or the private sector, or launching pilot projects, have contributed immensely to reducing GHG emissions in South Africa; the government is persistent in its quest to continue in this direction in the near future ... This is achieved by putting in place more stringent policies or by implementing new adaptation and mitigation measures. (p.183) Another technique to protect the forest is in the Congo Basin in Central Africa. Tropical forests in these areas are vulnerable to climate change representing a risk for indigenous peoples and forest-dependent communities. Therefore, mechanisms to conserve the forest, such as deforestation and forest degradation (REDD), could assist in the mitigation of climate change, reduce vulnerability and enable people to adapt (Peach Brown et al. 2014:759). Although researchers pointed out that adaptive capacity is currently low, it could be enhanced with further development of institutional linkages and increased coordination of multilevel responses across all institutions and with local people. It is relevant to build networks with indigenous people at the local level who can contribute knowledge that will build adaptive capacity (Peach Brown et al. 2014:759).
In this regard, the researchers have a different point of view, truly REDD has found mechanisms to conserve the forest in the Congo Basin, the situation is critical even worse in the DR Congo because of political context. Greenpeace Africa is denouncing the fact that the Congolese government is giving access to multinational companies to cut down the forest in the Basin. A petition was released in order to stop the Congolese government's decision (Greenpeace Africa 2017).
To conserve the forest in the Congo Basin, political stability and strong institutions are required. However, the researcher noticed that in countries such as South Africa, where institutions are strong, the implementation of mitigation and adaptation measures can be successful.
The vulnerability to climate change in Africa is the result of increasing temperatures. Sub-Saharan countries depend on natural resources, but there is a low degree of adaptive capacity. The Congo Basin forest of Central Africa has also become a focus for REDD, because of its carbon reserves which are of global importance for regulating greenhouse gas emissions. To achieve progress, particularly in DRC and Central African Republic (CAR), increased efforts need to be made to establish a context of political stability, security and good governance (Peach Brown et al. 2014:767).
Today, the majority of nations in the world are united in the view that greenhouse-gas emissions should be reduced. Only the United States and Australia, of all the industrialised industrialised nations, have said that they are not prepared to commit themselves to a binding treaty that will achieve this goal (Singer 2010:197-198). The reduction of GHG remains the right thing to do and for the time being, it is the only way of solving the problem of climate change. Without mitigation and adaptation to climate change, the planet will not be tranquil for mankind's survival.
Jamieson states that mitigating climate change by reducing GHG emissions is important for many reasons. Firstly, slowing down the rate of change allows humans and the rest of the biosphere time to adapt, and reduces the treat of catastrophic surprises. Secondly, the right mitigation allows those who have done the most to produce climate change to be responsible for their actions (Jamieson 2010:271).
Ethical virtues: Justice and equity
The impact of climate change will be felt by future generations, thus a theory of global environmental justice must provide guidance on what duties to future generations those living at present have (Caney 2010:123). Moreover, climate change requires an analysis of moral relevance of decisions taken by previous generations. The important question is who should be responsible for dealing with the negative impact that comes from earlier generations. In this regard Caney suggests that a 'theory of justice that is to apply to global climate change must address the question of how the intergenerational dimensions of the issue make a morally relevant difference' (Caney 2010:124). In other words, the key problem is that 'the polluter should pay' or the one who produced the harm should pay. This principle is also affirmed in a number of international legal agreements. The Organisation for Economic Cooperation and Development (OECD), for example, recommended the adoption of the polluter pays' principle (PPP) in Council Recommendations https://theologiaviatorum.org Open Access of 26 May 1972, and 14 November 1974(Caney 2010. Therefore, we need to know, 'Who is the polluter?' and what kind of entities are the polluters? Are they individuals, state? Firstly, as we know, individuals use electricity for heating, cooking, lighting, televisions, computers and driving cars. This is to say that individuals are responsible for carbon dioxide emissions. Should we say that individuals should pay? If so, we should say that each individual should pay his or her share. Secondly, it might be argued that the causes of greenhouse gas emissions are economic corporations that consume vast amounts of fossil fuels and allow deforestation. Thirdly, many commentators argue that states should cut back on GHG emissions and they are the primary cause of global climate change (Caney 2010:126). One problem with applying the 'polluter pays' principle to climate change is that much of the damage to the climate was caused by the policies of earlier generations. It is, for example, widely recognised that there have been high levels of carbon dioxide emissions for the last 200 years, dating back to the industrial revolution in Western Europe. This is a difficult problem for the 'polluter pays' principle: who pays when the polluter is no longer alive? And the proposal made by some researchers that the industrial economies of the First World should pay seems unfair, for it does not make the actual polluter pay (Caney 2010:127).
In this regard, the researcher points out that, even carbon dioxide emissions were in the atmosphere since the industrial revolution, the actual polluters inherited some power stations from their ancestors in Western Europe and they are responsible for the global warming that is affected the planet today. Moreover, we cannot say that those who caused the harm are no longer alive. The problem now is to slow GHG emissions in order to adapt to the climate change. Today, those who are harming the ... world are still ... alive and they are responsible for climate change. To protect the planet, we should slow down the greenhouse gas emissions. Unfortunately, some of the developed countries refused to ratify the resolutions of the Paris agreement (reduction of greenhouse gases to 2 °C).
Climate change is widely recognised as a global problem affecting the lives and well-being of millions of people, the stability of ecosystems and the existence of many natural species. Justice involves moral considerations regarding relationships between people or between people mediated by institutions and policies, and therefore this is the case with global justice as well. There are important moral questions regarding the effects of climate change on ecosystems, biodiversity, and species (Moellendorf 2012:131). Environmental justice recognises the integrity of local communities and their ability to subsist in the face of the consequences of climate change. Secondly, it offers normative legal frameworks for enabling a broader recognition of ecological values, as well as for linking these to functional ends. In these latter respects, environmental justice offers a means for encouraging wider engagement and participation within civil society (Stallworthy 2009:74).
By quoting Schlosberg, Fischer suggests that climate justice is used by different actors seeking to characterise their position as the equitable one. However, it is not only about the distribution of environmental goods between states but also about how such goods continue to be distributed at national and local levels under conditions of climatic change, as well as the importance of recognition and participation (Fisher 2015:73).
Climate justice work has focused on distributional aspects and the relationships between nations. Opening up scale as an object of enquiry allows climate justice to be considered as an ideal that has multiple sites both for injustices and for solutions towards more just livelihoods for the climate vulnerable poor. Marginalised communities experience a range of problems linked to development, environmental degradation and others. This requires an analysis and a policy approach that goes beyond a distributional focus (Fisher 2015:80).
Climate change sustainability
Sustainable development needs to become climate-sensitive. Efforts to reconcile economic development, equity and environmental protection need to be incorporated into climate change studies. In this sense, without an effort to integrate climate change into sustainable development, the effects of the former may paralyse the aspirations of the latter (Matthew & Hammill 2009:1127. Researchers point out that education for sustainable development (ESD) has two objectives that were originally diametrically opposed: environmental education and education to cooperate with the Third World. The environment has always been a topic in geography instruction, yet the emphasis on preservation of nature did not really develop until the 1970s in response to public discussions of issues such as those triggered by the Limits of Growth and other influential works about environmental concerns. Today, global aspects of development are frequently covered by the term global learning. The focus of environmental education was primarily on the preservation of nature (Böhn & Petersen 2007:141). The challenge against climate change will be successful at all levels: politics, power, economics, behavioural psychology and ideology, but politicians should play a key role. We will make them a reality if we create a new politics of climate change that persuades politicians to act. The vision and determination of leaders from outside the established structures of power and wealth were the driving force behind other successes. The nature of public mobilisation is different today, but public pressure for action can be a force of change. It will require a vast investment of leadership, imagination and money to make this a reality. But there is no other pathway to success (Hale 2010:273).
Conclusion
The article highlighted the climate change threat that sub-Saharan countries are facing as a result of human activities. It https://theologiaviatorum.org Open Access stated that the African continent is the continent that is most affected by the anthropogenic impact of climate change. However, to protect our own life and nature, the researcher recommends the following aspects: ethical norms such as justice and equity; mitigation and adaptation measures; and some mechanisms like the implementation of solar energy and renewable energy to prevent future generation against the impact of greenhouse gas effects. To achieve a sustainable development, the collaboration of everyone is relevant, especially of the politicians and economist. | 5,867.4 | 2020-10-21T00:00:00.000 | [
"Environmental Science",
"Philosophy"
] |
ADVANCING SLEEP MEDICINE: A COMPREHENSIVE REVIEW OF PORTABLE AND CONTINUOUS MONITORING DEVICES FOR ENHANCED DIAGNOSIS AND MANAGEMENT OF SLEEP DISORDERS
: Sleep disorders, particularly obstructive sleep apnea (OSA), have become increasingly prevalent, adversely affecting millions worldwide. Traditional diagnostic methods rely on in-laboratory polysomnography (PSG), which presents challenges such as high costs and accessibility. This review explores the evolution of portable and continuous monitoring devices as alternatives, emphasizing their benefits in diagnosis, patient comfort, and cost-effectiveness. Additionally, we discuss advancements in technology and their implications for personalized treatment approaches. The integration of these devices into clinical practice has improved access to diagnosis and management of sleep disorders, paving the way for future innovations in sleep medicine.
INTRODUCTION
Sleep disorders have become increasingly prevalent in modern society, affecting millions of individuals worldwide and significantly impacting their quality of life, health, and overall well-being (1).Among these disorders, obstructive sleep apnea (OSA) stands out as one of the most common and potentially serious conditions, characterized by recurrent episodes of upper airway collapse during sleep (2).Diagnosing and managing sleep disorders, particularly OSA, have traditionally relied on inlaboratory polysomnography (PSG), considered the gold standard for sleep assessment (3).However, the limitations of PSG, including its cost, inconvenience, and limited availability, have led to the development and adoption of portable and continuous monitoring devices in sleep medicine (4). 1.
Focused on portable or continuous monitoring devices for sleep assessment 2.
Addressed the diagnosis or management of sleep disorders, particularly OSA
3.
Provided data on device accuracy, efficacy, or patient outcomes 4.
Discussed technological advancements or future directions in sleep monitoring
The selected articles were critically appraised for their methodological quality, relevance to the review objectives, and potential biases.The information extracted from these studies was synthesized to provide a comprehensive overview of the current state of portable and continuous monitoring devices in sleep medicine, their impact on clinical practice, and future perspectives.
Evolution of Portable Sleep Monitoring Devices
The landscape of sleep medicine has undergone a profound transformation with the introduction and evolution of portable sleep monitoring devices.These technologies have not only emerged as viable alternatives to traditional in-laboratory polysomnography (PSG) for diagnosing and managing sleep disorders, particularly obstructive sleep apnea (OSA), but also opened new frontiers of possibility and innovation.• Airflow (using nasal pressure transducers or thermistors) • Respiratory effort (using chest and abdominal belts) • Oxygen saturation (using pulse oximetry) •
Body position
• Heart rate and ECG
Snoring intensity
Some advanced portable monitors also incorporate EEG sensors for sleep staging, although this remains a challenge for many home-based devices (9).
Accuracy and Reliability
Numerous studies have examined the accuracy and reliability of portable sleep monitoring devices over the past two decades.A systematic review and meta-analysis by El Shayeb et al. (10) found that Type III portable monitors demonstrated good diagnostic accuracy for moderate to severe OSA, with pooled sensitivity and specificity of 0.93 and 0.92, respectively, compared to in-laboratory PSG.
However, it is essential to note that the accuracy of these devices can vary depending on the specific model, the population being studied, and the severity of the sleep disorder.For instance, Masa et al. (11) reported that portable monitors tend to underestimate the apneahypopnea index (AHI) in patients with mild OSA and may miss cases of upper airway resistance syndrome.
Despite these limitations, the consensus in the sleep medicine community is that portable monitoring devices are sufficiently accurate for diagnosing OSA in patients with a high pre-test probability of the disorder and without significant comorbidities (3).
Patient Acceptance and Comfort
One key advantage of portable sleep monitoring devices is their potential to improve patient comfort and acceptance of sleep studies.Traditional in-laboratory PSG can be intimidating and uncomfortable for many patients, leading to poor sleep quality and potentially affecting the accuracy of the results (4).
Several studies have demonstrated high levels of patient satisfaction with home sleep testing.For example, Gagnadoux et al. (12) found that 98% of patients preferred home-based testing over in-laboratory PSG, citing comfort, convenience, and less disruption to their regular sleep routine.
Cost-effectiveness
The cost-effectiveness of portable sleep monitoring devices has been a significant driver of their adoption in clinical practice.A comprehensive cost analysis by Kim et al. (13) found that home sleep apnea testing (HSAT) was associated with lower costs than in-laboratory PSG, with potential savings of up to 25% per patient diagnosis.
However, it is crucial to consider these devices' long-term cost-effectiveness, factors such as the need for repeat testing, missed diagnoses, and treatment outcomes.Pietzsch et al. (14) conducted a decision-analytic model comparing HSAT to PSG.They found that while HSAT was less expensive in the short term, the long-term cost-effectiveness depended on the device's sensitivity and specificity.
Implantable Devices
Recent advancements in medical technology have led to the development of implantable devices for continuous sleep monitoring.One notable example is the CardioMEMS HF System, an implantable pulmonary artery pressure sensor that can provide insights into sleep-disordered breathing in patients with heart failure (18).
While still in the early stages of development and adoption, implantable devices hold promise for providing continuous, long-term data on sleep-related physiological parameters in high-risk populations.
Smart Home Technologies
The "smart bedroom" concept has emerged as a potential solution for non-invasive, Moreover, as sleep medicine becomes increasingly technology-driven, it is crucial to maintain a patient-centered approach.These advancements should aim to improve patient outcomes, quality of life, and overall health rather than simply accumulating more data.
In conclusion, portable and continuous monitoring devices have ushered in a new era in sleep medicine, offering unprecedented opportunities for understanding and addressing sleep disorders.As these technologies continue to evolve, their thoughtful integration into clinical practice and research can significantly improve our ability to diagnose, treat, and prevent sleep disorders, ultimately contributing to better health and well-being for individuals and populations.
1. 1
Historical ContextThe development of portable sleep monitoring devices has a rich historical context that can be traced back to the late 1980s and early 1990s.It was during this period that researchers and clinicians first recognized the need for more accessible and cost-effective methods of diagnosing sleep disorders, leading to the creation of the first generation of these devices.These early devices were primarily designed to detect and record respiratory events during sleep, focusing on parameters such as airflow, respiratory effort, and oxygen saturation.As technology advanced, portable monitors became more sophisticated, incorporating additional sensors and measurement capabilities.The American Academy of Sleep Medicine (AASM) has classified these devices into four types based on their complexity and the number of channels recorded (3): • Type I: Full attended PSG (≥ 7 channels) performed in a laboratory setting • Type II: Full unattended PSG (≥ 7 channels) • Type III: Limited channel devices (4-7 channels) • Type IV: 1 or 2 channels, typically oxygen saturation or airflow 1.2 Current State of Portable Sleep Monitoring Modern portable sleep monitoring devices have evolved to offer a wide range of capabilities, often approaching the comprehensiveness of in-laboratory PSG.These devices typically include sensors for measuring:
•
While portable sleep monitoring devices have revolutionized the diagnosis of sleep disorders, continuous monitoring technologies have opened new possibilities for long-term management and treatment optimization.These technologies enable the collection of sleep-related data over extended periods, providing insights into sleep patterns, treatment efficacy, and potential health risks.2.1 Wearable Sleep Trackers Consumer-grade wearable devices, such as smartwatches and fitness trackers, have gained popularity for their ability to track sleep metrics.These devices typically use a combination of accelerometry and heart rate monitoring to estimate sleep duration, stages, and overall sleep quality (15).While the accuracy of these consumer devices for diagnosing sleep disorders remains limited, they have shown promise in raising awareness about sleep health and potentially identifying individuals who may benefit from further sleep evaluation.A systematic review by Haghayegh et al. (16) found that some consumer sleep trackers demonstrated reasonable accuracy in estimating total sleep time and wake after sleep onset compared to PSG. 2.2 Advanced Continuous Positive Airway Pressure (CPAP) Devices Modern CPAP devices used in the treatment of OSA have incorporated advanced monitoring capabilities that allow for continuous assessment of treatment efficacy and patient adherence.These devices can record and transmit data on: • Usage patterns (hours of use per night) Pressure settings and adjustments Integrating these monitoring features with telemedicine platforms has enabled remote monitoring and adjustment of CPAP therapy, potentially improving treatment outcomes and patient adherence (17).
3. 2 4 . 3
Personalized Treatment Approaches Continuous monitoring technologies have enabled more personalized approaches to sleep disorder management, particularly in the context of CPAP therapy for OSA.The ability to remotely monitor CPAP usage, efficacy, and adherence has allowed clinicians to make timely interventions and adjustments to treatment plans.Pépin et al.(17) conducted a randomized controlled trial comparing telemedicine-based CPAP management with standard care.They found that patients in the telemedicine group had significantly higher CPAP adherence rates and more significant improvements in quality of life than those receiving standard care.3.3Early Detection of Treatment FailureContinuous monitoring of CPAP therapy has also improved the ability to detect and address treatment failures early in the management process.Woehrle et al.(22) analyzed data from over 200,000 CPAP-treated OSA patients-early identification of adherence issues through telemonitoring led to more effective interventions and improved long-term outcomes.3.4Integration with Chronic Disease ManagementThe use of portable and continuous monitoring devices in sleep medicine has facilitated better integration of sleep health into the management of chronic diseases.For example, in patients with heart failure, the ability to monitor sleep-disordered breathing alongside other cardiac parameters has led to more comprehensive and effective treatment strategies(23).3.5 Research Applications Portable and continuous monitoring devices have also opened up new avenues for sleep research.Collecting large-scale, real-world data on sleep patterns and disorders has enabled researchers to gain insights into the epidemiology, natural history, and impact of sleep disorders on overall health.For instance, the Sleep Heart Health Study, a large-scale epidemiological study, utilized portable sleep monitoring devices to assess sleep-disordered breathing in a community-based cohort, providing valuable insights into OSA's prevalence and cardiovascular consequences (24).4.Technological Advancements and Future DirectionsThe portable and continuous sleep monitoring field is rapidly evolving, driven by advancements in sensor technology, artificial intelligence, and data analytics.Several emerging trends and technologies are poised to shape the future of sleep medicine:4.1 Miniaturization and Non-invasive SensorsOngoing efforts to miniaturize sensors and develop non-invasive monitoring techniques will likely result in more comfortable and user-friendly sleep monitoring devices.For example, researchers at the University of Massachusetts Amherst have developed a small adhesive patch that can monitor multiple physiological parameters related to sleep, including brain waves, eye movements, and muscle activity(25).4.2 Artificial Intelligence and Machine Learning Applying artificial intelligence (AI) and machine learning algorithms to sleep data analysis holds tremendous potential for improving the accuracy of sleep staging, event detection, and predictive modeling.Fiorillo et al. (26) demonstrated that deep learning algorithms could achieve high accuracy in sleep stage classification using single-channel EEG data, potentially simplifying home-based sleep monitoring.Integration with Other Health Monitoring Systems The future of sleep monitoring is increasingly interconnected with other health monitoring systems, creating a comprehensive view of an individual's overall well-being.By integrating sleep data with wearable devices, fitness trackers, and health apps, users can gain insights beyond sleep patterns, including physical activity, heart rate variability, and stress levels.This holistic approach allows for personalized health recommendations, enabling users to optimize their sleep hygiene based on real-time data.For instance, patterns in sleep quality may trigger alerts for potential health issues, prompting proactive measures.Additionally, device interoperability facilitates seamless data sharing, empowering healthcare providers to offer tailored interventions.As artificial intelligence and machine learning advance, predictive analytics will further enhance sleep monitoring by identifying potential sleep disorders before they become significant problems.This interconnected ecosystem fosters better sleep health and contributes to overall physical and mental wellness, paving the way for a more integrated and proactive approach to healthcare.DISCUSSION The rapid evolution and widespread adoption of portable and continuous monitoring devices in sleep medicine have undoubtedly transformed the sleep disorder diagnosis and management landscape.These technologies have addressed many limitations associated with traditional in-laboratory polysomnography, offering improved accessibility, costeffectiveness, and patient comfort.However, as with any emerging technology, there are both opportunities and challenges that warrant careful consideration.One of the primary advantages of portable sleep monitoring devices is their potential to democratize access to sleep disorder diagnosis.The ability to conduct sleep studies in the home environment has reduced barriers to diagnosis, particularly for patients in rural or underserved areas where access to sleep laboratories may be limited.This increased accessibility is crucial given the high prevalence of undiagnosed sleep disorders and their significant impact on public health.Moreover, the convenience and comfort of home-based testing may lead to more accurate representations of patients' typical sleep patterns.The unfamiliar environment of a sleep laboratory can induce the "first-night effect," potentially skewing the results of a singlenight study.Portable devices allow multiple nights of testing in the patient's natural sleep environment, potentially providing a more comprehensive assessment of sleep patterns and disorders.However, it is essential to acknowledge the limitations of portable monitoring devices.While their accuracy has improved significantly over the years, they may still underestimate the severity of sleep-disordered breathing, particularly in patients with mild OSA or complex sleep disorders.Additionally, the lack of direct observation during home testing means that technical issues or patient compliance problems may go unnoticed, potentially leading to inconclusive results or the need for repeat testing.The integration of continuous monitoring technologies, particularly in CPAP therapy for OSA, has revolutionized the management of sleep disorders.The ability to remotely monitor treatment adherence and efficacy has enabled more proactive and personalized approaches to patient care.This is particularly important given the historically poor adherence rates associated with CPAP therapy.Telemonitoring and automated feedback systems have shown promise in improving CPAP adherence and patient outcomes.However, implementing these technologies raises essential questions about data privacy, security, and the potential for over-medicalization of sleep.The advent of consumer-grade wearable sleep trackers has brought both opportunities and challenges to sleep medicine.On the one hand, these devices have increased public awareness of sleep health and may serve as a valuable tool for patient engagement and selfmonitoring.They have the potential to identify individuals who may benefit from further sleep evaluation and to provide longitudinal data on sleep patterns that could inform clinical decision-making.On the other hand, the accuracy and reliability of these devices for diagnosing sleep disorders remain limited, and there is a risk of overreliance on potentially inaccurate data.Clinicians must be prepared to educate patients on consumer sleep trackers' limitations and interpret this data in the context of a comprehensive sleep evaluation.Integrating artificial intelligence and machine learning algorithms into sleep monitoring technologies represents a promising frontier in sleep medicine.These techniques can improve the accuracy of sleep staging and event detection, potentially rivaling the performance of human scorers.Moreover, AI-driven predictive models could identify individuals at risk for developing sleep disorders or associated health complications early, allowing for more proactive interventions.However, developing and validating these algorithms require large, diverse datasets to ensure their generalizability across different patient populations.As we look to the future of sleep medicine, the concept of "precision sleep health" is emerging as a guiding principle.This approach aims to tailor sleep assessments and interventions to individual patients based on their unique physiological, genetic, and environmental factors.Portable and continuous monitoring devices will be crucial in realizing this vision by providing detailed, longitudinal data on individual sleep patterns and their relationship to other health parameters.The potential integration of sleep monitoring with other health monitoring systems (e.g., continuous glucose monitors and activity trackers) offers exciting possibilities for understanding the complex interplay between sleep and overall health.For instance, combining sleep data with information on physical activity, diet, and stress levels could provide valuable insights into the bidirectional relationships between sleep and other health behaviors.This holistic approach to health monitoring aligns with the growing recognition of sleep as a fundamental pillar of health, alongside nutrition and exercise.However, as we move towards more comprehensive and continuous health monitoring, it is crucial to consider the ethical implications and potential unintended consequences.The constant monitoring of physiological parameters, including sleep, raises concerns about privacy, data ownership, and the potential for increased anxiety or sleep-related performance pressure among patients.Striking a balance between the benefits of continuous monitoring and preserving individuals' autonomy and well-being will be a critical challenge for the field.Developing closed-loop systems for sleep disorder management represents an exciting frontier in sleep medicine.These systems, which can automatically adjust treatment parameters based on real-time monitoring data, can potentially optimize therapy and improve patient outcomes.For example, auto-adjusting CPAP devices that can modify pressure settings based on ongoing assessment of upper airway patency and sleep stage could provide more personalized and effective treatment for OSA.However, implementing such systems will require careful validation to ensure their safety and efficacy across diverse patient populations.As portable and continuous monitoring devices become increasingly integrated into sleep medicine practice, standardization and quality control measures are needed to ensure the reliability and comparability of data across different devices and settings.The development of consensus guidelines for using and interpreting these technologies will be crucial for their effective implementation in clinical practice and research.Additionally, the growing reliance on technology in sleep medicine necessitates a shift in the training and education of sleep medicine professionals.Clinicians will need to develop expertise in interpreting data from a wide range of monitoring devices, understanding their limitations and potential biases, and integrating this information into clinical decisionmaking.This may require updates to sleep medicine curricula, and the development of continuing education programs focused on emerging technologies.CONCLUSION Integrating portable and continuous monitoring devices into sleep medicine has undoubtedly revolutionized the field, offering new possibilities for diagnosing and managing sleep disorders.These technologies have improved access to sleep assessments, enabled more personalized treatment approaches, and provided valuable insights into the complex relationships between sleep and overall health.As we look to the future, the continued advancement of these technologies, driven by innovations in sensor design, artificial intelligence, and data analytics, promises to transform sleep medicine further.The vision of precision sleep health, where interventions are tailored to individual patients based on comprehensive, longitudinal data, is becoming increasingly achievable.However, realizing these technologies' full potential will require addressing several key challenges.These include ensuring the accuracy and reliability of portable devices across diverse patient populations, developing standardized data collection and interpretation protocols, addressing privacy and ethical concerns associated with continuous monitoring, and effectively integrating these technologies into clinical workflows.
sleep monitoring.These systems typically incorporate a combination of environmental sensors (e.g., temperature, humidity, light levels) and non-contact (20)iological sensors (e.g., radio-frequency sensors, bed sensors) to assess sleep quality and patterns(19).For example, Hsu et al.(20)developed a non-contact, under-mattress sensor system that monitors heart rate, respiratory rate, and body movements during sleep.Such technologies offer the potential for long-term sleep monitoring without the need for wearable devices or adherence to specific measurement protocols.3.Clinical Applications and ImpactThe integration of portable and continuous monitoring devices into clinical practice has significantly impacted the diagnosis and management of sleep disorders, particularly OSA.3.1 Improved Access to DiagnosisOne of the most notable impacts of portable sleep monitoring devices is increased access to sleep disorder diagnosis.The American Academy of Sleep Medicine (AASM) now recommends HSAT as an acceptable alternative to PSG for diagnosing OSA in uncomplicated adult patients presenting with signs and symptoms of moderate to severe OSA (3).60% reduction in diagnostic wait times and a 40% increase in patients diagnosed with OSA. | 4,467.4 | 2024-08-22T00:00:00.000 | [
"Medicine",
"Engineering"
] |
Implementation of TOF-PET Systems on Advanced Reconfigurable Logic Devices
The ability to resolve the Time-Of-Flight (TOF) of the gamma particles resulting after the positron annihilation until their absorption by the detector material has a strong impact on the performance of the Positron Emission Tomography (PET) systems. This occurs because, by reducing the noise level, it becomes possible to also reduce the total amount of data required to reconstruct the medical image to a given quality degree. This furthermore translates into a reduction of the time required for the image acquisition or into a reduction of the radioactive dose employed. Additionally, the capability to resolve the TOF is critical for image recon‐ struction in situations where the detectors cannot be completely deployed around the point of interest [1].
Introduction
The ability to resolve the Time-Of-Flight (TOF) of the gamma particles resulting after the positron annihilation until their absorption by the detector material has a strong impact on the performance of the Positron Emission Tomography (PET) systems. This occurs because, by reducing the noise level, it becomes possible to also reduce the total amount of data required to reconstruct the medical image to a given quality degree. This furthermore translates into a reduction of the time required for the image acquisition or into a reduction of the radioactive dose employed. Additionally, the capability to resolve the TOF is critical for image reconstruction in situations where the detectors cannot be completely deployed around the point of interest [1].
In Figure 1 it is shown the improvement on the image quality as a function of the TOF resolution and the solid angle covered by the detectors. As it can be seen from the figure, the importance of the TOF-PET measurement is greater as the solid angle covered by the detectors becomes smaller. According to this, the TOF capability is essential to any PET system that cannot completely surround the patient, like it could be the case of specific-application PET systems developed for particular applications, as for instance the approach for nuclear cardiology depicted on Figure 2.
Current PET scanners are built around analog subsystems implemented with discrete circuits. The electronic advances have allowed replacing the analog circuits by digital equivalents. Some of the reasons are that digital circuits present higher throughput; digital circuits also increase self-test and diagnostic capability; they present higher reliability and they also present higher security of intellectual proprieties. In contrast to these advantages, uncertainties on the time determination appear due to the discretization and the rounding effect of the digital systems. Moreover, the complexity of the design tools is considerably higher [2]. PET systems contain trigger units responsible to identify true coincidences. These units are typically based on Complex Programmable Logic Device (CPLD) or Application Specific Integrated Circuit (ASIC) devices combined with Digital Signal Processors (DSPs).
On one hand, DSPs are designed to support high-performance, repetitive and numerically complex sequential tasks. They are specialized on execution of repetitive algorithms, which involve multiplication and accumulation operations. The execution of several operations with one instruction are the features that accelerate the performance in state of the art DSPs [3]. Such a performance strongly relies on pipelining, which increases the number of instructions that can be executed in a time unit. However, parallelism in DSP is not very extensive; DSP is limited in performance by the clock rate and the number of useful operations that can be performed at each clock cycle. For instance, the TMS320C6202 processor, a well-known DSP, has two multipliers and a 200 MHz clock, so it can achieve at most 400 10 6 multiplications per second, which is much less than a programmable logic device counterpart.
On the other hand, CPLDs are very simple reconfigurable logic devices, with a few tens of input channels and quite small logic units for data processing. They have gradually been replaced for more complex devices with higher amount of resources. For instance, ASICs present a better optimization of logic size and power management. For many high-volume designs the cost-per-gate for a given performance level is lower than that of high speed CPLDs or DSPs. However, the inherently fixed nature of ASICs limits their flexibility, and the long design cycle may not justify the cost for low-volume or prototype implementation, unless the design would be sufficiently general to adapt to many different applications. Moreover, the development of very high performance reconfigurable logic devices, as Field Programmable Gate Arrays (FPGAs), has allowed its successful application in a wide number of areas.
First FPGAs lacked the gate capacity to implement demanding DSP algorithms and did not have specific tools well enough for implementing DSP tasks. They were also perceived as being expensive and with a relatively poor power management. But these limitations are being overcome with the introduction of new DSP-oriented products from Altera and Xilinx, the two leading companies for FPGAs. High throughput and design flexibility have positioned FPGAs as a solid silicon solution over traditional DSP devices in high-performance signal processing applications. FPGAs can provide more raw data processing power than traditional DSP processors by using massive parallelism.
Since FPGAs can be hardware reconfigured, they offer a complete customization while implementing various DSP applications. All these features are, nowadays, easy to implement by means of a new generation of specific tools. FPGAs also have features that are critical to DSP applications, such as embedded memory, DSP blocks and embedded processors. Current FPGAs provide more than 96 embedded DSP blocks, delivering at least 384 multipliers operating at 420 MHz. This results on over 160 billion multiplications per second, a performance improvement of over 30 times what is provided by the fastest DSPs. This configuration leaves the programmable logic elements on the FPGAs available to implement additional signal processing functions and a system logic, including interfaces to high-speed chips and external memory interfaces such as DDR2 controllers. Using high bandwidth embedded memory, FPGAs can in certain cases suppress the need for external memory.
Summarizing, FPGAs present a high speed data transfer; fast data processing capabilities; the ability to handle simultaneously a huge number of electronic signals; and the possibility to reconfigure itself to adapt to the very wide range of applications without the need of modifying the hardware design. They additionally include hardware (Xilinx PowerPC) or software (Xilinx MicroBlaze) processor cores, depending on the model; they offer a huge storage capacity with dedicated RAM blocks and look-up table memories; and large logic capacity with tens of millions of system gates. All these features make FPGAs to be great candidates to replace CPLD or ASIC devices on PET trigger units.
Besides the advantages of PET systems based on FPGAs, recent advances in digital electronic design allows to use FPGAs for TOF determination with very high accuracy, less than 100 ps [4,5]. This timing resolution opens the door to the development of trigger units for PETs systems with TOF capabilities built on them at a very competitive cost. Moreover, the reconfiguration characteristics of these devices allow to easily modify the PET setup (number of channels, detector coincidence map, etc) and to adapt it to different environments or physical requirements. In this chapter the main considerations for the design of TOF-PET systems based on advanced reconfigurable logic devices will be presented.
In the first section, the main advantages of TOF-PET systems will be highlighted and a historical review of these systems will be presented. In the second section, the requirements on the scintillation crystals and detectors suitable for TOF-PET designs will be described. Details of the electronic TOF implementation on FPGAs will be provided in the third section. In the fourth section, the impact of the TOF information on the reconstruction algorithms will be discussed and, finally, the conclussions will be pointed out in the fifth section.
Historical perspective
In this section, a brief description of the evolution of TOF-PET scanners from its origin to nowadays is presented.
The idea of TOF information for PET was already suggested by Anger [6] and Brownell [7] in the 1960s. However, it was rejected since the available scintillators crystals, photo-sensors and electronics were not fast enough. It was considered again when the type of crystals like CsF or BaF 2 appeared in early 1980s. Several TOF-PET scanners were built at that time by leading groups as by CEA-LETI in Grenoble [8,9], by Ter-Pogossian's group at Washington University [10,11] and by Wong's group at University of Texas [12,13]. This first generation of TOF-PET devices achieved time resolutions ranging from 470 to 750 ps [14][15][16]. The decay time of these scintillator materials (CsF and BaF 2 ) was very short (see Table 1 below), but their low density, low photoelectric fraction and low light output resulted on a poor spatial resolution and sensitivity.
At the same time, Bismuth Germanate (Bi 4 Ge 3 O 12 or BGO) began to also be used for PET designs. This scintillator has much better characteristics for PET systems, as high detector efficiency due to its increase effective atomic number (Z). However, its long decay time made it hardly suitable for TOF-PET systems. It is also remarkable in 1980s that is the time span in which two major companies (i.e., General Electric and Computer Technology Imagery) entered into the PET industry and gave credence to the clinical application of PET, because prior to this time (the late 1980s) most PET applications had been research applications [17,18].
The development improvement of TOF-PET systems was stopped until the discovery in 1990s of new scintillators based on Cerium-doped Lutetium Orthosilicate (Lu 2 SiO 5 or LSO). LSO quickly revolutionized PET imaging systems because it excelled in three fundamental detector material parameters: high density, high effective Z and a relatively high light yield with a short decay time of around 40 ns, allowing very narrow coincidence windows. The short decay time (LSO decays 7.5 times faster than BGO) permitted to decreased patient scan times and, thus, supposed an improvement that made patients more comfortable during the procedure and from a clinical standpoint increased patient throughput. The increase in patient throughput made the procedure accessible to more patients and subsequently increased the testing revenue for hospitals and PET imaging centers. The short decay time also lowered the level of random noise in these scans [5]. In terms of resolution, systems based on LSO scintillators permitted a new generation of TOF-PET scanners with timing resolutions as small as 300 ps [19]. The decade of the 1990s, thus, is known as the decade in which the extended use of PET progressed and made strong in the clinical sector. As more and more members of the medical community became acquainted with the utility of PET and its present and future benefits, PET imaging became increasingly popular and was available in more hospitals, diagnostic clinics, mobile systems, and physician practices.
Recently, the discovery of new materials as Cerium-doped Lanthanum Bromide (LaBr 3 ) with shorter decay time (16 ns) and excellent energy resolution has led to the development of TOF-PET systems also reaching time resolutions of 420 ps, and it is expected to reduce this resolution to 315-330 ps [20]. LaBr 3 present the drawback of being hygroscopic and, thus, requiring a tedious manipulation and montage.
Finally, from a commercial point of view, only two TOF-PET scanners have been introduced in the market by Philips and Siemens. The Gemini TF PET-CT is commercialized by Philips since 2006, it uses LYSO scintillator crystals (similar to LSO but with slightly lower density) and achieves a time resolution of 585 ps [21]. Recently, there has been presented results for the Siemens TOF-PET scanner, called mMR, showing a time resolution of 550 ps [22].
Currently, in parallel with advances in scintillator materials, new fast and cost-effective photosensors are being developed. Silicon Photomultipliers (SiPMs) are at the forefront of this development. They are almost unaffected by magnetic fields [23], are very fast and have high gain. SiPMs aim to improve TOF resolution due to their fast timing [24]. Single-photo-electro timing resolutions close to 50 ps root-mean-square have been reported [25]. It is expected that a new generation of TOF-PET scanners based on fast scintillators and SiPMs would be able to achieve unprecedented time resolutions.
For additional information about the historical development of TOF-PET systems, excellent reviews can be found in the literature, for instance in [26][27][28].
Crystals and detectors for TOF-PET scanners
The capability of PET systems to return highly accurate TOF performances strongly depends on the read-out electronics but also on the detector block itself. In this section, the main considerations about this block namely the type of crystal and the photosensor, will be presented paying special attention to its timing properties.
Crystals for TOF-PET systems
PET devices containing scintillators crystals must be as denser as possible since they have to stop the photons of 511 keV energy produced in the positron-electron annihilation. Such crystals need to generate high amounts of scintillation light to be detected with the photosensors. The crystal light yield is very important since it directly relates with the energy resolution of the system but also with the spatial resolution and later with the timing performance. To increase the photon emission probability in the visible range during the relaxation process, most of the crystals are doped with small quantities of impurities which generate intermediate states of energy.
In order to obtain fast output signals from the scintillation light, it is also important to account for a decay time of such light as short as possible. Moreover, the emission light wavelength should match the sensitivity of the photo-sensor utilized for electronic conversion. NaI(Tl) has been one of the first types of crystals used for PET design. It generates significant amounts of scintillation light providing a high energy resolution and, thus, allowing to distinguish for instance photons of similar energies. One of the drawbacks when using this material has been its hygroscopic property, which requires using it in dry environments. In contrast to NaI, as stayed before, BGO crystal have been the most used scintillation crystal for PET applications, especially due to its high density, but with the lack of a good light yield and, therefore, time response.
GSO (Gadolinium-orthosilicate) has also been considered for PET designs although the light yield is also low compared to others. In this ranking, LSO (Lutetium oxyorthosilicate) appears to be good positioned offering a similar stopping power than BGO but also generating a high light yield compared to NaI. Nowadays, a LSO variant, commercially named LYSO is being widely used since its performances are very similar to LSO but at lower prices.
We will focus now in the decay time of the scintillation light since it is the dominant property in order to accurately achieve a TOF determination. As shortly introduced above, the scintillation light is described by a fast increase of the intensity followed by an exponential decrease of this emission. Here, it is called scintillation decay time to the one reached after the light pulse intensity reduces to 1/e of its maximum.
The time resolution is conditioned by the rise time, decay time and absolute light output. The rise time is negligible compared to the decay time, only the decay time and light output determine the intrinsic limits of the time resolution. In particular, faster decay times and higher light outputs reduce, i.e. improve, the time resolution. The shortest the decay time the lower the sensor dead time to process more events. The high initial rate suggests that LSO should return excellent timing properties. However, the timing properties of a scintillator depend on both the energy deposited in the crystal and the geometry of the scintillation crystal.
Photosensor, detectors capable of TOF and signal types
The photosensors are the next part of the puzzle in order to reach high time resolutions. Two main groups of photosensors are currently under use in PET technology namely Photomultiplier Tubes (PMT) and solid state photo-diodes.
PMTs use the external photo-electric effect. The scintillation photon enters the PMT through the crystal window, deposits its energy in the photocathode, and excites the electrons in the photocathode coating. The photoelectrons are accelerated and focused to the first anode with the help of an electric field. The photoelectrons are multiplied after impacting the first dynode, and this structure is sequentially repeated. A typical PMT gain is of about 10 6 from anode to last dynode. It is possible to increase the gain with the high voltage difference and the number of stages or dynode sequence.
Most scintillators emit in the 400 nm range, allowing the use of Borosilicate glass-windowed PMTs. Many of the PMTs that are used in commercial PET cameras have transit times that vary significantly across the face of the PMT. Such a time corresponds to the interval between the light pulse striking the photocathode and the pulse signal at the anode. The transit time inversely depends with the square root of the supplied voltage. However, concerning the time resolution of the PMTs, this is better defined as the mean transit time. TOF measurements with PET scanners based on PMTs require a transit time variation very small among the different PMTs used in the design but also across the different PADs (anodes) of each individual device.
The coupling of PMTs and scintillation crystals permit to recover the photon impact position.
In the case of multi-anode PMTs it is somehow easier to derive such an incidence position. The location of the interaction is achieved by measuring the light detected on each anode. This is referred as Anger-logic. In the following Semiconductor detectors and, in particular, Avalanche Photodiodes (APDs) have proven to be suitable photosensors for PET detectors since the mid-1990s. These compact and reliable silicon-based devices have successfully been used to replace bulky photomultiplier tubes in high-resolution PET systems. Since arrays of small dimensions crystals are most commonly used as the scintillation block, these crystal pixels may be used individually coupled to single small area APDs. These sensors are very thin and, because of the high internal electric field and the short transit distances of the charge carriers, they are quite immune to magnetic fields. This characteristic allows them to be placed inside a magnet and to operate quite normally. APDs have been tested in high magnetic fields of up to 7 or 9.4 T without showing any performance degradation [29].
Although APDs are compact and insensitive to magnetic fields, they present limitations for optimal PET performance. In particular, they can be hardly used for TOF measurements due to their slow response time. They also show low gains in the order of a few hundreds, and therefore, require sophisticated preamplifiers. These drawbacks seem to be overcome by the so-called Silicon Photomultipliers (SiPMs). Note that they are differently named depending on the manufacturer.
A SiPM consists of multiple tiny (currently up to about 20 microns side length) avalanche photo-diodes (so-called microcells) connected to a common electrode structure. When a reverse bias is applied to the SiPM at a voltage higher than the breakdown, each microcell operates in Geiger mode providing single photon counting capability. However, one photoelectron saturates the microcell limiting the linear response of the device as a function of the quantity of photoelectrons to about half the number of microcells. Similarly to APDs, they are compact, exhibit good photon detection efficiency (PDE) and do not need high voltage power supply. In advantage to APDs, they require simpler electronics and provide a high gain (10 5 -10 6 ).
Due to their excellent timing resolution (hundreds of picoseconds), SiPMs are currently considered as the best choice for future TOF-PET applications. Their insensitivity to magnetic fields makes them ideal for the development of hybrid PET-MRI scanners. Moreover, their costs are expected to diminish rapidly in the near future due to increasing competition (there is no patent for the main invention) and automated massive production.
Design and implementation of a TOF-PET system based on a FPGA
In this section, several electronic techniques for TOF measurement will be described, also introducing the concept of FPGAs. The main features of this technology will be identified and their impact on the total system performance will be discussed. Once the use of FPGAs has been justified, the multiple implementation techniques, advantages and benefits for TOF measurements will be exposed.
Electronic techniques enabling TOF calculation
There exist several techniques for electronically measuring the photons TOF. In a first approach, TOF systems were based on analog circuits, using extremely uniform current sources and converting the electrical charge accumulated by a capacitor into voltage values, proportional to the charging time, that later were digitalized [30], as illustrated in Figure 4. This technique presents several drawbacks, mainly related with scalability, design complexity and static power dissipation. Nowadays, most of TOF measurement devices are based on digital circuits using delay lines in different configurations. These devices use the propagation delay across the individual digital blocks to measure TOF [31][32][33], so they are able to measure it with a resolution lower than the system clock period. Figure 5 represents a digital delay line used to measure TOF. The digital TDCs (Time to Digital Converters) overcome the inconvenient of the analog approach and, if properly designed, they can even compensate the effect of temperature and/ or power supply fluctuations. However, most of them are built on ASICs, so they are expensive, have a reduced number of available channels, and their functionality is limited. Here, the development in recent years of very sophisticated reconfigurable logic devices opens the possibility to integrate digital TDCs on high performance FPGA.
FPGAs overview
FPGAs are pre-fabricated silicon devices that can be electrically programmed to carry out multiple digital functions. Unlike Microprocessors or Computers in which programming means change the incoming instructions to the device, programming an FPGA consist of change the internal logic of the device.
Historically, their strongest competitors in the market were the ASICs. They are designed for specific application using CAD (Computer-Aided Design) tools. Developing an ASIC takes much time but they have a great advantage in terms of recurring costs as very little material is wasted due to the fixed number of basic elements in the design. With an FPGA, a certain number of basic elements are always wasted, as these packages are standard. This means that the cost of an FPGA is often higher than that of a comparable ASIC. Although the recurring cost of an ASIC is quite low, its non-recurring cost is relatively high and often reaching into the millions. Since it is non-recurring though, its value per IC (Integrated Circuit) decreases with increased volume. If the cost of production in relation to the volume is analyzed, it will be find that going lower in production numbers, using FPGA actually becomes cheaper than using ASICs [34]. Furthermore, it is hardly possible to correct errors after fabrication.
In contrast to ASICs, FPGAs are configured after fabrication allowing the user for further reconfigurations. This is done with a hardware description language (HDL), which is compiled to a bit stream and downloaded to the FPGA. The disadvantages of FPGAs are that the same application needs more space on chip and the application runs faster on the ASIC counterpart. Due to the size reduction of the basic components, FPGAs were getting more powerful over the years. Herein, the development of ASICs was decreasing and becoming more expensive. Figure 5 shows the design flow of the two mentioned devices.
From Figure 6 it is easy to observe the highest complexity involved in an ASIC design as for instance: • Design for Testability (DFT) Insertion. This technique is used to check whether the manufacturing process has added defects to the chip. DFT insertion means incorporating an additional logic to improve the testability of the internal nodes of the design.
• Hand-off to foundry. The process takes several months due to the "personalized" design.
• Equivalency checking. A system design flow requires comparison between a Transaction Level Model (TLM) and its corresponding Resistor-Transistor Logic (RTL) specification.
• Verification of 2 nd and 3 rd order effects. This stage is not included in the FPGA design flow because is carried out by the manufacturer.
An FPGA design flow eliminates the complex and time-consuming floorplanning (design and interconnection of the internal blocks), place and route, timing analysis, and other stages of the ASIC design project since the design logic is already synthesized to be placed onto an already verified, characterized FPGA device. However, when needed, manufacturers provide the advanced floorplanning, hierarchical design, and timing tools to allow users to maximize the performance for the most demanding designs. Furthermore, FPGA technologies are considered very competitive due to the wide specification ranges. Each manufacturer provides FPGAs with different capabilities that adapt to the desired application. There are families for high performance applications, for high volume of production and even radiation tolerant families.
CPLDs are, in some cases, a good alternative to FPGA. They have a similar internal architecture to the FPGAs, as shown in the Figure 6. CPLDs are composed of digital blocks, which implement digital functions, analogous to the FPGA, IOBs (Input Output Block) and Interconnection Matrices. In general terms, CPLDs have less internal resources than FPGAs but they are able to achieve better speeds. However, when a considerable number of resources such as memory blocks and multipliers are required, FPGAs are still the best choice. In fact most of the current FPGAs incorporate Digital System Processing blocks, which have internal Multipliers. FPGAs have become more popular and, thus, CLPDs have experienced a noticeable decrease in its production, which gives FPGAs more guarantee of continuity. Therefore, FPGAs are increasingly applied to high performance embedded systems.
FPGA internal architecture
In the following, a basic description of the internal blocks of an FPGA is presented. Its basic structure is composed of three main blocks: • CLBs (Configurable Logic Blocks). Generic blocks, which contain digital logic for implementing specific functions. • IOBs. They are used to connect the FPGA to other systems of the whole application.
• Programmable Interconnect. Enables the communication between CLBs and IOBs.
Additionally to these basic blocks, FPGAs incorporate: • Distributed memory blocks that store the user-programmed configuration.
• Clock blocks that are intended to additional clock signal generation for using either in internal blocks or external purposes.
• Other blocks that manage the proper coexistence of all the resources.
FPGA design for TOF measurement
As commented above, there are several alternatives for implementing the TOF determination, many of them based on ASICs, that are expensive, hardly reconfigurable, and they need to be produced in high volumes to be cost-effective. However, reconfiguration capabilities of FPGAs and their low cost compared to other solutions have made them the ideal candidates for the development of complex electronic equipment, as PET systems [35]. Additionally, it is technically possible to use FPGAs to measure TOF with a very high time resolution [36], much better than the resolution of current commercial PET systems whose resolution is around 600 ps. Thus, the electronic device responsible for the TOF measurement must be able to distinguish events between time periods in the order of few-tens hundreds of picoseconds to be competitive enough in the market. In this subsection, the main considerations for TOF calculation using an FPGA will be presented.
Time to digital converter
TDC is a well-known technique traditionally used for TOF determination [37]. The TDC goal is to recognize events and to provide a digital representation of the time they occurred. There are many TDC implementation possibilities. Focusing in digital TDCs and leaving aside the analog TDC, the simplest is a high-frequency counter, which value is incremented at each clock cycle. When an event occurs, the accumulated amount of clock periods are stored and presented. The drawback of this approach is that the stored counter is a number of integer clock cycles and, therefore, the resolution is restricted to the clock system. Thus, in order to get accurate resolution, the use of a faster clock is required. Thus, the larger the frequency the more the signal integrity problems, translating into a complex system design. Moreover, the stability of the clock system becomes critical.
Interpolation circuits emerged as a necessity to measure events below the clock period. These circuits measure the time between a clock event and the event being measured. One of the problems is the TDC time required to perform a measurement, blocking new measurements for a certain period of time. One of the most implemented structures based on interpolating circuits is the Vernier Delay line.
Until recently, TDCs were ASIC implemented either by companies which launched the product to the market or by owners who wanted a specific design. Nowadays, the use of FPGAs aimed at this purpose is getting more popular [36][37][38]. Low cost, fast development cycle and commercial availability are some of the motivations of this fact. Other trade-offs of using FPGAs compared to ASICs have been amply discussed in previous sections. Sometimes a TDC is completely included within an FPGA but, depending on the application, some parts may be outside FPGA. Beyond the delay line, current TDCs contain many other elements. An example of a TDC block diagram is depicted in Figure 8. Figure 8 represents a basic scheme of a modern TDC. The most complex block corresponds to the delay line, which will be deeply discussed below. A simplified description of a TDC follows: • A calibration signal is initially selected for the system calibration. This is a necessary task to determine the individual delay of the elements from which the delay line is composed by. The raw counter (not yet in terms of time) is stored into the histogram memory.
• Each raw time element previously booked into the histogram memory is converted into a real time value and booked again into a look up table (LUT).
• The system is ready to receive an event through the "signal" connection, which is selected by the "select" signal.
• When the time event is greater than the clock period, a number of entire clock cycles must be stored, performed by the coarse counter block.
• Then, when an event occurs the "signal" is bypassed into the delay line. The encoder counts the number of elements reached by the signal and provides this number to the LUT, which convert this number to time and, after this value is combined with the coarse counter value, a final timestamp is generated.
System architecture
TDCs may incorporate more than one channel. In the block diagram previously described, a multiple channel TDC is referred. In this case, the proposed TDC channels will share the histogram memory block and coarse counter block. At the time to get the final timestamp, the coarse counter block will store each coarse time associated to each channel number. Analogously, the histogram memory block will store, after the initial calibration, the time delay of each tap for each channel.
The importance of channels lays in the possibility of group in one single device the TOF measurement of a complete PET system [35]. The outputs of the detectors placed on the PET ring system have to be fed into a trigger unit, which will be the responsible of data processing. When a signal coming from one detector is received, the system waits certain time with the purpose of receiving another signal coming from an opposite detector (or a defined set of them). The block CFD (Constraint Fraction Discriminator) is in charge to adapt the voltage values of the signals from the detector to those required by the FPGA, without disturbing the timing information. A TDC measures the time difference between the events coming from the two detectors in order to estimate the TOF. Data will be transferred to the co-processor unit (see below) to be further sent to the acquisition control unit. Figure 9 represents the mentioned architecture. The selected FPGA must account for enough resources to accommodate the required channels. Key resources that have to be considered are those that are going to be part of the delay line. Depending on the total amount of channels needed by the application, it will be mandatory focusing on the resources, which delay elements will be placed, and a proper FPGA selection.
Concerning the channel implementation in FPGA compared to other devices, FPGAs offer flexibility at the time of providing high number of channel inputs. These channels can be dynamically defined by software and enable/disable some of them if required. This means that those resources that are now free can be used for other purposes.
Delay line
Basically, a delay line is a set of interconnected elements whereby a signal is passed through. It is normally used to count the time between two or more events. Each delay element (also referred as tap or bins) has a propagation delay (τ) and a storage block (see Figure 5). At certain time instant, the incoming signal is stopped and the total amount of reached taps is counted.
Since the propagation delay of each element was previously measured, the time interval from the input signal arriving to the delay line until the signal is halted, can be determined.
It is very important to be taken into account that the total delay of the delay chain must be equal or greater than the clock period. Additionally, when high accuracy in TOF measurements is required (below 100 ps), any change on the propagation feature of the delay elements or the delay line path (path which join the bins) becomes critical. There are three major issues that threaten them [37,38]:
a. PVT
The propagation features of the delay elements are temperature and voltage dependent. This means that the variation of the temperature inside the device and the variations of the supplied voltage have to be controlled. In ASIC-based TDCs is possible compensating the delay variation through analog method, more exactly, generating a control voltage internal circuit ad-hoc. In FPGAs, analog calibration is not suitable and a digital compensation is adopted [39].
The two more popular approaches already proposed are: • Double registration. In this approach the total delay time of the delay line is designed to be longer than the system clock period. After a random time, the incoming signal is stored twice in order to take the average time value. This solution presents a fast time response but the drawback of this configuration is that does not provide a calibration of every bin independently since the average is taken when the bins have different width.
• Statistical. In this other approach the calibration process provides a compensated delay to each bin. The calibration process is, in many cases, automatically designed through a specific feedback. For instance, a certain component, which is also affected by PVT (Process Voltage and Temperature induced variations), is implemented and placed close to the delay line in order to resemble the temperature and voltage variations. This component might be, for example, a ring oscillator whose oscillator frequency is temperature and voltage dependent. Initially, the ring oscillator frequency is measured and stored as well as the initial time delay of each tap. Then, once the system has been calibrated, it remains continuously checking if the ring oscillator frequency has changed. If it has, the time values of each tap are interpolated according to the ring oscillator frequency differences.
b. Delay line placement.
A design tool often places delay elements of TDC automatically, what sometimes triggers imbalanced delays [37]. FPGAs dispose of repetitive structures commonly known as chain structures. FPGA designers place these sorted structures for general-purpose applications. The benefit is the short path connection between them what makes their use appropriate for TDC delay line implementation. Some of the different kinds of chain structures that the vendors include in many FPGAs are carry chain structures, sum-of-products chain, cascade chains, etc. Figure 10 depicts a deployed carry chain structure. Figure 10 shows an internal view of a commercial FPGA. Red blocks corresponds to the delay line, which in this case if composed of carry logic structures. This is one possible placement of many. Depending on the length of the delay line, it is possible to locate the carry chain in multiples areas as long as the region contains carry elements (they are not present in all FPGA blocks). In this case, the carry chain occupancy is almost 400 slices. Often it occurs that there is not space enough to accommodate all the carry chains in a single column and additional columns are required. This fact will make the delay line less uniform. Therefore, in some designs a possible placement restriction must be taken into account.
c. Differential non linearity (DNL).
The problem of the non-uniformity of tap delays is the greatest disadvantage of the FPGA delay line implementation. Its origins come from the internal way whereby the delay taps are connected, which in some cases is made by a CAD tool. Moreover, the discordances relate to the special features of some FPGAs. More specifically when the input signal passes across Logic Array Block boundaries and extra delays added cause ultra-wide bins [38]. An example of this effect is depicted in Figure 11. It is easy to appreciate the DNL (Differential Non Linearity of the delay bins) effect. This effect deteriorates the time resolution of the TOF measurement system but, fortunately, there exist some techniques to reduce this negative effect if required [38].
Co-processor
An important part of the system intended to measure the TOF is the co-processor. The goal of this component is to manage the information coming from the TDC and to provide with a timestamp to the next part of the system. Traditionally, it is not included in the trigger system but it as an extra module.
Trigger systems have currently become more complex, integrating more sub-systems in it.
With the advent of modern devices, co-processors have been integrated into the main part of this trigger system, namely, FPGAs or ASICs. Either in ASIC or last decade in FPGAs, the coprocessor was hardware integrated, what meant that certain resources were already used and there were no chance to make user-defined architecture. However, new generation FPGAs provide software-defined co-processors, which are liable to be dimensioned according to the application requirements. This relatively new feature has given FPGAs even more advantages and, thus, more relevance when it deals with TOF calculation systems.
Impact of TOF information on reconstruction algorithms
To finalize this chapter, we will it will be described how the algorithms currently used for image reconstruction are affected by the TOF information.
Conventional PET (or non TOF-PET) reconstruction uses TOF only to determine if two detected photons are in the same time coincidence Δt and therefore belong to the same positron annihilation event. Here, a positron annihilation event would be registered along the line at which the event occurred, but it is unable to identify which voxel is the source of the event, thus all the voxels along the path are suggested to have the same probability of emission.
However, in TOF-PETs, the faster detectors are able to measure the difference in the arrival time of the two gamma rays, providing better localization of the annihilation event along the line formed by each detector pair. In fact, the position is blurred by a time measurement uncertainty named "time resolution", the time resolution of a detector is defined as the minimum time interval between two subsequent photon events in order for these to be recorded as separate events and depends on several instrumental factors. The smaller time resolution Δt, the smaller error on the localization of the source Δx. In fact the FWHM of the probability function is the localization uncertainty Δx (FWHM) = cΔt/2. This results in an overall improvement in signal to noise ratio (SNR) of the reconstructed image. In particular, the SNR in an image including TOF information improves with decreasing time resolution Δt (or the corresponding spatial uncertainty Δx). Therefore, such an uncertainty is larger for bigger patients (being related to the effective diameter D). The TOF SNR is proportional to the non-TOF SNR, through the following relationship: Nowadays, the image reconstruction problem for fully 3D TOF-PET is challenging because of the large data sizes involved. Thus, it produces a high degree of redundancy in 3D TOF-PET data which can be exploited in multiple ways as reducing data storage and thereby accelerating image reconstruction, or to reject missing or inconsistent data. These unmeasured data samples can be caused either by defective detectors, or incomplete angular coverage of the patient due to special PET scanner architectures like it could be the case of a dedicated ring PET with an aperture aiming to allow for biopsy procedures.
Mathematically, redundancy is expressed by consistency conditions which can be visualized in terms of the 3D Fourier Transform and employed for compensation of missing data, using Fourier rebinning of PET data from TOF to non TOF. Thus, TOF-PET systems require less data to provide higher quality images, so the doses to the patient could be reduced. Moreover, redundancy of information can be used to overcome missing data either from defective detectors or to special scanner architectures.
Current TOF-PETs timing resolutions of about 550-600 ps do not directly lead to an improvement in the spatial resolution of the reconstructed image. It actually reduces noise propagation by localizing events along segments of each Line of Response (LOR) rather than spreading statistical noise across the full length of each LOR. At the ultimate limit, TOF-PET could potentially localize annihilation events to within a single image voxel, effectively measuring the activity distribution directly and eliminating the need for tomographic reconstruction. However, this would require a timing-resolution of approximately 10 ps to isolate events to within a 3-mm voxel. With the current TOF-PET devices, inclusion of TOF information provides a degree of improvement similar to that obtained with the Point Spread Function (PSF) model. Moreover, TOF information can lead to an artifact-free image reconstruction when the number of angular samplings is reduced. This fact is important if PET devices with limited angle coverage are considered. Partial ring PET devices can have advantages over full ring geometries in future dedicated PET systems designed for imaging specific organs. However, partial ring design leads to an incomplete sampling of the polar angles, producing artifacts in image reconstruction. Nevertheless, the number of angular views necessary for an artifact-free image reconstruction is reduced as TOF-PET timing resolution improves (i.e. the additional TOF information can recover some of the missing information and reduce or eliminate the artifacts). In this sense, with TOF information, the angular sampling requirements are reduced. [40] TOF-PET approaches put challenges in the field of image reconstruction algorithms. The first challenge is to make the reconstruction time clinically viable, as TOF-PET implies a nonnegligible increase on the image reconstruction computational cost. A variety of reconstruction methods already exist for TOF-PET data. These image reconstruction procedures can be divided in two groups: analytical and iterative algorithms. This division is normally made whatever the tomography technique is considered (computed tomography, Single Photon Emission Computed Tomography (SPECT), and PET).
Analytical methods
Analytical (i.e. Filtered Back Projection, FBP) reconstruction methods were the only reconstruction methods available at the beginning of TOF-PET development and were originally described in the 1980s for 2D data [41,42]. In an analytical TOF-PET approach, the image is reconstructed by using a one dimensional time-of-flight weight along the time-of-flight line [43]. In this reconstruction, the TOF response kernel k(l) is usually taken to be a Gaussian where l is a scalar variable [44]: whose spatial FWHM, Δx=(2σ2(4ln2))1/2, is related to the FWHM time resolution Δt as described above. The convolution of the function describing the "unknown" emitter distribution e(r) with the kernel function k(l) is directly related to the TOF projection data d(θ,r) as: where û is the unit vector in the projection direction at angle θ. It can be demonstrated [44] that the function describing the emitter in the frequency domain, E(ν), can be obtained from: where D(θ,ν) is the Fourier Transform (FT) of the projection data at angle θ, and , û). The CW reconstruction TOF filter has been shown to be optimal in terms of minimizing image noise variance when working with Poisson data from an infinite uniform source distribution [43], but could not be optimal in other situations. In the above discussion we have considered the 2D tomography problem and the continuous domain. These expressions can be discretized for practical implementation on real TOF-PET data. The 2D approach has been also extended to 3D data. Axial single-slice and Fourier rebinning approaches followed by 2D reconstruction have been described [45][46][47]. Moreover, techniques based on rebinning the TOF data into non-TOF arrays have been also developed [48].
Iterative Methods
Although analytical reconstruction methods are generally faster than the iterative ones, these last generate higher quality images, in terms of spatial resolution and image noise [49]. Iterative reconstruction methods such as the Ordered Subsets Expectation Maximization (OSEM) algorithm have to be modified in order to take into account TOF information. This is done by including a PSF along the LOR in the projector, with a width directly related to the time resolution of the scanner. Despite of the high computational cost of the iterative algorithms with respect to the analytical ones, current iterative reconstruction methods are the standard in clinical PET, and also appear to be the natural choice for TOF-PET in both present and future clinical TOF scanners [50]. Moreover, TOF-PET adds complexity to data organization and computation time to the reconstruction algorithm. If the reconstruction is sinogram based, TOF information adds a "4th" dimension to the 3D sinogram representation, changing data storage and dynamic memory requirements. In contrast to these drawbacks, if the reconstruction is list-mode based, the data are stored as a list of detected events [51]. However, 3D list-mode iterative TOF reconstruction allows for the modeling of all physical effects of the scanner system, thus retaining the resolutions of the data in the spatial and temporal domains without any binning approximation. In this sense, this approach is much more flexible and powerful than the sinogram approach at the cost of a computationally effort, being slower, since backand forward-projections are independently executed for each event of the list. In this case, the reconstruction time depends not only on the length of the list, i.e. the number of detected events, but also on the sizes of the spatial and TOF kernels. Fully 3D implementations of the TOF-OSEM algorithm from list-mode data have been described in [52,53].
In order to get image reconstruction times compatible with the daily clinic routine, 3D listmode TOF-OSEM algorithms use multiple (10)(11)(12)(13)(14)(15)(16)(17)(18)(19)(20) processors and non-optimized reconstruction parameter choices (e.g., stopping criteria determined by the reconstruction time rather than convergence and use of a truncated TOF kernel to speed up the forward-and backprojection steps) [54]. However, great effort has been put in optimizing timing requirements for TOF-PET iterative reconstruction algorithms. In reference [55] a new formulation for computing line projection operations on graphics processing units (GPUs) using the compute unified device architecture (CUDA) framework, is described. When applied to 3D list-mode TOF-OSEM image reconstruction this procedure is >300 times faster than the single-threaded reference CPU implementation [51].
Recently [56], a new TOF-PET list-mode based algorithm has been developed (DIRECT, direct image reconstruction for TOF) to speed up TOF-PET reconstruction that takes advantage of the reduced angular sampling requirement of TOF data by grouping list-mode data into a small number of azimuthal views and co-polar tilts. In terms of computing time, the total processing and reconstruction time for the DIRECT approach seems to be about 25%-30% that of list-mode 3D TOF-OSEM for comparable image quality. In addition, the total processing and reconstruction time is roughly constant with DIRECT, regardless of the sizes of the TOF and LOR resolution kernels, while the times for list-mode TOF-OSEM strongly depend on these kernel sizes. The reconstruction time per iteration for DIRECT is also independent of the number of events, while the per-iteration time for list-mode TOF-OSEM is almost linear with the number of counts [57].
Data corrections concerning randoms, attenuation and possibly also normalization for TOF-PET devices seem not to have a TOF structure. Thus, the current approach is to apply conventional non-TOF corrections to the new TOF data. However, scatter correction is clearly identified as the component that definitely has a TOF structure and requires an appropriate TOF computation [58].
Finally, it should be pointed out that TOF reconstruction is much less sensitive to errors and improper approximations. The redundant information present in TOF data naturally corrects the data inconsistencies during the reconstruction. It has been observed that TOF reconstruction reduces artifacts due to incorrect normalization, approximated scatter correction, truncated attenuation map, to name but a few [59].
Conclusion
In this chapter a complete review of the main design characteristics of TOF-PET systems based on reconfigurable logic devices has been performed. These systems have been presented from a historical perspective, and the main advantages of recovery timing information have been discussed. The goodness of the application of reconfigurable logic devices for TOF-PET systems have been described as well as digital electronics designs that would allow to accurate measure the timing information. Finally, the impact of timing information on image reconstruction algorithms has also been discussed.
As a conclusion, the implementation of the electronic hardware of PET systems on reconfigurable devices, including the TOF measurement capability, seems to offer several advantages over conventional approaches based on ASICs or CPLDs, mainly in terms of cost-effectiveness, time-to-market and re-configurability. Modern programmable logic devices present the necessary features to compete with the traditional used devices in terms of TOF calculation as technology of fabrication reaches high speeds and smaller sizes. For the time being, time resolutions in FPGAs are limited by the propagation time of the digital gates that conform the digital internal blocks of the device. But, due to the fast advances in fabrication processes, it is envisaged that these limitations will be overcome in the near future. | 11,621.4 | 2013-12-18T00:00:00.000 | [
"Physics"
] |
Blockchain-Enabled Transaction Scanning Method for Money Laundering Detection
: Currently, life cannot be imagined without the use of bank cards for purchases or money transfers; however, their use provides new opportunities for money launderers and terrorist organizations. This paper proposes a blockchain-enabled transaction scanning (BTS) method for the detection of anomalous actions. The BTS method specifies the rules for outlier detection and rapid movements of funds, which restrict anomalous actions in transactions. The specified rules determine the specific patterns of malicious activities in the transactions. Furthermore, the rules of the BTS method scan the transaction history and provide a list of entities that receive money suspiciously. Finally, the blockchain-enabled process is used to restrict money laundering. To validate the performance of the proposed BTS method, a Spring Boot application is built based on the Java programming language. Based on experimental results, the proposed BTS method automates the process of investigating transactions and restricts money laundering incidents.
Introduction
Currently, monetary transactions can be committed easily by using bank transactions. Shopping, money transfers, and ordering services are all services that are available to clients anywhere in the world [1,2]. However, these transactions open many opportunities for third parties to commit illegal activities with money without punishment [3]. One of these activities is called money laundering. Money laundering is a process whose purpose is to hide illegal sources of profit [4,5]. The need for money laundering arises in three cases. The first case is if the origin of income is criminal, e.g., illegal drug trafficking, racketeering, or corruption. Criminals receiving such income are forced to launder the money to be able to spend it freely [6,7]. The second case is if an entrepreneur or firm hides a portion of their legal income from increased taxation by underestimating revenue or overcharging, using unaccounted cash, etc. [8,9]. The third case is if the recipients of the money do not want to show their real source for security, ethical, or political reasons [10].
Anti-money laundering (AML) includes a number of measures intended to counter the legalization of proceeds of crime and curb financial flows intended for terrorist activities [11,12]. Most people who have participated in these activities [13,14] proposed methods that are based on several rules. These methods are applied to identify suspicious activities in transaction histories. During the process of reading and analyzing a significant amount of data, the human factor has an effect. Therefore, the contribution of this paper is the automation of the process by creating a Java-based application, where the rules of recognition of suspicious activity are implemented with a blockchain-enabled transaction scanning (BTS) method. AML teams suffer from false alerts that result in significant additional costs [15]. As the human resources of investigators are spent checking many false warnings, real money laundering actions continue to happen.
There are fundamental problems with existing AML systems that need to be addressed. First, they use detailed rules for each scenario, resulting in many warnings that are not actually suspicious. Second, they only check a fraction of the available data, which limits the number of signals they can use to detect money laundering. Third, they have strict data format requirements that require a painful process of data integration, which often leads to poor data quality [16,17].
The application solves these problems as follows: First, it uses only the data that the user provides. Second, it checks the user's data in full, meticulously checking each value. Third, it allows the user to initially assign keys that will be exposed during the suspicious activity search process.
The contributions of this paper are summarized as follows: • Novel rules have been formulated for anomalous transaction detection that support the fight against money laundering. • A blockchain transaction scanning method is employed that involves the rich features of data mining and the blockchain for deciding between confirmed malicious and legitimate transactions.
For defining money laundering applications, rules such as "outlier detection" [18] or "rapid mvmt funds" are used to check transactions by comparing actions with a template. If the sequence of actions matches the template, a case can be created for the beneficiary. While other applications use difficult methods such as machine learning and hardcoding the rules into the architecture, our application will use simple methods to easily add other rules and scale the system in the future.
The remainder of the paper is organized as follows: Section 2 identifies the problem. Section 3 briefly analyzes the state-of-the-art related work. Section 4 describes the system model. Section 5 discusses the problem formulation rules for transaction anomaly detection. Section 6 defines the proposed BTS methods. Section 7 provides experimental result. Section 8 presents a conclusion of the entire paper.
Problem Identification
Money laundering is considered a criminal process that allows illegally earned money to flow into the basic cash flow of society. Given that financial products and services around the world fall under AML regulations, the international community believes that money laundering is a threat to the world economy. As a result of these activities, "dirty" money becomes "clean". The point of this activity is that the origin of the illegally obtained money becomes impossible to determine, and criminals can spend it with impunity.
From a social point of view, the largest problem of money laundering is that it finances and creates favorable conditions for organized crime. Often, dirty money initially appears because of drug trafficking, tax evasion, the sale of illicit goods or trafficking, and support for terrorist acts. According to the calculations of The United Nations Office on Drugs and Crime, approximately two to five percent of global gross domestic product (GDP) (between $800 million and $2 billion) has been integrated into the global banking system through money laundering. With this in mind, it is a global problem.
This problem is so urgent that to counter it in Europe, European Union (EU) law requires companies to hire financial services to conduct checks on their clients' AML to prevent this practice. AML measures include verifying the identity of each client by a financial service or agency and monitoring their operations. As part of the fight against money laundering, the financial institution may also request additional information from the client if it discovers any suspicious activity. The financial institution can ask the client that is depositing a large sum of money into their account to provide documents confirming the origin of the funds.
AML organizations try to define money laundering by using rules. Rules are a template of a sequence of actions that may be defined as a money laundering process. If, during an investigation, a specific beneficiary receives many alerts based on the rule matching, a case can be created for this person. The only difficulty of this entire process is that it requires a significant amount of human resources and time. Customer verification, data analysis, and mathematical calculation of revenue/waste are all the responsibility of specialists. To make the overall process easier, the most optimistic option is to automate some of these processes.
Related Works
The main characteristics of existing approaches are discussed in this section. The prototype application AML2ink provides a visualization of relations between accounts in transactions to further identify suspicious activity [19]. An SQL query is responsible for data processing, while GraphViz renders and produces the visualization. By using this output, the process of investigation is simplified. However, it is impossible to determine the maximum amount of data that GraphViz can render because it produces only one image file, where all entities are presented as nodes and relations as links; light data with 100 rows can require significant time to be rendered.
Kolhatkar et al. [20] fully introduced the process of a multichannel data-driven, realtime AML system, providing detailed schemas. In this approach, some methods and algorithms for defining money laundering are described. However, the paper provides little information about automatization of the entire process and no information concerning the technical aspects. Raza and Haider [21] proposed the SARDBN tool, which identifies abnormalities in the sequence of transactions. As a basic algorithm, a dynamic Bayesian network is used, which generates output by filtering transactions with outlier detection rules. Although the data of the results are diverse, they provide no useful information.
Weber et al. [22] proposed deep learning, which works with a massive amount of graph data. Then, they used graph learning to display available information to the users. Although the application provides visual data, it is impossible to manipulate the database structure. Luo [23] introduced a framework with a data mining system for detecting suspicious transactions. In this paper, specific rules such as attribute filtering and a correlation matrix between trade accounts are provided. Although the data of the results are diverse, they are useful.
Colladon and Elisa [24] proposed a social network analysis to determine money laundering. Based on network metrics, this paper presents predictive models showing the risk profiles of clients. However, this model cannot provide complete information because not everyone uses social networks. After examining these approaches, we decided that many of the proposed approaches are essential. All these approaches are aimed at improving data collection, identifying money laundering, and visualizing the outcome. Our approach will not be inferior to them; additionally, it will combine all of these advantages.
System Model
This section addresses the main values in transaction data for further use in rules, which are originator, beneficiary, transaction committed date, and amount of money. These four modules can be defined as follows: An example of bank transaction data with columns that include these modules is shown in Figure 1. Figure 2, the transaction committed date and amount of money are marked in blue and green, respectively. The originator and beneficiary, as mentioned before, consist of multiple lines.
Originator
The account that is the source of the transaction flow. This can be described differently in the transaction history depending on the bank. For the record of the originator, the bank can use the address of the automated teller machine (ATM) from which the transaction was performed. Often, for better integration into the bank system, the address of the ATM can be divided into multiple columns. For example, the name of the country, the city, and the address can be inserted into four separate columns.
Beneficiary
The account that is the destination of the transaction flow. Its structure is the same as the originator's. It also has columns by which it is possible to identify the location where the person received the money. It is not possible to say if the beneficiary can be the main suspect in the money laundering just because he or she receives all the money. The beneficiary can also be the originator, which means that he or she acts in the long sequence of the money laundering process.
Transaction Committed Date
The date the transaction flow occurred. The outlier detection rule identifies transaction anomalies by checking how much money the originator spends in one month, by week. The rapid mvmt funds rule uses a two-week range to check whether the originator sends or the beneficiary receives a specific percent of the money. Therefore, it is important to identify when the transaction was committed to perform an investigation.
Amount of Money
The amount of money that is used in the transaction flow. This information alone is useless. Even if the amount is very low or very high, it does not define any suspicious behavior because income is an individual element. Only by using a combination of the above elements is it possible to perform an investigation and identify anomalies in account transactions. In the money laundering process, generally, a small amount of money is used to avoid additional attention.
Problem Formulation Rules for Transaction Anomaly Detection
The transaction-scanning algorithm uses two rules to define anomalies in transaction data tables. The following rules are described below: • Outlier detection; • Rapid mvmt funds.
Outlier Detection
An outlier is a data point that is significantly different from the others. Conversely, inliers are data that are within a stable distribution. It is not easy to define outliers because they are highly diverse and unpredictable; however, inliers are often stable, which can help define outliers.
Outlier detection in our case works with a monthly income scenario. Most people receive a wage at the beginning of the month. Then, every week until the end of the month, the person spends a fixed amount of money. This information can be considered an inlier and can be written as follows: where T is the data set of transactions with fixed expenses in one month; x is the data set of transactions for entire month; i is the sequence number of the transactions for the entire month; and n is the number of transactions within a one-month period.
Therefore, the main datum that can be useful to the rule is the expense of the last week, which can be recorded as follows: where T l is the data set of transactions with anomaly data in the last week of the month; i is the sequence number of the transactions for the last week of the month; and n is the number of transactions for the last week of the month.
Data can be defined as anomalous only if the expense of the last week is extremely high compared to the expenses of the other weeks of the month. This type of activity can be considered suspicious because most people try to save money in the last week of the month to sustain them until the next wage payment. This outlier can be defined by using the density ratio given by where p (T) is the inlier density, and p(T l ) is the test sample density. The density ratio is close to 1 when W is an inlier, and it is close to 0 when γ is an outlier.
Theorem 1. The highest expenses occur at the beginning of the month.
Proof. As a rule, at the beginning of the month, people receive their wages. Before spending this money, first, they identify their necessary costs. These required costs can be divided into the following three categories: • Investment or savings; • Mandatory payments; • Variable costs.
Investment or Savings
A certain percentage is immediately separated from the earned amount and set aside for a predetermined overall goal. Depending on the level of wealth, this percentage may vary. In most cases, people save 10% of their wages [25].
Mandatory Payments
After the savings funds are removed, payments that cannot be avoided will follow. First, people repay money borrowed from friends or payments on bank loans. Then, the funds required to pay for housing and communal services are calculated. Finally, the necessary costs for public transport, payment for kindergarten, medicines, gasoline for a car, etc., are deducted.
Variable Costs
This includes all other family expenses, such as food, shoes and clothing, household expenses, spouses' personal expenses, entertainment, holidays, birthdays, vacations, and unexpected expenses.
The formula of the weekly expense can be written as where n identifies the week number (1 ≤ n ≤ 4), and E inv , E mand , and E costs are expenses for investment, mandatory, and variable costs, respectively. As mentioned before, only in the first week, after receiving wages, do people primarily pay investment and mandatory payments because they are necessary. Thus, for the next three weeks they are not considered, as follows: Corollary 1. As a result, people spend more money in the beginning of the month because of expenses for needs and budget allocation. The outlier detection process is described in Figure 3. The outlier detection process is explained in Algorithm 1. Algorithm 1 explains the detecting anomaly transaction process by using outlier detection. At the beginning of the algorithm, the input and output are shown, respectively. In step 1, the initialization process of given variables is explained. Steps 2 through 6 check the size of transactions for odd/even and according to their set median transaction expense. Steps 7 through 11 check the median at the expense of the last week. If the expense exceeds the median, then the entity's state is set to suspicious; otherwise, it is not.
Rapid Mvmt Funds
Most people spend their money by buying something or lending to someone. The rapid mvmt funds rule, for the most part, is related to the second scenario. This rule is based on transferring money from one account to another. To launder money, scammers divide it and send it to multiple accounts. These accounts can also repeat this action. As a result, many accounts will have a portion of the money; then, a reverse process will begin in which accounts will collect the money by sending it to one account. In the end, the last beneficiary will receive the laundered money.
The rapid mvmt funds process uses two weeks of income and outflow of a specific entity. The main rule of this method is that if income is between 80 and 120% of outflow during two weeks' transactions, then the entity can be suspected to be an actor of money laundering. This can be written as follows: where n t is the number of transactions over a two-week period, and T o and T b are transactions where the entity acted as an originator and beneficiary, respectively. Total inequality can be written as follows: Theorem 2. Defining money laundering by inspecting remittance transactions is difficult.
Proof. The money transfer industry is currently growing rapidly. In 2018, more than $689 billion in money transfer transactions were made. Hence, we can conclude that this is a good platform for money laundering [26].
Key risks associated with remittances are the following: • Digital services: Internet money transfer services are not only more difficult for authorities to control but also allow criminals to bypass identity verification processes. The probability P md definition of money laundering can be calculated as where N t is the number of transactions, and N a is the number of accounts. The number of accounts can be converted as where m is the number of unique entities, C are the columns in the transaction table that are defined as originator or beneficiary, and ρ is the number of columns of C. Thus, the number of transactions can be converted as where r is the number of records in the transaction table, and d is the column "id" in the table that must be unique for every row. Thus, the entire probability of detecting the money laundering is written as As observed, N a is inversely proportional to probability, which means that a large number of accounts cause a small probability of defining money laundering. In Algorithm 2, the rapid mvmt funds algorithm for detecting suspicious transaction activity is explained. The input and output are shown at the beginning of the algorithm.
Step 1 explains the initialization process of the given variables. Steps 2 through 4 check if the amount of sent money ranges between approximately 80 and 120% of the amount of received money; if true, then the entity's state is set to suspicious. Steps 5 and 6 check if the amount of received money ranges between approximately 80 and 120% of the amount of sent money; if true, then the entity's state is set to suspicious. Steps 7 and 9 set the entity's suspicious state to false because no anomaly is detected.
Hypothesis 1. Transaction scanning processes require little processing time because of filtered columns.
Proof. The abstract useful information that can be used in TS methods can be written as follows: where T O e is the entity's transactions where he or she acted as originator, and T B e is the entity's transactions where he or she acted as beneficiary.
Set S False 9: end if
In the bank database, there will be a table where all transaction histories are stored. The structure of this table contains many columns because a bank system usually uses an NoSQL structured database, which means that data have no relations; thus, it is not necessary to divide data into a table. The rows of a table T r can be recorded as where R is the total number of rows, and C r is the column of the row. The transaction scanning method does not require many columns C r . It primarily uses the following process for columns: where D is the transaction committed date, A is the amount of money, O is the account of the originator, and B is the account of the beneficiary. Therefore, the rows of the table can be rewritten as follows: As a result, removing data about originators and beneficiaries in transactions where entities acted as originators and beneficiaries, respectively, the above equation can be described as Corollary 3. By reducing the number of rows in the table, we obtain as: Thus, it is possible to increase the performance of transaction scanning.
Proposed BTS Method for Money Laundering Detection
Money laundering poses a severe threat to financial bodies that leads to national impairment. Thus, detecting doubtful transactions concerning money laundering is of paramount significance. The BTS method has a capability to scan each transaction before processing. The BTS method is depicted in Figure 5, consisting of the following steps.
Content Pre-Processing
The leading role of content pre-processing is to excerpt datasets from different locations. Subsequently, it merges different datasets into an integrated database, and then extraction, transformation, and loading (ETL) processes are applied as employed in [27]. This process experiences content quality issues because the financial institutions possess a different set of quality issues at the content level. Most of the problems are associated with the customer information in our case. These problems include: • Null or dummy values: This happens in most of the data fields of the databases except the identity of the user, the user type (individual, joint, or company), and fund name. • Misspelling: Usually phonetic and typo errors. Additionally, banking datasets are mostly organized in a distributed fashion to maintain security and flexibility. The heterogeneity of the contents can pose a threat to the content quality, particularly when an integrating process is required. Therefore, the basic content quality issues can be addressed using pre-processing. Numerous datasets possess different variations between features ranging from minimum to maximum: for example, 0.001 and 10,000. If such variations occur, then there is need for scaling down to make the attribute acceptable and appropriate. This process supports various classifiers to be compatible for content processing. The scaling process for determining the new feature F n is given by where F o : the original features of the contents; V M ax: maximum value of the features; and V M in: minimum value of the features. It is highly important to determine all of the features from all datasets. Thus, the scaling process of identifying the new features from the entire contents is given by where βV Max : new maximum features, and βV Min : new minimum features.
Once new feature values are obtained, then there is need for standardization of the feature value, given by σ Substitute M v and σ, and we obtain where V ci : data content value; n: number of values; M v : mean value; σ: content standard value; D si : dataset size; a i : each value from the data content; and µ: data content mean value.
Content Mining and Blockchain-Enabled Features
This process collects and stores the information obtained from AML experts and uses case studies and past money laundering cases. Based on the collected and stored information, the rules for anomaly detection are generated to identify the malicious activities of the outlier and rapid mvmt funds. This process is responsible for controlling the entire data mining process by employing the rules to obtain better performance. It matches the data obtained from the warehouse with the associated rules. The rules provide three types of classifications (warning, probable, and suspicious) for each transaction to money laundering depicted in Figure 5. When the customer makes the transaction, it goes to the "Rule-matching process", which is part of the content-mining process. The Rule-matching process consists of several rules, which are matched against each transaction. Each transaction is initially marked as a "warning transaction" and sent for further investigation. Based on the investigation, if the transaction matches more than 60% of the rules, then the transaction is considered as a "probable transaction"; if the transaction matches less than 60% of the given rules, then the transaction is determined to be a safe transaction. The transactions which match more than 60% of the given rules are sent for final investigation. If the transaction matches with the rules ≥95, then it is considered to be a "suspicious transaction". Finally, the report of the suspicious transaction is forwarded to the blockchain-enabled feature server.
The blockchain-enabled features are stored on the blockchain-enabled server that is responsible for blocking the suspicious transactions and releasing the legitimate transactions. When the blockchain-enabled server receives a message from the content-mining process component regarding a suspicious transaction, then it declares it as a "confirmed malicious". On the other hand, if the transaction is received as "non-suspicious", then the blockchain-enabled server declares it as a "legitimate transaction". Finally, the legitimate transaction is allowed for further processing. The Rule-matching process is given in Algorithm 3.
Algorithm 3: Rule-Matching Process Using Content Mining
Input: t in Output: S t ; P t ; Su t out 1: Initialization: S t : Safe transaction; t: Transaction; P t : Probable transaction; Su t : Suspicious transaction; W t : Warning transaction; g : Rule-matching process 2: Set g 3: Set W t ∼ = t 4: if g ≥ 60 then The probability model Pr m is generated, which reports the confirmed malicious state. Let us assume that if the transaction t i is reported as "confirmed malicious" to the authorities, it has a value of 1; otherwise, its value is 0. The variables associated with the transaction i are denoted as Ψ i , which can be written as Minimizing the money laundering M m process is derived by The content-mining process component takes the features from the iterative local search and random methods explained in [28] that help to develop the final predictive model FPr m for the confirmed malicious transaction: where ∂∀(t i ): the matching rules which do not match with the suspicious transaction. From the above equation, we deduce that ≥95 matching rules match with the suspicious transaction, and the remaining rules do not match with the transaction. Based on the result of the predictive model, the transaction is either considered to be confirmed malicious or a legitimate transaction determined by the blockchain-enabled server.
Experimental Results
To validate the quality of the BTS method, the complete model is written using the Spring Boot application based on the Java programming language. To store and retrieve data, the PostgreSQL relational database version 9.6.14 was used. Application Netdata is used for the monitoring system. The laptop configuration on which the application is executed is described in Table 1. Mock transaction backup data provided by HSBC are used as input data. The dataset contains 21,602 rows and every row has 37 columns. The columns that are defined as originator and beneficiary are "orig_line1", "orig_line2", "orig_line3", and "orig_line4" and "bene_line1", "bene_line2", "bene_line3", and "bene_line4", respectively. The column "amount" contains the amount of money in the transaction, and "TXN DATE" describes the date when the transaction was committed. The proposed BTS method is compared with the state-of-the-art methods-extracting and exploring blockchain data from eosio (Xblockeos) [29], convolutional neural network biometric cryptosystem (CNNBC) [30], detecting cryptocurrency transactions (DCT) [31], and detection of illicit accounts (DIA) [32]-to ensure the validity of the proposed algorithm. Based on the testing, interesting results are determined for the following: • Outlier detection; • Rapid mvmt funds.
Outlier Detection
For the outlier detection method, transactions with role originators and committed dates between 1 March 2021 and 30 April 2021 are used. All these data are retrieved by using the SQL script. Further calculations are performed in the Java language, and the results are stored in the database. The outlier detection scanning process takes 138,606 ms, and as a result, 581 rows are generated. The method requires 10% of processor usage and 2 GB of RAM for processing. By using the provided results of the transaction money-laundering state, the accuracy of outlier detection is determined. As shown in Figure 6a, the method provides high accuracy because it uses Algorithms 1-3, which can define only specific actions in transactions; the chances that innocent entities could unintentionally trigger this rule are minimal. Additionally, it should be noted that the accuracy of the method is more stable than contending methods (Xblock-eos, CNNBC, DCT, and DIA), as depicted in Figure 6a. While our accuracy plateaus at approximately 98.7%, in the contending methods, the accuracy begins at 60% and grows slowly. The contending methods Xblockeos, CNNBC, DCT, and DIA produce outlier accuracy of 83.3-92.2%. When the number of transactions increases, this greatly affects the accuracy of the contending methods depicted in Figure 6b, while the proposed BTS method remains stable and produces 99.4% outlier accuracy. Thus, it is proven that even if the number of transactions increases, this does not affect the accuracy of the proposed BTS method. On the other hand, the contending methods are not appropriate to deal with the number of increased transactions.
Rapid Mvmt Funds
For rapid mvmt funds, the same date span as the outlier detection method is used, but SQL scripts are used to retrieve transactions with originator and beneficiary roles. The rapid mvmt funds scanning process took 139,701 ms. It is required to be slightly longer than the outlier detection process because in this method, entities with originator and beneficiary roles are checked. As a result, 143 rows are generated. This process uses 2 GB of RAM and 20% of processor usage. The accuracy of rapid mvmt funds is shown in Figure 7a,b. Based on the result, it is observed that the proposed BTS and contending methods produce a lower accuracy than outlier detection. This occurs because of frequent money transfers. People often give and receive money, especially when lending and borrowing. This creates anomalies that are followed by the rapid mvmt funds rule, which triggers the algorithm. Therefore, multiple false alarms are generated. In summary, it can be observed that the application generated sufficient data. The comparison of the data with the output of the bank transaction history is depicted in Figures 6b and 7a.
According to Figure 7a, the contending methods started with a 60-69% accuracy; with an increase in the transactions, the accuracy increased, reaching a maximum of 65.3-77.7%. All these processes are automated and executed within a monolith application, which provides better performance than the DCT method, where a real-time AML system is described that has no automation and a poor technical description. Although Xblock-eos and CNNBC ignore the process of creating the information, they provide visual data that are better than our method's provided raw text. However, it seems that Weber hardcoded the process of retrieving transaction tables, which means that these methods are flexible, and for other ATMs, transaction table application editing is needed. DIA attempts to use a social network to predict the risks of money laundering; however, in reality, not every ATM stores information about originators or beneficiaries, which makes this approach useless, while our proposed BTS method uses transaction-specific columns to define originators, beneficiaries, and money laundering risks. In Figure 7b, a comparison of our proposed method is shown with the contending methods. Data are generated for every method for comparison. Scaled with our result, CCNBC provides a good example of data generation, using SQL for processing; however, it shows the worst result because GraphViz takes a large amount of computer resources for data visualization. Our proposed BTS method provides better performance than other methods, and at the end of the process, the accuracy of the proposed BTS method is approximately 19.8 to 29.1% higher than contending methods. The main reason for obtaining better performance is the use of blockchain technology. When the number of transactions increases, this does not affect the BTS because each transaction is checked by the blockchain technology server before allowing it to proceed. The blockchain technology server only permits legitimate transactions to be processed, and malicious transactions are blocked. On the other hand, the contending approaches do not distinguish between malicious and legitimate transactions. As a result, the performance of the contending approach is decreased.
Based on the results, the performance of the BTS method and machine performance consumption are demonstrated. The overall results show that during the BTS process, the application constantly uses 2 GB of RAM, but the CPU is loaded differently according to the specific method. Outlier detection uses only transactions of an entity where he or she acts as an originator, whereas the rapid mvmt method needs both originator and beneficiary transactions to look for an anomaly. The processing requirements of the methods are as follows: outlier detection uses 10%, while rapid mvmt sometimes consumes an additional 5 to 10% more than outlier detection. Outlier detection takes 138,606 ms to process 21,602 transactions and generates 581 rows, which means that approximately every 37th transaction is suspected of involving money laundering. Additionally, the rapid mvmt method is processed five times more quickly than outlier detection because of the easy implementation of the algorithm. Table 2 shows a summary of the experimental results. Based on the above, it is possible to say that the application can process a large amount of data and provide results in a minimum amount of time. Moreover, the provided results are more useful than the solutions proposed in [9][10][11]. The application records only the record's generated date, the rule of a detected anomaly, and the entity that is suspected. However, this paper does not include ordinary user visualization, which is provided in [7][8][9][10] and can be the first priority. However, this can be easily fixed by including the frontend side of the application. We have elaborated the perils of automation and discussed the flaws of the BTS approach when it fails and misses the ML. The proposed BTS method inherits the shortcomings of blockchain technology because the blockchain fails to accomplish the targets due to energy issues. The miners of blockchain technology are provoked to deal with complex transactions. As a result, additional energy consumption occurs that does not make BTS ideal for the real world. The ledger is streamlined with the scanning of new transactions, which results in the energy consumption by the miners. This issue can be addressed by using consensus algorithms and permissioned networks. In the future, we will try to use the consensus algorithm and permissioned networks to avoid the failure of BTS.
Conclusions
This paper introduces a blockchain-enabled transaction scanning method that uses outlier detection and rapid mvmt funds rules to detect anomalies in bank transaction history. To validate the accuracy of the approach, the methods of the algorithm are executed by using a mock HSBC transaction history. Outlier detection works with one month's income and checks if the income of the last week is suspicious. Rapid mvmt funds works with two weeks' income and outflow and checks if income is between 80 and 120% of the outflow. Based on the simulation results, it is discovered that the outlier detection method works more accurately than rapid mvmt funds, and the algorithm in this paper does not require a super machine to execute machine learning (ML) to define the method. Outlier detection uses an algorithm to define anomaly actions that are not easy to unintentionally commit, while rapid mvmt selects transaction actions that may be innocent because of simple money transfers between accounts. Therefore, the accuracy of the second rule is low. However, a combination of these two rules provides good results that investigators could use for further cases. The proposed BTS method is compared with other methods: Xblock-eos, CNNBC, DCT, and DIA. Based on the testing process, we observed that the proposed BTS method produces better results than contending methods from the accuracy perspective. The proposed BTS method could be used by financial firms, governments, non-government organizations, and banking sectors to fight against money laundering.
Author Contributions: A.O. and A.R., conceptualization, writing, idea proposal, methodology, and results; A.T., data curation, software development, and preparation; M.A. and B.A., conceptualization, draft preparation, and visualization; C.Z. review and editing. All authors have read and agreed to this version of the manuscript.
Funding: This research received no external funding.
Conflicts of Interest:
The authors declare no conflict of interest. | 8,544.2 | 2021-06-07T00:00:00.000 | [
"Computer Science"
] |
Randomised clinical trial comparing concomitant and hybrid therapy for eradication of Helicobacter pylori infection.
BACKGROUND
The primary objective of this study was to compare concomitant and hybrid therapy in the first line eradication treatment of Helicobacter pylori infection in Split-Dalmatia County, Croatia, in which clarithromycin resistance is above 20%. The secondary objective of the study was to determine and compare compliance and adverse events rate between these therapeutic protocols.
MATERIALS AND METHODS
In an open-label, randomised clinical trial 140 patients total with H. pylori infection were randomly assigned to either concomitant (esomeprazole 40 mg, amoxicillin 1 g, metronidazole 500 mg, clarithromycin 500 mg, twice daily for 14 days) or hybrid (esomeprazole 40 mg and amoxicillin 1 g twice daily during 14 days with adding metronidazole 500 mg and clarithromycin 500 mg twice daily, in the last 7 days,) treatment group.
RESULTS
Eradication rates for concomitant group and hybrid therapy group were 84.1% (58/69) and 83.1% (59/71) respectively in the intention-to-treat analysis and 96.7% (58/60) and 95.2% (59/62) in per-protocol analysis. There was no significant difference between the groups (ITT analysis: P = 0.878; PP analysis: P = 0.675). Adverse events were more frequent in the concomitant group (33.3% vs 18.3%, P = 0.043). There was no difference among groups regarding compliance rate.
CONCLUSION
Hybrid therapy has similar eradication rate as concomitant therapy, with lower adverse events rate. In the era of increasing antibiotic resistance, eradication regime with less antibiotic's usage, as hybrid therapy, should be reasonable first line treatment choice for H. pylori infection. Clinical Trials, gov: NCT03572777.
Introduction
Although introduced as a first-line carcinogen years ago, H. pylori is still a clinical challenge, due its association with gastritis, gastric and duodenal ulcer, MALT (mucosa-associated lymphoid tissue) and gastric cancer [1][2][3][4].
The 2015 Kyoto Consensus defined H. pylori gastritis as an infectious disease, requiring treatment regardless of symptomatology [5]. In this regard, the choice of appropriate eradication therapy is important, as eradication can prevent above mentioned complications [6,7]. However, an increase in H. pylori resistance to antibiotics has been reported worldwide, with a concurrent decline in the success of eradication therapy, necessitating the need for a modification of the therapeutic approach [8,9]. This is further supported by the fact that traditional triple therapy is no longer considered the therapy of choice in areas of high resistance to clarithromycin (>15%) [3]. Therefore, a model of quadruple therapy was proposed by the H. pylori Working Group (Maastricht V): sequential, concomitant, hybrid, and quadruple bismuth-based therapies. Hybrid therapy, proposed in 2011 by Hsu, is a combination of sequential and concomitant therapy [10]. Few clinical studies so far showed the same or even higher eradication rate of hybrid therapy compared to sequential and concomitant regimes [11,12].
Considering the fact that there is no defined optimal eradication therapy for H. pylori infection that would be equally effective in all regions, it is advised to first determine primary resistance to commonly used antibiotics in eradication of H. pylori infection in each region [3,13]. To our knowledge, the efficacy of hybrid therapy in the treatment of H. pylori in Croatia has not been investigated to date. Given that the choice of eradication therapy is primarily based on local antibiotic resistance, we consider it is essential to examine the efficacy of hybrid and concomitant therapy in H. pylori eradication in the Split-Dalmatia area, knowing that the clarithromycin resistance is above 20% in our region [14]. In this study we compared concomitant and hybrid therapy for H. pylori infections, in terms of efficacy, compliance and adverse events rate.
Design overview
We conducted a prospective, open-label, randomised, controlled trial at the University Hospital of Split in Croatia. Between July 2018 and August 2019, all patients who presented with dyspeptic symptoms or had endoscopic finding (peptic ulcer, gastritis) have been tested for H. pylori infection. Patients with H. pylori infection were initially recruited in study and were followed up until October 2019. H. pylori infection was proven with one of the following methods: positive stool antigen assay (based on monoclonal antibody, ELISA); positive rapid urease test; H. pylori evidence in histologic specimen; positive urea breath test, all in accordance with the recent Maastricht V guidelines. Exclusion criteria were: age less than 18 years; previously unsuccessful application of empirical H. pylori eradication therapy; malignant disease of the stomach or any other site; taking proton pump inhibitors (PPI), H2 antagonists, bismuths or antibiotics (amoxicillin, metronidazole, clarithromycin) during the last month; associated comorbidity (renal insufficiency, mental illness); drug allergies: proton pumps inhibitors or antibiotics (amoxicillin, metronidazole, clarithromycin); pregnancy and lactation; refusal to participate in the survey.
All participants provided written informed consent. The study was performed in accordance with the principles of good clinical practice from the Declaration of Helsinki, approved by the ethic committees of the University Hospital of Split (as from April 2018, approval number 500-03/18-01/13) and University of Split School of Medicine (as from April 2018; approval number: 003-08/18-03/0001) and registered as clinical trial (Clinical Trials, gov: NCT03572777). The authors confirm that all ongoing and related trials for this drug/intervention are registered.
Therapy
The eligible participants were randomly assigned, using computer generating sequence in two groups. First group was given concomitant therapy: esomeprazole 40 mg, amoxicillin 1 g, clarithromycin 500 mg and metronidazole 500 mg, which were all administered orally twice daily for a total of 14 days. The second group was given hybrid therapy: esomeprazole 40 mg and amoxicillin 1 g, which were administered orally twice daily for a total of 14 days and clarithromycin 500 mg and metronidazole 500 mg, which were administered orally twice daily for the last seven days. Written instructions on the dose and timing of treatment were provided to each subject individually.
One month after the end of therapy, all subjects were tested for H. pylori antigen in the stool using a monoclonal antibody (ELISA) test to evaluate eradication success. Eradication failure was defined as a positive result of this test. During the follow-up, compliance and adverse events were evaluated. The compliance was defined by the amount of medications taken (compliance was considered good if � 80% of therapy was taken), based on the remaining pill count and patient's self-reported questionnaire that included information regarding compliance and adverse events.
The adverse events were divided into groups according to the degree of tolerance: no adverse events; mild (without limitation in daily activities); moderate (partly limited daily activities); and severe (completely limited daily activities). Patients were instructed to report immediately in case of any severe adverse events.
The primary outcome of the study was to compare H. pylori eradication rates in patients receiving concomitant and hybrid therapy. Secondary outcomes were assessment of compliance and adverse events in the both groups.
Statistical analysis
The total number of participants was calculated based on the effect size parameter (w = 0.5), statistical significance (P = 0.01), and power of 0.90. Based on the input parameters, a sample size of 60 subjects per group was required. Sample size calculations was made using power analysis statistical package in the R interface (ver. 3.4.3, 2017).
Statistical software SPSS ver. 25 for Windows (IBM Corp, Armonk, NY, USA) was used for statistical data analysis. Data were expressed as mean ± standard deviation (SD) or as whole numbers and percentages with 95% confidence intervals (CIs) calculation for categorical variables. The major outcomes were analyzed by the Chi-squared test with Yates' correction or Fisher's exact test for categorical data and the Student's t-test for continuous variables. Binomial logistic regression analysis, with age and gender variables as covariates, was used to determine adjusted odds ratios (aOR) for adverse events of the hybrid therapy group with concomitant therapy group set as a reference group. Analysis was performed by intention-totreat (ITT) and per protocol (PP). The ITT population included all randomised patients who received at least one dose of study drugs. The PP analysis excluded the patients with unknown H. pylori status following therapy and patients with poor compliance to the therapy. All assumptions for the use of statistical tests have been fulfilled. The statistical significance was defined as P < 0.05.
Study group characteristics
Among 159 patients infected with H. pylori 19 were excluded due to screening failure. A total of 140 patients were randomly assigned to either concomitant therapy (n = 69) or hybrid therapy (n = 71) group. Table 1 shows baseline characteristics of the included patients. There were no statistically significant differences between the two groups in terms of age, sex, history of smoking, alcohol use, or endoscopic finding. Six patients total in the concomitant group and six patients in hybrid group were lost to follow-up. In each group, three patients consumed less than 80% of prescribed medications. A flowchart of the recruitment of study participants is shown in the There were no significant differences in the eradication rate between the two groups, according to the ITT and PP analyses ( Table 2).
Compliance and adverse events
There was no significant difference in the compliance rate between the two groups. Nine patients in both concomitant and hybrid group had compliance rate below 80%. Adverse events occurred significantly higher in concomitant than in hybrid group (33.3% vs 18.3%, P = 0.043). Furthermore, hybrid group had significantly lower adjusted odds of adverse events (aOR 0.45, 95%CI 0.21-0.96, P = 0.044) as shown in Fig 2. Nausea was the most frequent adverse event in both groups (20.3% and 11.3% respectively), as it is shown in the Table 3. According to the degree of severity, most of the adverse events were mild in both groups (19/69 in concomitant and 13/71 in hybrid group). However, four patients in the concomitant therapy group experienced moderate adverse events, but without need for special intervention or hospitalization (Table 4).
Discussion
The primary objective of this study was to determine the optimal therapeutic option in the treatment of H. pylori infection since it is not clearly defined in Split-Dalmatia region, Croatia. According to previously established data, clarithromycin resistance in Split-Dalmatia County is above 20%, with relatively low metronidazole resistance rate of 10.2% [14]. Therefore, standard triple therapy is not recommended as a first line treatment [3]. As stated in Maastricht V guidelines, in areas with high (>15%) clarithromycin resistance, bismuth quadruple or nonbismuth quadruple therapies, primarily concomitant, are recommended [3]. Concomitant therapy is now often regarded as the first line eradication treatment, due its high eradication rate, exceeding 90% in some areas [15][16][17]. However, standard duration of concomitant therapy is from 10 to 14 days, that includes PPI and three antibiotics-amoxicillin, metronidazole, clarithromycin, which are used for total period of treatment. This can lead to increase of antibiotic resistance and abuse of antibiotic use. Furthermore, as suggested by Maastricht and Toronto guidelines, concomitant therapy is duration dependent, with preferable 14-day duration in the first attempt, especially in areas with high clarithromycin resistance [4,13]. On the other hand, results of Kapizioni et al. study suggested that 10-day concomitant therapy could replace 14-day therapy with equal result [18]. Secondly, significant limitation of concomitant therapy can be lower efficacy in areas with high dual resistance or high metronidazole resistance, when bismuth-based therapy is recommended [19,20]. Results of one meta-analysis demonstrated that eradication rate of concomitant therapy was only 33.3-66.7% for strains with dual clarithromycin-metronidazole resistance [21].
To overcome these problems, other quadruple therapies, such as sequential and hybrid were proposed. Hybrid therapy was introduced as a novel non-bismuth quadruple therapy in 2011, with excellent first results: eradication rate was 99.1% by PP and 97.4% by ITT analysis [10]. So far, the effectiveness of hybrid therapy since its introduction has been investigated in few studies, and there are even less studies with comparison of concomitant and hybrid therapy [22][23][24][25][26][27].
Meanwhile, sequential therapy, first introduced as an alternative to triple therapy, was a common first line treatment in Croatia [27]. Soon, few studies showed that hybrid can be more effective than sequential therapy [12,28]. However, usage of sequential therapy showed limitations. In areas with high clarithromycin resistance sequential therapy can be less effective than concomitant therapy [29]. Efficacy of sequential therapy drops down significantly when H. pylori strains were clarithromycin-resistant, even down to 70%, as presented by Liou et al. [30]. There is also evidence that sequential therapy is affected by metronidazole resistance [3].
On the other hand, hybrid therapy showed better eradication rate than sequential therapy in areas with high antibiotic resistance, as showed in Sardarian et al. study [28]. Thus, in our region, we have chosen hybrid therapy as an alternative option to concomitant therapy.
In the current study we have demonstrated similarly high eradication rates for concomitant and hybrid therapy and these findings are consistent with results of few other studies in areas with high clarithromycin resistance [24-26, 28, 31, 32].
Given the fact that the therapy is time-dependent, we have chosen 14 days duration for both therapy groups, similarly to other authors [29,32]. This is in contrary with previous studies that used 10-day hybrid, or 10-day concomitant therapy [24,26,33]. However, one prospective Greek study showed high eradication rate using 14-day hybrid therapy, in a region with similar antibiotic resistance rate which was the main reason for us to use 14-day therapy [25,34]. Hybrid therapy includes 7 days less taking metronidazole and clarithromycin, but with equal eradication success. It seems that hybrid therapy would be more reasonable approach, having in mind increasing antibiotic abuse and antibiotic resistance. This is strengthened by the fact that eradication of H. pylori can be associated with changes in gut microbial ecology and structure [35,36]. In addition, hybrid therapy is more cost-effective than the concomitant therapy. When we compare costs of both therapies, hybrid first line treatment is less expensive, primarily because of 7-day shorter antibiotic (metronidazole and clarithromycin) usage. The secondary objectives of the study were to determine the tolerability of these therapeutic protocols and to evaluate patients' quality of life during treatment based on adverse events occurrence.
In all therapeutic regimes, compliance rate could be another potential factor for eventual failure of eradication treatment. In our study, in both groups compliance rate was more than satisfactory, with no significant difference, although we expected better compliance in hybrid group, regarding a smaller number of antibiotics, as some studies showed [24,31].
As we expected, less antibiotic usage resulted in less adverse events. We demonstrated significantly higher adverse events rate in concomitant than in hybrid group, with nausea being the most common adverse event in both groups. There were no differences in specific adverse events among groups. Adverse events were mild according to the degree of severity, and four patients who had moderate events were in concomitant group. Furthermore, hybrid group had significantly lower adjusted odds of adverse events.
Similar finding was in one study, with less adverse events in hybrid group and nausea being dominant complaint [24]. Few other studies proved there was no difference regarding adverse events [22,26,31,32].
Although, this is the first randomised clinical trial comparing hybrid and concomitant therapy in Croatia, our study has few limitations. Main limitation is lack of antibiotic resistance data for included patients, however, current guidelines recommend antibiotic susceptibility test after second line treatment failure [3]. Still, tailored therapy in the era of personalized medicine should be regarded as a potential future approach in clinical practice. Secondly, this study was designed as open-label one, which may increase potential risk of bias. Although majority of similar H. pylori clinical trials are open-label, blind-design studies are necessary for avoiding potential bias [24,25,27,35]. Finally, this study was not designed as non-inferiority one, that may affect its conclusiveness. Thus, a non-inferiority trial should be conduct for further comparison of these two protocols, with greater sample size.
We used 14-day concomitant and 14-day hybrid regime. However, further studies need to be performed to investigate the potential benefit of 10-day hybrid regime in both eradication efficacy and compliance. In that manner, potential risk of increase in antibiotic resistance would be even more avoided. Regarding the fact that our region has clarithromycin resistance rate above 20%, results of this study may be applicable with regions with similar problem.
In conclusion, both concomitant and hybrid therapy achieved very high but similar eradication rates. The scientific contribution of this clinical research is clarifying efficacy of therapeutic protocols (ITT> 90%) in the treatment of H. pylori infection in patients in Split-Dalmatia County. Regarding the lesser number of antibiotics, less adverse events and similar eradication rate, we suggest that hybrid therapy should be the first line treatment option in areas with high clarithromycin resistance. Further studies are needed to investigate potential usage of 10-day hybrid therapy compared with 14-day regime. | 3,857 | 2020-12-30T00:00:00.000 | [
"Medicine",
"Biology"
] |
A Novel Technique for Secure Data Cryptosystem Based on Chaotic Key Image Generation
: The advancements in Information and Communication Technology (ICT), within the previous decades, has significantly changed people’s transmit or store their information over the Internet or networks. So, one of the main challenges is to keep these information safe against attacks. Many researchers and institutions realized the importance and benefits of cryptography in achieving the efficiency and effectiveness of various aspects of secure communication.This work adopts a novel technique for secure data cryptosystem based on chaos theory. The proposed algorithm generate 2-Dimensional key matrix having the same dimensions of the original image that includes random numbers obtained from the 1-Dimensional logistic chaotic map for given control parameters, which is then processed by converting the fractional parts of them through a function into a set of non-repeating numbers that leads to a vast number of unpredicted probabilities (the factorial of rows times columns). Double layers of rows and columns permutation are made to the values of numbers for a specified number of stages. Then, XOR is performed between the key matrix and the original image, which represent an active resolve for data encryption for any type of files (text, image, audio, video, … etc). The results proved that the proposed encryption technique is very promising when tested on more than 500 image samples according to security measurements where the histograms of cipher images are very flatten compared with that for original images, while the averages of Mean Square Error is very high (10115.4) and Peak Signal to Noise Ratio is very low (8.17), besides Correlation near zero and Entropy close to 8 (7.9975).
Introduction:
Cryptography provides secure communication among individuals, governmental organization, corporations through the utilizing of codes and it can be used effectively to nullify the value of cyber threats in the presence of malicious third-parties, phishing, interception of information being stored in database or sent via networks. Thus, only those intended parties for whom the information was sent can interpret and analyze it. Certainly, this information relates to multiple areas, such as: financial accounts, civil or military aspects, scientific inventions, medical, educational, business applications, live-broadcast, or any sensitive data that someone wants to keep private … etc [1][2][3][4] .
Many encryption techniques were developed in the past decades to be adopted as a national standard for secure communication, like: DES, RSA, 3-DES, Two-fish, IDEA, AES. Yet, they are not suitable options for real-time multimedia cryptosystem of large size [4][5][6] . On the other hand, cryptanalysts are trying to develop their expertise and capabilities to find shortcuts to break the security of these cryptosystems. The AES is considered one of the foremost broadly utilized encryption methods today by the public whether they are individuals, institutions, companies. However, it suffers from longtime calculations in encryption/decryption processes (i.e., low-level efficiency with large multimedia). Therefore, there is a need to develop a fast and strong cryptographic cryptosystems with pseudo-random behavior. Dynamic systems theory and chaos demonstrated promising area within the field of cryptography for research and applications [5][6][7] . Chaos-based schemes are dominant techniques due to high randomization, simple cryptographic processes, and its sensitivity to initial control parameters, such that a minor deviation in its input parameters can lead to a large change in the output range values. This will guarantee that the plain-data and/or secret key can't be easily reconstructed 5, 8, 9 . Image encryption techniques acquire the attention of scientists and researchers in order to meet the growing demand for real-time information security. The following survey includes various efforts related to our paper objectives for secure data cryptosystems: In 1 , a scrambled plaintextrelated image cryptosystem based on improved Josephus traversing and pixel bits permutation is presented to enhance the abilities of resistance against different types of attacks. A combination of image segmentation of chaotic system, bitwise XOR and crossover operations are made to attain higher effects of randomness. In 5 , the authors worked on the scrambling of text according to a novel 2D chaotic function that exhibits a uniform bifurcation upon large parameters range. Genetic algorithm is used to optimize the parameters of the map and hence improve the security of textual data. In 9 an encryption algorithm is proposed to improve the security of a cryptosystem by solving the key management problem through a combination of an elliptic curve and chaotic system. In 10 , a novel architecture of image encryption algorithm' permutation and diffusion based on DNA rule matrix operations and 2D-LASM chaotic systems is proposed using 256-bit hash value. Security analyses are performed on multiple samples which prove the efficiency of the scheme and it robustness against known attacks. In 11 , a perturbed highdimensional chaotic system is investigated for image encryption according to Devaney conjugate definition to enlarge cycle and manage secure problems. The confusion/diffusion structure is designed based on separated Cat map. The test results prove high security of the technique with fast algorithm. In 12 , a framework and an algorithm are introduced based on dual Arnold and logistic chaotic maps for lightweight image encryption scheme. Miscellaneous groups of images have analyzed and produce very good results according to security measures. This paper aims to build a secure cryptosystem for the transfer of data among entities, whether individuals, companies, or state institutions, and for various military, medical, banking or other fields. A non-standard technique was adopted that aims to generate a two-dimensional chaotic key matrix image that includes enormous number of probabilities based on 1-D range chaos values for given control parameters (selected randomly by the system in order to ensure higher level of information security), which is considered the main target of any cryptosystem to make the mission of any cryptanalyst impossible, those numbers are processed in some way to produce non-repeated set of numbers. Successive processes are then performed on the 2-D key matrix including permutation, repetition, and diffusion. Initially, the values of numbers in both rows and columns are permuted for specified number of stages. After that, XOR operations are carried out between original image data and key matrix values. The resulted chaotic key image with flatten histogram represents the key to encrypt any secret data file.
Chaotic Theory
The security of information becomes a significant concern for all internet users. Hence, the data to be shared between any entities must be secure using an encryption technique prior to transmission.
Actually, a faster and secure technique that offers good ciphers is a chaoticsystem that has many attractive features for secure communications. Chaos theory has been used widely in modern cryptosystems. Its behavior exhibits/creates a random sequence, which is considered as a benefit of using chaos efficiently for real-time multimedia encryption instead of the traditional DES, RSA, AES, … etc [13][14][15] . Chaos sequences are generated through the use of iterative equations, +1 = f ( ), that will be produced randomly, aperiodic and unpredictable. Furthermore, it is important to mention that any little changes in the starting conditions will cause widely varying outcome sequences (i.e., very sensitive to initial conditions). The equations below are used to express it mathematically: , = 0, 1, 2, … ………….. 1 A property of the logistic map can be exhibited when the chaotic parameter (r) value ranges from 3.5 to 4, such that its behavior depends on r's value; otherwise, the values will instantly leave the interval [0, 1] when r exceeds 4. Eq.2 -Eq.4, are the results of substituting the values of n into Eq.1 13-16, 17, 18 :
The Proposed Cryptosystem
The proposed cryptosystem is composed mainly of sender and recipient components. Fig.1 displays the block diagram of the sender part. Conversely, the recipient reconstructs the original secret file by performing the decryption processes in reverse order. The system is implemented using Microsoft Visual C#.net 2019 and its performance was
Figure 1. The Block diagram of the proposed Encryption Technique
The idea is based originally on generating a cipher image by exploiting the properties of randomness from chaotic-maps, as indicated in Eq.1, which will be considered as a key to encrypt the plain data (i.e., secret file). The system selects randomly the initial parameters of the logistic map ( 0 , ), such that: 0 ∈ [0, 1] and ∈ [0, 4] in order to create the random sequence that was used in some manipulation to produce the values and ranges whose length is the same as image rows and columns dimensions, which will constitute the 2-D key matrix that will be generated after the extending of key Row and key column to a full keys of non-repeated values. Then, there is a re-permute to the 2-D key matrix according to the number of stages prior to XOR operations that are performed between pixels image data and its equivalent 2-D key matrix so as to obtain an encrypted key image. It is worthwhile that this step is performed in "offline" mode by the system which saves computational time and avoid delay. To convert the selected secret file (plain-form data) of any extension into a meaningless cipher form, initially the sender must follow the sequence of steps indicated in Algorithm 1 to generate the key image.
Algorithm 1. The Proposed Key Image Generation
Input : Image Img[H, W], H refer to height and W for the width; Number of Stages (nStages). Output: Encrypted Key Image Generation file whose data denoted by EncK[H, W]. a) The proposed system randomly select initial chaotic parameters ( 0 , ) such that 0.5 ≤ 0 ≤ 1.0 3.5 ≤ ≤ 4.0. b) Chaotic sequence values (VL) and ranges (RG) are generated according to a pre-specified range length. c) Convert VL & RG into a set of 1-D Vectors of non-repeated values after processing their floating-numbers (i.e., 4
908
The resulted key image will be used to encrypt the selected secret file through another bitwise XOR processes in order to make the confusion more powerful. Then, the 4-tuples header information (keyimageID, x0, r, nStages) which is composed of 13-bytes (i.e., 4-bytes represent the ID of selected key image from a set of common database images between sender and recipient, 4bytes is dedicated for each floating-point chaos initial parameters (x0, r) respectively, and the last byte for the number of stages allocated for permutation in the encryption phase) will be concatenated with the encrypted secret file to be sent to the intended recipient. The security of the suggested cryptosystem approach depends on a set of factors (most of them are produced arbitrarily), as indicated in the following points: The random selection of initial chaotic parameters ( 0 , ) by the computerized program. The window-form program for the proposed encryption system relating to the sender is illustrated in Fig.2, with all parameters utilized to convert a secret image into non-interpreted form.
Figure 2. The Window-Form of the proposed Encryption system
In order to reconstruct the plain form of original secret file by the intended recipient over the communication channel, decryption processes must be performed in reverse way of what has been implemented during the stages of encryption; such that the 13 bytes header information will be extracted first from the encrypted secret file to obtain the 4-tuples (keyimageID, x0, r, nStages). Then, the same key image, chaotic sequence values besides ranges, and 2D key matrix permutation are all generated by the system accordingly. Consequently, the encrypted key image will be generated by following the same procedure stated in Algorithm 1 which will be diffused with the encrypted secret file to produce the original secret file.
Security Measurements
In this work some of security measurements will be applied to compute efficiency of the proposed encryption algorithm. where H represents the height and W is the width of the image.
b) Peak Signal to Noise Ratio (PSNR):
this measurement is utilized to assess an enciphering scheme, which points to the changes in pixel values between plain and cipher images. The lower value of PSNR represents better enciphering quality 10,13,[18][19][20][21] .
) / log (10) …….. 6 c) Correlation Coefficients: in a plain image, pixels are usually very correlated with their adjacent pixels in any direction, while the relationship between neighboring pixels in an enciphered image should be as low as possible in order to resist correlation analysis. The correlation coefficients are calculated as follows: where Corr. refer to the correlation, X is the plain image and its mean is X ̅ , while Y is the cipher image and its mean is Y ̅ . The association image values range from -1 to 1, such that the good encryption should have a correlation value near to 0 10, 13, 21-26 .
d) Information
Entropy: it is one of the most significant properties of randomness for an encryption planner analysis. It is the average amount of information content generated by a stochastic source of data. It is computed as shown in the following formula: where p(Xi) is the probability of symbol (Xi) and the entropy is expressed in bits. Ideally, the entropy value of the cipher message should to be close to 8 10,13,[19][20][21][22]24 .
e) Histogram Analysis: to avoid the information leakage for an intruder when encrypting an image, it must be made sure that the original and cipher images do not have any statistical similarity. Histogram distribute pixel values in a 2D graphical representation, the vertical axis represent the frequency of each color intensity level (L range from 0 to 255) occurred in image. The more flatten histogram indicates a higher encryption technique 13,[20][21][22][23][24] .
Results and Discussion:
To evaluate the efficiency of the proposed encryption technique according to the above security measurements, a number of experimental tests were performed, as shown in Tables.1-3, on more than 500 color/gray image samples collected from different color maps data sets (including: CSet8, CBSD68, Set12, kodak24 …etc) of various dimensions and format types. From the experimental tests in the above Table, it is shown that the optimal results are obtained when the number of stages equal 3 that produce higher MSE = 9032.596 and lower PSNR = 8.632. Therefore, it will be relied upon in the following tests.
The efficiency of the proposed encryption algorithm needs to be investigated according to different image samples of various types and dimensions, as illustrated in Table.3 From the results demonstrated in the above Table, it is clearly indicated that the histograms of cipher images are very flatten compared with that for original images, and the average of other measures are very promising, such that the average MSE is very high (10115.48) and the average PSNR is very low (8.17), while correlation near zero and the average Entropy of the cipher image become near 8 (7.9975). The overall randomness of cipher images can be revealed by information entropy measure, which is used to compare between the efficiency of the proposed method with that in previous related work 1 for the same image samples, as stated in Table .4.
Conclusions:
This research introduces an extremely secure data cryptosystem based on chaotic logistic map which is used to generate a 2D key-matrix image to encrypt any types of files and information stored in database or sent through a communication channel, and thus nullifies the value of interception. The originated key matrix offers an unbounded number of probabilities that make the process of brute-force attacks impossible. The security of algorithm depends on many factors: the initial control parameters selected randomly by the system to generate the chaotic ranges, the applied mechanism to derive set of unrepeated numbers from range values, the adopted map that convert the 1D chaotic values into 2D key matrix, advancing the number of stages to some extent on 2D key matrix transposition made significantly the key more secure, furthermore the diffusion processes that are performed with a private image selected by the sender.
The experimental results prove the efficiency of the proposed system when tested on more than 500 image samples of various dimensions and types according to many valuable indicators, in terms of hiding the distinctive marks in the histogram of ciphered images (become flatten) when compared with that of their original context, beside that the other security measurements are very hopeful where a highly average of MSE for cipher images is obtained greater than 10000 (i.e. the greater MSE indicators lead to higher quality and strength of the encryption method), whereas the mean PSNR is very low and appropriate (8.17), with correlation near zero and optimal entropy close to 8 (7.9975). Furthermore, the average entropy value of the proposed algorithm method achieves better results when compared with previous related works for a unified data set. | 3,872.4 | 2022-01-20T00:00:00.000 | [
"Computer Science"
] |
Data on step-by-step atomic force microscopy monitoring of changes occurring in single melanoma cells undergoing ToF SIMS specialized sample preparation protocol
Data included in this article are associated with the research article entitled ‘Protocol of single cells preparation for time-of-flight secondary ion mass spectrometry’ (Bobrowska et al., 2016 in press) [1]. This data file contains topography images of single melanoma cells recorded using atomic force microscopy (AFM). Single cells cultured on glass surface were subjected to the proposed sample preparation protocol applied to prepare biological samples for time-of-flight secondary ion mass spectrometry (ToF SIMS) measurements. AFM images were collected step-by-step for the single cell, after each step of the proposed preparation protocol. It consists of four main parts: (i) paraformaldehyde fixation, (ii) salt removal, (iii) dehydrating, and (iv) sample drying. In total 13 steps are required, starting from imaging of a living cell in a culture medium and ending up at images of a dried cell in the air. The protocol was applied to melanoma cells from two cell lines, namely, WM115 melanoma cells originated from primary melanoma site and WM266-4 ones being the metastasis of WM115 cells to skin.
Specifications
Physics, Chemistry, Biology More specific subject area
Biophysics, Biochemistry
Type of data Figures How data was acquired Atomic force microscope (AFM): XE120 Park Systems. Surface images (topography and deflection modes and 3D representation of topography) recorded by Xe120 system (saved originally in tiff format) Data format Analyzed Experimental factors Living melanoma cells were cultured on the glass coverslips, placed in a Petri dish, and immersed in a culture medium. Such samples were further used for the AFM measurements. Experimental features AFM images of surface topography of single melanoma cells were measured after each step of the applied preparation protocol. The AFM topography was acquired for the same cell.
Data accessibility
Data are provided with this article
Value of the data
The data presented here essentially show changes on a cell surface upon application of the ToF SIMS specific sample preparation protocol.
Recorded images enable to estimate the effect of fixation, salt removal, dehydration, and air-drying on surface topography and morphology of single cells.
The proposed protocol for sample preparation can be applied to prepare single cells cultured on bare silicon surface for ToF SIMS measurements delivering mass spectra of single cells.
Data
The data presented below, show an exemplary sequence of the surface topography and deflection ("error") images (a & b) recorded using an atomic force microscope (model: Xe120, from Park Systems, Korea) for a single cell after each step of the applied ToF SIMS specific sample preparation protocol, accompanied by a 3D representation of a cell surface (c). The protocol involve four main stages, i.e. fixation, salt removal, dehydration and air-drying. The recorded data show changes on the The AFM images presented below, show surface changes of single WM266-4 cells acquired after each step of the ToF SIMS specialized sample preparation protocol. The presented protocol enables to record mass spectra (ToF SIMS) on single cells [1,2] (Figs. 2.1-2.13).
Cell culture
WM115 (skin primary tumor site) and WM266-4 (metastasis to skin) melanoma cells were cultured in the RPMI-1640 medium (Sigma-Aldrich) supplemented with 10% fetal bovine serum (FBS, Sigma-Aldrich). The cells were grown in 25 cm 2 culture flasks (Sarstedt) in the incubator (NuAire) (37°C and a 95% air/5% CO 2 atmosphere). After several passages (6-8), cells were seeded on clean and sterile glass coverslips or silicon substrates, placed in the Petri dishes (Sarstedt), and further cultured for 48 h in the corresponding media and culture conditions. For the AFM topography measurements cells were cultured directly on a bare glass coverslips while for ToF SIMS experiments they were grown on bare silicon substrates.
ToF SIMS specific protocol
The protocol for single cell preparation for ToF SIMS experiments follows through the following steps (step number correlates with image number): 1. Living cell cultured on bare silicon surface (for AFM measurementson glass surface). 2. Fixation with paraformaldehyde. 3. Rinsing with 50% phosphate buffered saline solution (PBS). 4. Rinsing with 25% PBS.
Atomic force microscopy
Topography of cells' surface was recorded after each step of the sample preparation protocol using a commercial AFM set-up working in contact mode (model Xe120, Park Systems, Korea). Images were obtained using V-shaped silicon nitride cantilevers, characterized by nominal spring constant of 0.03 N/m (PNP-TR-customized, Nanoworld). The set point ranged from 0.2 nN to 0.8 nN (adjusted during the topography acquisition) while the scan rate range was set between 0.3 Hz and 1 Hz depending on the scan size. In the presented measurements, glass coverslips with living cells, placed in the Petri dish and immersed in a culture medium (RPMI-1640 culture media supplemented with 1% HEPES (pH 7.5)), were mounted on the top of a piezoelectric scanner. First, an image of a single alive cell was recorded in the culture medium. Then, images of the same cell were collected after each steps of the proposed protocol for single cells' preparation for ToF SIMS spectroscopy. In total, a sequence of thirteen images was collected. Images were analyzed using XEI (dedicated AFM software provided by Park Systems) and WSxM 5.0 software [3], only the contrast and slope of the images were adjusted when necessary. | 1,209.6 | 2016-08-03T00:00:00.000 | [
"Biology",
"Materials Science"
] |
Design of experiments to assess the effect of culture parameters on the osteogenic differentiation of human adipose stromal cells
Background Human adipose-derived stromal cells (hASCs) have been gaining increasing popularity in regenerative medicine thanks to their multipotency, ease of collection, and efficient culture. Similarly to other stromal cells, their function is particularly sensitive to the culture conditions, including the composition of the culture medium. Given the large number of parameters that can play a role in their specification, the rapid assessment would be beneficial to allow the optimization of their culture parameters. Method Herein we used the design of experiments (DOE) method to rapidly screen the influence and relevance of several culture parameters on the osteogenic differentiation of hASCs. Specifically, seven cell culture parameters were selected for this study based on a literature review. These parameters included the source of hASCs (the different providers having different methods for processing the cells prior to their external use), the source of serum (fetal bovine serum vs. human platelet lysate), and several soluble osteoinductive factors, including dexamethasone and a potent growth factor, the bone morphogenetic protein-9 (BMP-9). The expression of alkaline phosphatase was quantified as a readout for the osteogenic differentiation of hASCs. Results The DOE analysis enabled to classify the seven studied parameters according to their relative influence on the osteogenic differentiation of hASCs. Notably, the source of serum was found to have a major effect on the osteogenic differentiation of hASCs as well as their origin (different providers) and the presence of L-ascorbate-2-phosphate and BMP-9. Conclusion The DOE-based screening is a valuable approach for the classification of the impact of several cell culture parameters on the osteogenic differentiation of hASCs. Electronic supplementary material The online version of this article (10.1186/s13287-019-1333-7) contains supplementary material, which is available to authorized users.
Background
Human adipose-derived stromal cells (hASCs) are an attractive candidate for a large variety of cell-based therapies in regenerative medicine [1]. One of their most promising applications is in the regeneration of bone tissues [2]. This application stems from their ability to undergo osteogenic differentiation, in which they exhibit osteoblast-like function leading to the deposition of an extracellular matrix with its subsequent mineralization [3]. Accumulating research has shown that hASCs, like many other mesenchymal stromal cells, are highly sensitive to culture conditions which affect their various cell functions, including osteogenic differentiation [1]. This is particularly important as researchers are paying increasingly close attention to the origins of biochemicals used in order to comply with Good Manufacturing Practices (GMP) and facilitate clinical translation. For this reason, significant efforts are being dedicated to replacing xenogeneic cell culture products with their allogeneic counterparts. One example is the replacement of fetal bovine serum (FBS) with human platelet lysate (hPL) [4]. Given the sensitivity of the cells to the culture conditions, such replacements must be studied in detail to ensure that they do not alter cellular functions. Significant advantages conferred on hASCs by the use of hPL, e.g., increased proliferative potential [5] and improved chromosomal stability [6], have already led to its use as the serum supplement, including in our previous study [7]. However, some reports have indicated that hPL might lead to the spontaneous expression of alkaline phosphatase (ALP), an important marker of the osteogenic differentiation [8]. This induction occurred in the absence of any other osteoinductive supplementation, resulting in compromised negative controls.
In order to elucidate the effect of hPL on the osteogenic differentiation of hASCs, in addition to other culture parameters such as the source of stem cells (different providers having different processing methods), seeding density, and various medium components, we applied the concept of design of experiments (DOE). This statistical approach allows to describe and model the variation of a set of readouts based on input variables. One of its advantages is the ability to provide descriptive assessments using the minimally required number of experimental conditions [9]. Importantly, by targeting several variables at a time, DOE can help in identifying important interactions that may have been missed when analyzing these variables separately. Besides, the methodology provided by DOE ensures that the experiments are made in a statistically equilibrated manner within the selected working domains of the variable parameters. After specifying the range of variability for each variable, which may be discrete or continuous, it generates an experimental matrix with the minimum number of experimental conditions to be analyzed. Regularly used in chemical and mechanical engineering for process optimization and predictive modeling, DOE is still relatively uncommon in the fields of cell biology and bioengineering. Very few studies use DOE applied to biological questions. DOE was used to optimize the culture medium composition for the expansion of human pluripotent stem cells [10] and the design of hydrogel substrates for their neurogenic differentiation [11]. To the best of our knowledge, this approach has not yet been applied to analyze culture parameters for the osteogenic differentiation of mesenchymal stromal cells.
The goal of the present study was to apply the DOE approach to rank the aforementioned variables for their contribution to maximizing the osteogenic differentiation of hASCs. Given that the main interest was in ranking individual variables, a fractional factorial design, which omits intervariable interactions in order to further reduce the number of required conditions, was employed. One such design, known as Plackett-Burman design [12], uses a Hadamard matrix to define this minimal number of runs. This number is the minimal number of experimental conditions tested in order to determine a linear model without interaction between the variables. The aim of this model is to assess the impact of the different variable parameters on the outcome (e.g., measured biochemical signal) and to verify that the variation induced by a single parameter is larger than the experimental uncertainty.
As negative and positive controls are essential for an accurate analysis, extra conditions corresponding to such controls were added to the matrix. To allow for the rapid assessment, the expression of ALP, a common early marker for osteogenic differentiation, was set as the sole target response for both analyses. It was analyzed using a standard colorimetric assay and quantified using absorbance detection [13].
Methods
All reagents and products were purchased from Sigma-Aldrich and Thermo Fisher Scientific unless stated otherwise.
Selection of hASC culture parameters
Following a literature review, the focus was given to eight variables that are considered to wield the most influence on the osteogenic differentiation of hASCs ( Table 1). The first of these is the source of stem cells as several publications have shown that hASCs comprise heterogeneous populations with differential capacities for osteogenic differentiation [14]. To this end, we used hASCs supplied by Zen-Bio (hASC-ZB), Inc. and Établissement Français du Sang (hASC-EFS). The surface markers reported for both types of cells are described in Additional file 1: Section 1. The second variable is the seeding density as it has been shown to affect the proliferative capacity of hASCs [5]. Given the importance of culture medium on resulting hASC function, both the base medium and the serum [8] were chosen as the other variables. Particularly, we focused on Dulbecco's modified Eagle's medium (DMEM) with and without Ham's F-12 for the base culture medium and xenogeneic FBS and allogeneic hPL for the serum. As hASCs normally require supplementation [15] to undergo osteogenic differentiation, most frequently with dexamethasone, L-ascorbate-2-phosphate, and ß-glycerophosphate [16], they were included in the list of variables as well. Finally, bone morphogenetic proteins (BMPs) represent an important class of osteoinductive growth factors to drive the osteogenic differentiation of hASCs [17]. While BMP-2 and BMP-7 are commonly used for this purpose, a recent study has shown that BMP-9 may be more osteoinductive toward hASCs [18]. Thus, including BMP-9 as one of the variables might help in further elucidating its effect on the osteogenic differentiation of hASCs.
Construction of the experimental matrix and the DOE analysis
The 12 × 12 Hadamard matrix was used to accommodate the eight target variables, which allows to reduce the total number of conditions from 256 (2 8 , full factorial design) to 12 (fractional factorial design) [19]. This number of conditions is sufficient to obtain the coefficients of the linear model without interaction between the variables, in order to assess the impact of the different variables and to rank them according to their impact. Four additional conditions were added to the matrix to include the negative and positive controls for each of the hASC source. The resulting experimental matrix contained 16 conditions (Table 2) in total with an assigned value for each of the eight variables. The range of variabilities, particularly for numerical variables such as the seeding density or the concentrations of medium components, were approximated to those most commonly used in literature. Two types of variables were used in the construction of the experimental matrix: discreet variables such as hASC source, base culture medium, and serum source and continuous variables such as different medium supplements and osteoinductive factors, whose concentration can be controlled in a continuous fashion. Table 1 Target variables used for the screening of culture parameters for the osteogenic differentiation of hASCs. hASC source and seeding density were chosen as factors related to the stromal cells; base medium, serum, L-ascorbate-2-phosphate, ßglycerophosphate, dexamethasone, and BMP-9 were chosen as factors related to the medium. The factor variability was set between a certain range based on the values commonly used in literature Table 2 The parameters used to generate the experimental table. In total, 16 conditions were investigated to screen culture parameters for their effect on osteogenic differentiation. hASC human adipose-derived stromal cell, AP L-ascorbate-2-phospate, ßGP ß-glycerophosphate, DEX dexamethasone, BMP-9 bone morphogenetic protein-9. + , − refer to the four added positive and negative controls Differentiation assays involved two steps: hASC expansion in a growth medium (GM) to obtain their sufficient numbers and the subsequent differentiation in a relevant osteogenic medium (OM). The aforementioned controls were used to confirm two things: the capacity of hASCs in question to undergo osteogenic differentiation (positive control, usually carried out through osteogenic induction in OM) and the fact that such differentiation is not spontaneous or self-induced (negative control, carried out by maintaining hASCs in GM).
hASC expansion and seeding
The influence of the cell source was studied by using hASCs from either Zen-Bio, Inc. (hASC-ZB) or the Établissement Français du Sang (hASC-EFS). Both cell lines were used before the 5th passage and cultured in FBS-based GM [DMEM + 10% FBS + 1% penicillin/streptomycin] with medium changes every 2 days. Upon seeding within 96-well cell-adherent microplates (Greiner Bio-One), hASCs were left undisturbed for a day to allow their attachment.
Pre-screening of the serum used during hASC expansion for the fidelity of negative and positive differentiation controls Prior to launching the DOE analysis, the effect of FBS and hPL during hASC expansion on the ALP expression was analyzed to assess the fidelity of negative and positive controls. For this purpose, hASC-EFS were expanded in either FBS-based GM or hPL-based GM [DMEM + 5% hPL (Cook Regentec) + 1% penicillin/streptomycin]. 10,000 hASCs/well were seeded, followed by their osteogenic differentiation with the corresponding OM [GM + 100 nM dexamethasone + 50 μM L-ascorbate-2-phosphate + 10 mM ß-glycerophosphate]. hASCs expanded and maintained in FBS-and hPL-based GM served as negative controls.
hASC differentiation according to DOE conditions hASC-EFS and hASC-ZB, expanded in FBS-based GM, were seeded at 2000, 6000, or 10,000 cells/well within the microplates with subsequent induction with the corresponding medium for osteogenic differentiation based on the conditions defined in Table 2. In total, 16 different medium formulations were prepared with the BMP-9 (PeproTech, 95% purity) added to the medium at the last minute to achieve the final concentration.
Analysis of the osteogenic differentiation of hASCs via ALP staining
The alkaline phosphatase (ALP) expression was analyzed after the osteogenic induction. In general, the analysis of ALP expression can be carried out through two complimentary methods, ALP staining [13] and ALP enzymatic activity [20]. Both methods which are colorimetric in nature were tested (Additional file 1: Section 2); however, the results of the analysis show that the ALP staining generated more reproducible readings as judged by the standard deviation (error bars) between technical replicates (Additional file 1: Figure S1). Moreover, ALP staining requires less sample manipulation and fewer steps than the enzymatic assay, which makes it attractive for the purposes of rapid and high-throughput analyses [13]. It is also much better adapted to the small working volumes of a 96-well cell culture microplate generating more reproducible readings (Additional file 1: Section 2). Therefore, ALP staining method was used to measure the ALP expression after osteogenic induction. Moreover, day 7 was chosen as ALP staining was sufficiently pronounced at this time point with cell layer detachment that was observed for later time points at 10 and 14 days (Additional file 1: Section 3 and Additional file 1: Figure S2). For the ALP staining, hASC-containing wells were washed twice with phosphate-buffered saline (PBS), fixed with formaldehyde (3.7% in PBS) for 20 min at room temperature and rinsed twice with PBS. The fixed cells were then incubated with the ALP staining solution (Leukocyte Alkaline Phosphatase kit, 120 μL/well) for 30 min at 37°C and rinsed with PBS. Once dry, the images of the stained wells were taken with a scanner (Epson V600) and their absorbance at 570 nm was quantified by taking measurements over the entire area of each well at 11 × 11 positions using a microplate reader (TECAN Infinite M1000). Data were pooled from three biological experiments with three technical replicates for each experimental condition.
Statistical analysis
Design-Expert® 11 (Stat-Ease), a versatile and commonly used software for DOE analyses, was used to encode the experimental table generated (
Results
Effect of the serum used during hASC expansion on the fidelity of negative and positive differentiation controls To study whether the serum used during hASC expansion might inadvertently affect the subsequent osteogenic differentiation, hASC-EFS were expanded for 2 passages in either FBS-or hPL-based GM and then either maintained in these media or osteogenically induced in FBS-or hPLbased OM (Fig. 1). As expected, FBS-expanded hASC-EFS that were either maintained or osteogenically induced in FBS-based media confirmed the fidelity of both positive and negative controls with a sparse coloration in FBSbased GM and a strong coloration in FBS-based OM. When the same cells were maintained or osteogenically induced in hPL-based media, both controls showed an equally strong coloration, revealing a compromised negative control. hPL-expanded hASC-EFS that were maintained in FBS-based GM failed to validate the fidelity of the negative control as in both replicates the cell layer became completely detached prior to the ALP staining. A cell layer was similarly compromised when the same cells were maintained in hPL-based GM and the remaining adherent cells showed a strong coloration, thus invalidating the negative control. Both of the positive controls for hPL-expanded hASC-EFS, FBS-and hPL-based OM, showed strong colorations as expected. Thus, due to the false negative controls in hPL-expanded hASC-EFS, further DOE analyses were performed exclusively with FBSexpanded hASCs.
hASC differentiation according to DOE conditions FBS-expanded hASC-ZB and hASC-EFS were seeded at a given density and induced with one of the corresponding induction media (Table 2) for a week before the ALP staining (Fig. 2a). Visually, there were marked differences between different conditions within the same biological replicate which was further confirmed by the quantification of the ALP staining using absorbance (Fig. 2b). While the absolute values for the quantified ALP staining differed between the biological replicates, the overall trends were consistent across all three. The average experimental error between them was calculated to equal 4.1%.
Influence of hASC culture parameters on osteogenic differentiation via DOE analysis
The quantified results from the ALP staining (Fig. 2b) were encoded into Design-Expert® and analyzed to decode the Fig. 1 Pre-screening of the effect of the sera used during hASC expansion on the validity of controls in the subsequent osteogenic differentiation. hASC-EFS were expanded in either FBS-(blue) or hPL-based (orange) GM, followed by their differentiation in respective OM. For differentiation, both the expectations for the ALP staining and the actual staining results are shown. The latter are representative of 1 biological with 2 technical replicates for each condition. "*" denotes partial or complete cell layer detachment relative contributions of the variables toward maximizing the ALP expression (Fig. 3). As mentioned previously, only single-variable contributions were considered. This is the main reason why the total sum of these contributions does not add up to 100% as intervariable interactions have not been accounted for. The analysis revealed that the variables had either a net positive or a net negative contribution toward maximizing the ALP expression. hASC source, seeding density, dexamethasone (DEX), and BMP-9 showed a net positive contribution, i.e., their (+) variability led to a higher ALP expression. On the other hand, the medium, serum, L-ascorbate-2-phospate (AP), and ßGP showed a net negative contribution, i.e., their (+) variability led to a lower ALP expression. To provide a statistically relevant ranking of these variables, the previously estimated experimental error between the biological replicates (4.1%) was used as the significance limit. In this case, the decreasing order of the contributions to the ALP expression was as follows: (1) BMP-9, (2) serum, (3) hASC source, (4) AP, (5) DEX, and (6) seeding density.
Discussion
Compromised negative controls for hASC differentiation due to hPL To study whether hPL used during hASC expansion affects the eventual fidelity of both negative and positive controls for their osteogenic differentiation, hASC-EFS were expanded for 2 passages in either FBS-or hPLbased GM and then induced to undergo osteogenic differentiation. The ALP staining results point to the fact that the use of hPL, either during expansion or differentiation, likely results to compromised negative controls (Fig. 1). Even at a relatively low supplementation of 5%, hPL was able to induce spontaneous ALP expression of FBS-expanded hASCs maintained in hPL-based GM.
While further assays and more biological replicates are needed to derive definitive conclusions, these results nevertheless provide further support to the previous studies that reported similar findings [8,12,13]. Surprisingly, human bone marrow mesenchymal stromal cells, cultured in a similar medium, did not show the false negative controls reported for hASCs, which highlight differences between these two cell types. In our study, the intensity of the coloration was visibly higher in hASC-EFS that came into contact with hPL as compared to those that were expanded and differentiated in FBSbased media. One likely explanation for the osteoinductivity of hPL is its composition. hPL, namely human platelet lysate, is obtained by lysing platelets found in human blood and is full of bioactive proteins and growth factors, including osteogenic ones such as BMPs and platelet-derived growth factors (PDGFs). While its precise composition is unknown, it may contain sufficient amounts of the latter to induce the osteogenic differentiation of hASCs. It may be interesting to further investigate the precise role of hPL on hASC cell differentiation, since hASC are currently explored in clinical trials as alternative to mesenchymal stromal cells. A full characterization including quantitative polymerase chain reaction (qPCR) of relevant marker genes like ALP, runtrelated transcription factor 2 (RUNX2), and osterix (OSX) up to matrix mineralization may be needed prior to their use for in vivo assays.
Confirmation of the differential importance of culture parameters via DOE analysis
Given that hPL-expanded hASC-EFS led to false negative controls, only FBS-expanded hASCs were used for the subsequent DOE analysis. Its results confirmed the high sensitivity of hASCs, namely the expression of ALP, toward different culture parameters. The intensity of the ALP staining varied widely and some conditions like n°1 1 even led to the detachment of the cell layer (Fig. 2a).
The quantified values of the ALP staining for each of the conditions (Fig. 2b) were then encoded within the generated table (Table 2) using Design-Expert®. While it allows many different modes of analysis, this work focused on the individual ranking of single variables for their effect in maximizing the ALP expression (Fig. 3). BMP-9 had the largest contribution to the ALP expression in hASCs, which confirmed its potential as an osteoinductive factor for their osteogenic differentiation. Previous studies have identified its role in adipogenesis [21] while others pointed to its potentially higher osteoinductivity to hASCs compared BMP-2 and BMP-7, which are more commonly applied for such a purpose. The results of this study corroborate the latter findings. The second most important variable for maximizing ALP expression was the serum. In accordance with the previously mentioned results (cf. Section "Compromised negative controls for hASC differentiation due to hPL"), hPL induced higher ALP expression than FBS, which is confirmed by the net negative contribution of this variable. hASC source was the third most important factor, with hASC-EFS [our (+) variability for this variable] contributing positively to higher ALP expression compared to hASC-ZB. It is difficult to make any definitive conclusions for why this is may be due to the fact that hPL is used in hASC-EFS isolation and processing protocols as the serum supplement, in large part to avoid the disadvantages of the FBS. hASCs may become pre-differentiated when processed and passaged in an hPL-based medium, thus leading to higher ALP expression. Interestingly, AP was the fourth important variable with its absence from the medium leading to a higher ALP expression. AP is an integral part of standard OM formulations [7], mainly responsible for the deposition of the extracellular matrix. Why its presence had an inhibitory effect on the extent of ALP expression is unknown but warrants a further study. DEX, another common OM component, was the fifth variable in terms of its influence. It is considered to be highly osteoinductive [22]; however, its effect on hASCs is varied as it makes part of the media used for their adipogenic differentiation as well [21]. This might explain why its contribution was lower than that of BMP-9. Finally, the seeding density was the sixth most important variable for maximizing ALP expression. This makes intuitive sense as higher seeding densities mean higher numbers of cells leading to higher amounts of ALP to stain in terms of its absolute quantities. While our analysis was focused on these several culture parameters, the application of DOE might be used to study also how hASC function depends on the known variabilities related to donor characteristics, collection method, and passage number [23].
Conclusions
The DOE approach was applied to identify and rank important culture parameters that affect the ALP expression in order to optimize the osteogenic differentiation of hASCs. Our preliminary results show that, among the selected culture parameters, BMP-9, serum, AP, hASC source, DEX, and seeding density are important culture parameters that affect the osteogenic differentiation of hASCs. However, hPL may compromise hASCs stemness which merits a further in-depth investigation with qPCR for the expression of relevant genes and other established markers for multi-lineage differentiation. Thus, DOE is a versatile tool for the rapid pre-screening of culture parameters to identify the most important parameters in order to judiciously optimize culture conditions for a specific purpose. The combination of DOE with automated high-throughput screening methods such as high-content analysis could further improve the rapidity and fidelity of DOE analyses as they would allow to expand the analyzable parameter space and concurrently focus on several readouts. | 5,391.4 | 2019-08-14T00:00:00.000 | [
"Biology",
"Medicine"
] |
Modelling large timescale and small timescale service variability
The performance of service units may depend on various randomly changing environmental effects. It is quite often the case that these effects vary on different timescales. In this paper, we consider small and large scale (short and long term) service variability, where the short term variability affects the instantaneous service speed of the service unit and a modulating background Markov chain characterizes the long term effect. The main modelling challenge in this work is that the considered small and long term variation results in randomness along different axes: short term variability along the time axis and long term variability along the work axis. We present a simulation approach and an explicit analytic formula for the service time distribution in the double transform domain that allows for the efficient computation of service time moments. Finally, we compare the simulation results with analytic ones.
Introduction
Service speed variability is a problem that has been observed in many practical application scenarios. For example, in Kimber and Daly (1986), it has been observed for vehicular traffic. More recently this problem has been recognized in data centers (Guo et al. 2014). The effect of variability was also studied in Anjum and Perros (2015) with application to video-streaming. Most of the previous literature, however, focused only on large-timescale variability, where Markov-modulating models represent the random fluctuations of the environment. These set of models are commonly referred to as reward models and have been studied for a long time (Howard 1971).
The variation in the service speed can be modelled by dividing the jobs into "infinitesimal quantities of work to be done" and considering the "speed at which this infinitesimal work is performed", i.e., the random amount of time needed to execute the infinitesimal amount of work. Then, once a model defines how speed changes over time, the complete system can be modelled in a straight-forward way where the amount of work increases gradually along the analysis and the time required to execute the given amount of work is a random process.
If the service process depends on a time-dependent random process, e.g., on a modulating background continuous time Markov chain (CTMC) representing the environmental state, whose "clock" evolves according to the time, then the natural performance analysis is based on the gradually increasing time and the randomly varying time dependent environment state.
However, in many real applications, variability is not easily predictable and works at different timescales. Modulating CTMCs (whose "clock" evolves according to the time) works very well to model variability where the parameters of the job execution remain constant for a longer random period of time, and there are few jumps during the execution of one job. Apart from this large scale variability, in this work, we also focus on variability that occurs at much smaller timescales, where the execution speed changes thousands, if not millions, of times during the execution of the main job, and combine it with the more classical modulation that works on a larger timescale.
The remainder of this paper is structured as follows. In Sect. 2 we start by considering only the small timescale variability. In Sect. 3 we additionally introduce also the large timescale variability. Section 4 is devoted to the mathematical analysis of the obtained small and large timescale system. The effects of the considered variability is studied in Sect. 5 through numerical examples, and Sect. 6 concludes the paper.
Small timescale variability
In this section, we omit the large timescale variability and instead focus only on small timescale variability. So we assume that the environmental state is unchanged for now.
We introduce a second order fluid model for the short timescale variability: assuming that a job is composed of quantums of size x, each such quantum is served in a random amount of time with distribution N (μ x, σ 2 x) (with μ > 0). Assuming that the service times of the different quantums are independent, the progress of service is modeled by a Brownian motion X (w) with parameters μ and σ 2 . We emphasize that in this model, the Brownian motion corresponds to the time required to service a job as a function of the size of the job (see Fig. 1). A job of size x thus requires a random time T with distribution N (μx, σ 2 x), whose probability density function is The assumption that T may take negative values does not make sense physically. However, due to μ > 0, for macroscopic job sizes, the probability of T < 0 is negligible, so the proposed mathematical model is a close approximation of the physical system. In the mathematical analysis, T < 0 does not cause any issues, and the performance measures of interest can be calculated accurately. The can be expressed as a polynomial in xμ and xσ 2 where the coefficients u k, j are such that u k, j = 0 if k + j is odd. Note that a Brownian motion may take negative values as well, which does not make sense physically, but, since μ > 0, for macroscopic values of w, the probability that T is negative is negligible.
We focus on the service of a job in a queue whose work requirement, W , is generally distributed according to probability density function f W (x).
Using the second order fluid model assumption, the probability density function of the service time of a job, denoted by f T (t), can be computed as: (2)
Moments of the scaled distribution
There are also some interesting relations between the moments of W , the moments of T and the parameters μ and σ 2 . In particular, the k-th moment of T can be expressed as: Since E[N (xμ, xσ 2 ) k ] can be expressed as a polynomial in xμ and xσ 2 with coefficients u k, j according to (1), we can compute the moments of T as: Since u k, j = 0 only when k + j is even, E[W k+ j 2 ] is always an integer moment of W . For example, for the first and second moment of T we have:
Combining large and small timescale variability
Large scale variability can be considered using a discrete state continuous time Markov modulating process (MMP), denoted by M(t). We assume the MMP is a continuous time Markov chain (CTMC) on a finite state space with infinitesimal generator matrix Q. In state i, the service is characterised by rate μ i and variance σ 2 i . Only considering large scale variability (that is, assuming σ i ≡ 0, ∀i) would lead to a standard Markov reward model. However, including small-scale variability makes for an interesting and complex model.
Assume that a job of size W = u starts service at time t = 0, with the MMP M(t) in state i. Then the evolution of the service time X (w), 0 ≤ w ≤ u as a function of the job size is the following: -Let a 1 denote the time of the first transition of M(t). As long as X (w) is smaller than a 1 , X (w) evolves according to a Brownian motion with parameters μ i and σ 2 i [denoted by BM(μ i , σ 2 i )]. -At time a 1 , M(t) changes to some state j. Accordingly, assuming that the first passage of X (w) to a 1 occurs at work amount w 1 , for w ≥ w 1 , X (w) evolves according to a BM(μ j , σ j ) (starting from the point w 1 and from level a 1 ). -This is repeated for further possible transitions of M(t) at times a 2 , a 3 , . . . , up to the point u.
Note that in visualization, the horizontal axis denotes the job size, and the vertical axis denotes time, see Fig. 2. Thus for X (w), the behaviour can be described as a type of level- This model is essentially different from second order Markov-modulated fluid models (also referred to as Markov modulated Brownian motion) (Breuer 2012;Karandikar and Kulkarni 1995). The main difference between the two approaches is that in second order fluid models, it is the amount of work performed per unit of time that is assumed to have normal distribution; in the present paper, it is the amount of time required to perform a unit of work that is assumed to have normal distribution.
Keeping in mind that M(t) is a CTMC, the entire distribution of X (w) is determined by the initial points t = 0 and w = 0 and the initial state of the modulating process M(0) = i. The process X (w) can be simulated as follows: -If W is random, generate the value of W , denoted by u.
-X (w) runs as a BM(μ i , σ 2 i ) until either the value of X (w) reaches a 1 or w reaches u, whichever occurs first.
-If u occurred first, then the simulation is finished.
-If X (w 1 ) = a 1 for some w 1 < W , then we generate the next state j and also the next transition time a 2 according to the MMP M(t), then continue X (w) as a Brownian motion with parameters (μ j , σ 2 j ) starting from the point (w 1 , a 1 ) until either the value of X (w) reaches a 2 or reaches u, whichever occurs first.
-We keep generating new transitions and new Brownian motion sections until we reach u. The service time of the job is T = X (u) = X (W ).
The main question, similar to Sect. 2, is the distribution of T and performance measures derived from T . The main contribution of this paper is the analytical evaluation of the distribution of T in the double transform domain. Several related performance measures can be obtained based on this transform domain description numerically.
The analytical problem can be formulated as the cumulative distribution type functions of the service time (for fixed w) which include information about the initial and final background state of M(t) along with the distribution of the service time. In accordance with the mathematical model, G i j (x, w) is defined for both positive and negative values of x, but G i j (0, w) is typically negligible.
Based on G i j (x, w), the corresponding cumulative distribution function in case of a random W with probability density function f W (w) is The next section provides the mathematical analysis of G i j (x, w).
Job completion in small and large timescale variable environment
Let X (w) denote the time needed to service a job of fixed size w. We aim to analyse the entire process {X (w), w ≥ 0}, and, based on that, derive performance measures for X (w) (for a fixed job size w), and also for X (W ), where W is possibly random.
The system operates in a random environment characterized by the MMP M(t), t ≥ 0, which is a Markov chain with generator Q (and the variable t denotes the time of the MMP). The process X (w) starts from 0 at w = 0. When the MMP is in state i, the main process, X (w), is a Brownian motion with parameters μ i > 0 and σ i > 0 (given for each state i). Whenever the MMP makes a transition at time t = a, i.e., when X (w) reaches level a, the MMP switches to a new state k and the main process continues as a Brownian motion with parameters μ k > 0 and σ k starting at level a. Then the same procedure continues until the job of size u gets completed.
The main process X (w) starts from level 0 at w = 0, i.e., X (0) = 0. We are interested in the distribution of X (u), where u is the size of the workload, and introduce the notation We aim to compute where refers to the double sided Laplace transform and * refers to single sided Laplace transform. (7) is convergent when Re(s) > 0 and |v| is small enough (depending on Re(s), that is, |v| < ε(Re(s)) for some positive function ε(.). Convergence of the inner integral for Re(s) > 0 follows directly from the fact that g i j (x, w) is a probability density function. Convergence in v will be addressed during the proof of Theorem 1, and the function ε(.) is made explicit in (16). We remark that calculating g * i j (v, s) in a region where Re(s) > 0 and |v| is small enough is sufficient for the further calculation of performance measures of interest.
where Q is the generator of the MMP, I is the identity matrix and where furthermore (Re(s) > 0 and |v| is sufficiently small in all formulas.) Proof Let W a be the first passage point along the horizontal axis, where the BM(μ i , σ i ) starting from level 0 reaches level a (a > 0). The CDF and PDF of W a are denoted by f i (a, w) is given explicitly [using Girsanov's theorem and mirror principle, see e.g. Theorem 6.9 in Schilling and Partzsch (2012)] as When the process starts in state i, two things may happen (c.f. Fig. 3): the main process will either reach level a (along the vertical axis) before w (on the horizontal axis), i.e. W a < w, or not. If the main process reaches level a before w, then the MMP switches from state i to another state k at W a , and the main process continues similarly with parameters (μ k , σ k ), albeit starting from level a.
If the main process does not reach level a before completing w amount of work, i.e., W a > w, then we need the conditional distribution of the level at w assuming that X (w) < a for ∀u < w. To obtain it, we introduce the notation is a CDF type function, and it describes an incomplete distribution concentrated on (−∞, a), and it satisfies where the first term corresponds to the probability that the BM(μ i , σ i ) hits level a before w (W a < w), while the second term corresponds to the probability that the BM(μ i , σ i ) hits the vertical line at w without reaching level a (W a > w). To calculate the density b i (x, a, w), we first note that the position of a BM(μ i , σ i ) at point w has normal distribution with parameters (μ i w, σ i √ w), so its probability density function denotes the PDF of normal distribution with parameters μ and σ 2 , i.e., To compute b i (x, a, w), we need to subtract the density that the BM hits level a first. We calculate it using total probability according to the first passage time at level a, W a . Altogether, b i (x, a, w) can be calculated as The level at which the MMP changes its state, a, is exponentially distributed with parameter −q ii . Using that and the probability of moving from state i to state k at a state transition of the MMP, −q ik /q ii , we have where x + = max(0, x) and δ i j denotes the Kronecker delta. Remarks: -The first term in (14) is the probability that the main process reaches u = w before hitting level a averaged out according to the distribution of a. -In the second term, the MMP switches to state k at u < w, with the main process at level X (w) = a. -Even though the general idea is that the main process is increasing, using a second order approach means that in the short term, the main process may decrease as well. Hence we should care about negative values of x if possible. The formula (14) is consistent with the possibility that the process X (w) may decrease and the formula is valid for negative values of x as well.
The second integral in (14) is essentially convolution in both variables x and w; to simplify it, we will take Laplace-transform in the variable w, and double-sided Laplace transform in the variable x.
We look to take the Laplace-transform of the variable w in all of the important functions in (14). Denoting the transform variable by s, the Laplace-transform of f i (a, w) in (11) is explicit: where z i (s) is given in (9). Similarly, we have an explicit formula for the Laplace-transform of φ Then from (13) and from (14), we have To transform the level variable x as well using two-sided Laplace transform, we will use the functions φ − * (v, s, μ, σ ) and φ + * (v, s, μ, σ ) as defined in (10): where convergence in either integral holds when the real part of the associated denominators are positive. For Re(s) > 0, we have μ < Re μ 2 + 2sσ 2 , from which the denominators are positive when from this, we have that (7) and (8) To compute g * i j (v, s) = ∞ x=−∞ e −vx g * i j (x, s)dx, we start by investigating the transform of first term a * i (x, s) on the right hand side in (15): The first term on the right hand side of (17) is The second term on the right hand side of (17) is Altogether, expressing (17) as the total of (18) and (19), we get Now we focus on the second term on the right hand side of (15).
from which we get (22) can be written as whose solution is (8).
Theorem 1 provides an explicit expression (involving a matrix inversion) for the double transform domain description of the service time distribution. For the s → w one-sided inverse Laplace transformation, we applied different approaches depending on the distribution of the work requirement W . To simplify calculations, we decided to avoid doing a v → x two-sided inverse Laplace transformation, instead calculating the moments of X (w) explicitly, which can be obtained from where L −1 s→w denotes inverse Laplace transformation from parameter s to parameter w. To compute these moments it is important to note that the eigenvalues of Z(s) − Q are all positive, which provides a symbolic derivative according to v also for the matrix inverse, e.g., to compute the mean of X (w) we have The explicit formulas for ∂ k ∂v k A * (v, s) v=0 with higher k values are omitted here, but symbolic mathematical packages can compute them easily. For deterministic work requirement (W = w) we applied the numerical inverse Laplace transformation method from Horváth et al. (2018) with order 24. This numerical inverse Laplace transform procedure evaluates the Laplace transform function only in points with a positive real part.
For exponentially distributed work requirement (W is exponentially distributed with rate ϑ) we use an explicit inversion formula based on Consequently, the kth moment of the service time for an exponentially distributed work requirement (W ) with rate ϑ can be computed explicitly as 5 Numerical examples
Simulation results
To study the effects of variability, we have applied the procedure outlined in Sect. 3 to simulate the behaviour of the queue with short and long scale variability. In particular, to find the intersection between the Brownian motion and the level determined by the time at which the modulating process changes state, we have discretised the work with a quantum x, and during the period when the MMP stays in state i, for each quantum we have set the evolution of the time according to a normal distribution N (μ i x, σ 2 i x) (following the procedure outlined at the beginning of Sect. 2). The MMP leaves state i at the first time instant in which the discretised BM crosses the level T n , where T n is the time of the nth state transition of the MMP. When the nth state transition occurs in state i, then T n = T n−1 + τ i , where T n−1 is the time of the previous state transition and τ i is exponentially distributed with parameter −q ii (the ith diagonal element of the generator matrix Q of the MMP). This simulation approach is indeed an approximation, but it can be made arbitrarily precise by choosing appropriately small values of x (at the cost of simulation time). Simulations were run for several choices of x to examine the error of this approximation.
In our numerical experiment, we have considered a two-state modulating process with jump rates q 12 and q 21 , and studied the effects of different service speed and variability parameters μ i and σ i (i = 1, 2). Apart from computing performance measures related to the service time distribution, we also included simulation results for the response time in an M/G/1 queue, where jobs arrive according to a Poisson process of rate λ and are served by a single server subject to short and long term variability according to a first-come-first-served discipline. Job sizes may be either deterministic or random. λ is set so that the queue is stable.
For the first batch of simulations, we examine the effect of short and long term variability of the server by changing μ and σ while leaving the other parameters fixed: We compare the following cases: -Base no variability, μ 1 = μ 2 = 2.4848 and σ 1 = σ 2 = 0.
(For this set of simulation results, the discretization interval x is set to 0.05 ms so that on average, the BM for each job requires 2000 samples. Each simulation considers the execution of N = 10,000 jobs.) Figure 4a shows the service time distribution for the different server variability configurations. For the Base case, as it is expected, all the probability mass is centered along Fig. 4a at 200 ms and 400 ms are associated with the cases when the MMP stays in state 1 (2, respectively) for the whole period of the service. The cases when the MMP experiences state transition during the service are represented by the continuously increasing part of the Large curve. The case that combines both small and large scale variability (Small + Large) further smooths the curves, and the effect is more evident near the two probability masses at 200 ms, and 400 ms. Figure 4b shows the response time distribution of the corresponding queuing models. It is interesting to see that in the cases where small variability is considered there are no jumps due to its perturbation effect.
The second batch of simulations focuses on the MMP by changing the speed of the background MMP. We set Figure 5 shows simulation results for the second batch of simulations. When the sojourn times are very large, the service time distribution tends to concentrate the probability mass near the times required in either state (200 ms and 400 ms). On the other hand, when the MMP changes rapidly, the distribution tends to concentrate on the average case, producing results very similar to the one seen in Fig. 4 for the cases with small variability only: in this case, there is almost no difference between large scale and small scale variability, because the quick alternation of the modulating process eliminates the large scale effect. As a final remark, to consider the case with sojourn times 1.25 ms and 0.8 ms, the sampling time was reduced to x = 0.01 to allow a sufficient number of samples during the sojourn in a modulating state. As for the response time (Fig. 5c) and we examine the following job size distributions for W : -Deterministic W = 100 ms, -Exponential with mean 100 ms, -Erlang with 4 stages with mean 100 ms, -Hyper-exponential with probability density function with parameters λ 1 = 1/(100(1 + √ 0.6)), λ 2 = 1/100((1 − √ 0.6)) -Pareto with probability density function x 9 4 x > 20, 0 x < 20 (To make the results easily comparable, E[W ] = 100 ms is identical in each case.) In particular, Fig. 6a shows the service time distribution for each job size distribution. The effect of service variability is more evident on job length distributions with a lower coefficient of variation. Figure 6b shows the effect on response time: indeed, combining the effect of service variability with a heavy tailed distribution, as for the Pareto case, can create very long queues which can lead to extremely long response times.
Comparison of analytical and simulation results
For the last batch of simulations, we compare empirical moments from the simulation to moments calculated using the double transform method of Sect. 4.
The system parameters are the same as in (25). Two different job size distributions are examined: deterministic and exponential. To test the inaccuracy of the simulation with finite discretization steps, we run each simulation with two different choices of x: x = 0.05 ms and x = 0.005 ms. Table 1 presents the moments of the service time distribution obtained from the simulator and the transform domain description. x = 0.05 ms corresponds to sim. 1 and x = 0.005 ms corresponds to sim. 2.
From Table 1, we observe increasing relative error for higher moments. For the mean, the relative error is around or smaller than 2%. The relative error of the mean decreases as x is refined from x = 0.05 ms (sim. 1) to x = 0.005 ms (sim. 2). We note that for the exponential case, the service time moments were calculated using an analytic formula, while for deterministic job size, some inaccuracy might also come from the numerical inverse Laplace transformation method.
Conclusions
In this work, we have introduced a queue with a service model where a modulating background Markov process models the large timescale variability, and a second-order fluid process models the service capacity on small timescale. The resulting service model can be interpreted as a certain type of level-dependent Brownian motion. | 6,174 | 2018-07-25T00:00:00.000 | [
"Computer Science",
"Engineering",
"Mathematics"
] |
Free-Energy Profile Analysis of the Catalytic Reaction of Glycinamide Ribonucleotide Synthetase
The second step in the de novo biosynthetic pathway of purine is catalyzed by PurD, which consumes an ATP molecule to produce glycinamide ribonucleotide (GAR) from glycine and phosphoribosylamine (PRA). PurD initially reacts with ATP to produce an intermediate, glycyl-phosphate, which then reacts with PRA to produce GAR. The structure of the glycyl-phosphate intermediate bound to PurD has not been determined. Therefore, the detailed reaction mechanism at the molecular level is unclear. Here, we developed a computational protocol to analyze the free-energy profile for the glycine phosphorylation process catalyzed by PurD, which examines the free-energy change along a minimum energy path based on a perturbation method combined with the quantum mechanics and molecular mechanics hybrid model. Further analysis revealed that during the formation of glycyl-phosphate, the partial atomic charge distribution within the substrate molecules was not localized according to the formal charges, but was delocalized overall, which contributed significantly to the interaction with the charged amino acid residues in the ATP-grasp domain of PurD.
Introduction
De novo purine nucleotide biosynthesis pathway is evolutionarily conserved in all organisms, including plants [1,2], microorganisms [3], and mammals [4]. Purine nucleotide synthesis starts with the formation of 5-phosphoribosyl-1-pyrophosphate from simple molecules such as carbonates, amino acids, and tetrahydrofolate, which yields inosine monophosphate (IMP) as an intermediate precursor. After IMP is produced, the pathway splits into two branches to produce adenosine monophosphate (AMP) and guanosine monophosphate (GMP). AMP and GMP are eventually phosphorylated to form adenosine triphosphate (ATP) and guanosine triphosphate.
The purD-encoded glycinamide ribonucleotide (GAR) synthetase, also known as PurD, catalyzes the second step of the de novo synthesis of purine nucleotides, which yields GAR from phosphoribosylamine (PRA) and glycine by consuming an ATP molecule [5][6][7]. This reaction consists of two steps, as shown in Figure 1. First, glycine reacts with ATP to generate a glycyl-phosphate intermediate, which is activated for nucleophilic attack. Then, glycyl-phosphate reacts with PRA, forming a carbon-nitrogen bond to yield GAR. In addition to PurD, six more enzymes involved in the subsequent steps of the purine nucleotide biosynthetic pathway of prokaryotes are also dependent on ATP. They are PurT, PurL, PurM, PurK, PurC, and PurM, where the name of each protein is derived from its gene name [8]. PurD, PurK, PurT, and PurP show high structural similarity and are classified into the ATP-grasp superfamily of proteins, which share a unique ATP-binding site, called ATP-grasp fold, despite dissimilarities in their amino acid sequences [9]. These four enzymes catalyze the coupling of amino and carboxylate groups of the substrates, where the use of an acylphosphate intermediate is a common feature. PurD is found in all organisms. However, PurK, PurT, and PurP are found only in prokaryotes. PurD, PurK, and PurT consist of three domains labeled N, ATP-grasp, and C. The ATP-grasp domain is further divided into three subdomains labeled A, B', and B [7]. Although the substrates are different for each of these enzymes, these ATP-grasp enzymes might share a common mechanism to catalyze the coupling between an amino group of one substrate and a carboxylate group of the other, which includes ATP cleavage, formation of an acylphosphate intermediate, and nucleophilic attack by the amino group.
The similarities in structure and catalytic strategies among the ATP-grasp enzymes of the purine nucleotide biosynthesis pathway suggest an evolutionary relationship between them. These enzymes evolved from a common ancestral ligase that can catalyze multiple ATP-dependent steps on several substrates [7,8]. Over time, each enzyme evolved to acquire a new function with a higher catalytic activity in a specific step. Enzymes with promiscuous catalytic activities and broad substrate specificities have been an interesting topic of molecular evolution [10,11]. It has been suggested that enzymes showing high specific catalytic activities divergently evolved from a progenitor enzyme with broad substrate specificities and low catalytic activities. These facts make the ATP-grasp enzymes attractive targets for understanding the molecular evolution of the purine nucleotide biosynthetic pathway [12,13]. To elucidate the evolutionary process of the purine metabolic pathway from the functional relationship of the ATP-grasp superfamily enzymes, knowledge of the reaction process at the molecular level is necessary. However, the details are unclear. Additionally, despite the importance of the acylphosphate intermediate in these reactions, it has not yet been trapped in an enzyme crystal structure.
In this study, we analyzed the molecular mechanism of the glycine phosphorylation process, Reaction 1 shown in Figure 1, the elementary process catalyzed by PurD in the de novo biosynthetic pathway of purine nucleotides. The reason why we focused only on Reaction 1 is that the binding structure of PRA to PurD, which is necessary for the analysis of Reaction 2, has not yet been clarified. The minimum energy pathway (MEP) and freeenergy profile were determined based on multi-scale models using the quantum mechanical and molecular mechanical (QM/MM) method, which can quantitatively handle the large systems necessary for biomolecular modeling [14]. The focus of this study is PurD In addition to PurD, six more enzymes involved in the subsequent steps of the purine nucleotide biosynthetic pathway of prokaryotes are also dependent on ATP. They are PurT, PurL, PurM, PurK, PurC, and PurM, where the name of each protein is derived from its gene name [8]. PurD, PurK, PurT, and PurP show high structural similarity and are classified into the ATP-grasp superfamily of proteins, which share a unique ATP-binding site, called ATP-grasp fold, despite dissimilarities in their amino acid sequences [9]. These four enzymes catalyze the coupling of amino and carboxylate groups of the substrates, where the use of an acylphosphate intermediate is a common feature. PurD is found in all organisms. However, PurK, PurT, and PurP are found only in prokaryotes. PurD, PurK, and PurT consist of three domains labeled N, ATP-grasp, and C. The ATP-grasp domain is further divided into three subdomains labeled A, B', and B [7]. Although the substrates are different for each of these enzymes, these ATP-grasp enzymes might share a common mechanism to catalyze the coupling between an amino group of one substrate and a carboxylate group of the other, which includes ATP cleavage, formation of an acylphosphate intermediate, and nucleophilic attack by the amino group.
The similarities in structure and catalytic strategies among the ATP-grasp enzymes of the purine nucleotide biosynthesis pathway suggest an evolutionary relationship between them. These enzymes evolved from a common ancestral ligase that can catalyze multiple ATP-dependent steps on several substrates [7,8]. Over time, each enzyme evolved to acquire a new function with a higher catalytic activity in a specific step. Enzymes with promiscuous catalytic activities and broad substrate specificities have been an interesting topic of molecular evolution [10,11]. It has been suggested that enzymes showing high specific catalytic activities divergently evolved from a progenitor enzyme with broad substrate specificities and low catalytic activities. These facts make the ATP-grasp enzymes attractive targets for understanding the molecular evolution of the purine nucleotide biosynthetic pathway [12,13]. To elucidate the evolutionary process of the purine metabolic pathway from the functional relationship of the ATP-grasp superfamily enzymes, knowledge of the reaction process at the molecular level is necessary. However, the details are unclear. Additionally, despite the importance of the acylphosphate intermediate in these reactions, it has not yet been trapped in an enzyme crystal structure.
In this study, we analyzed the molecular mechanism of the glycine phosphorylation process, Reaction 1 shown in Figure 1, the elementary process catalyzed by PurD in the de novo biosynthetic pathway of purine nucleotides. The reason why we focused only on Reaction 1 is that the binding structure of PRA to PurD, which is necessary for the analysis of Reaction 2, has not yet been clarified. The minimum energy pathway (MEP) and free-energy profile were determined based on multi-scale models using the quantum mechanical and molecular mechanical (QM/MM) method, which can quantitatively handle the large systems necessary for biomolecular modeling [14]. The focus of this study is PurD because this enzyme is a typical member of the ATP-grasp superfamily and is associated with purine nucleotide biosynthesis in all organisms. The theoretical insights provided by this study can also be useful in understanding the reaction mechanisms of other ATP-grasp superfamily enzymes, such as PurK and PurT. In this study, our results provide insights not only from the energetic aspect based on the free-energy profile but also from the viewpoint of the interactions between the substrate molecules and the enzyme during the reaction. Analysis of PurD is thus a starting point for understanding the functional relationship among the ATP-grasp superfamily enzymes, which gives a molecular-level insight into the evolutionary process of the de novo biosynthetic pathway of purine nucleotides.
Initial Structure
The three-dimensional structure of PurD has been determined in several organisms. In this study, we investigated the catalytic activity of PurD from Aquifex aeolicus, a thermophilic bacterium that is thought to be one of the oldest species of bacteria. We focused on the phosphorylation of glycine, as shown in Figure 1, which is the first step of the GAR biosynthetic reaction to produce the glycyl-phosphate intermediate. PurD binds to four substrate molecules involved in the reaction: ATP, two Mg 2+ ions, and glycine. The initial structure of PurD was obtained from the X-ray crystal structure of Aquifex aeolicus PurD in an ATP-bound form (PDB ID: 2YW2) [7], after the removal of the crystallographic water molecules. Two crystal structures (PDB IDs: 2YW2 and 2YYA) of Aquifex aeolicus PurD have been deposited in PDB by two of the authors. In this study, we selected the ATP-bound structure, 2YW2, as the initial structure. No structure that is exactly equivalent to the initial and final states of the reaction has been identified. However, a few structures with parts of the substrates bound to PurD have been determined. Thus, plausible structures of the PurD complex with substrates for the initial and final states of the reaction were prepared according to the analyses reported by Sampei et al. [7].
QM/MM Hybrid Model
The hybrid model combining the QM and MM models of the QM/MM theory has been utilized as one of the most efficient tools for analyzing chemical reactions in large molecular systems such as enzyme reactions [14]. N-layered integrated molecular orbital molecular mechanics (ONIOM) is a type of QM/MM methods [15,16]. The QM/MM-ONIOM method was used to compute the energy and force of the system, where the total system was divided into active and environmental parts. In this study, 55 atoms in the active part, including all substrate molecules of ATP 4− , two Mg 2+ ions, and glycine form a QM sub-system, which is described at the density functional theory level of calculation with the B3LYP functional and the 6-31G(d,p) basis sets. The remaining 6717 atoms in the environment part modeling the enzyme structure formed an MM sub-system, which was described by the AMBER parm96 force field [17]. The QM and MM subsystems in the model are not combined with covalent bonding. However, they interact with each other through van der Waals and electrostatic interactions. The electron embedding method was used to quantitatively determine the electrostatic term in the intermolecular interaction since the partial charges on atoms in the QM-subsystem can change during the chemical reaction. All QM/MM-ONIOM calculations were performed using the Gaussian 16 Revision C.01 program package [18]. The geometries of the PurD complex with substrates for the initial and final states of the reaction were determined using the QM/MM-ONIOM method, which uses microiterations for the optimization.
MEP Calculation
An MEP connecting the initial and final states of the reaction was determined using the string method [19,20] which has been used in various studies to determine a representative reaction path of the chemical processes, including enzyme reactions [15,16]. According to the string method, first, we prepare a string of states, called images, and move each image in the direction of the force derived from a potential energy gradient. Then, we reparametrize the string to enforce approximately equal arc lengths between neighboring images. The MEP was determined by iteratively repeating this process.
In this study, we determined the MEP connecting the reactant and product states of the substrates by relaxing the enzyme structure. PurD enzyme catalyzes the chemical recombination of all atoms in the substrates involved in the QM part of the QM/MM model, where the conformation of the enzyme involved in the MM part can change during the reaction. However, its chemical structure remains unchanged. Thus, each path optimization was carried out as follows. First, the QM part was evolved according to the string method by fixing the MM part, and then the MM part was relaxed by fixing the QM part.
The string method, combined with the QM/MM-ONIOM method, was carried out using an in-house program. The technique for finding an MEP using the string method can be briefly summarized as follows [19,20]. Let V denote a potential energy of the system, and let ζ be a curve connecting points A and B in configuration space. If curve ζ * is an MEP, the potential energy gradient ∇V perpendicular to ζ * is zero everywhere along the curve, which satisfies LetP denote a projector on a plane perpendicular to the curve: whereÎ is the unit matrix andτ is a normalized tangent vector to the curve. Equation (1) can be rewritten asP ∇V(ζ * ) = 0.
Let an arbitrary curve ζ be represented by a number of images as where z(0) = A and z(1) = B. A sequence of images defines a string. The reaction coordinate parameter σ ∈ [0,1] was used to parameterize MEPs. The basic idea of the string method [19,20] is to find an MEP by evolving the images according to the potential force at that point: where . z denotes the time derivative of z. Equation (5) can be integrated with respect to time using any suitable ordinary differential equation (ODE) solver. If the forward Euler method is used, the new image after an ODE step is given by where ∆t is the size of each ODE step. After a certain number of evolution steps, the images are redistributed along the string to enforce equal arc lengths between adjacent images, which are needed to avoid image clustering. An initial path was prepared by linearly interpolating the molecular geometries of the reactant and product states of the substrates in Cartesian coordinates with 16 discrete points. The update step size in the path optimization was set to 1.0 Bohr. The convergence of the path optimization was examined using the average value of the potential energy change while updating the path ∆V. After 514 path-optimization cycles, ∆V converged at a value less than 1.0 × 10 −10 Hartree. The resultant MEP was linearly interpolated with a total of 45 discrete points between the two end points in the Cartesian coordinate space to improve the accuracy of the free energy calculations.
Free-Energy Calculation
Free energy is a fundamental energetic property of chemical reactions. The freeenergy change along a reaction coordinate, called the free-energy profile, is helpful for understanding the reaction mechanism. Many approaches for calculating free-energy changes based on the QM/MM method have been proposed. In this study, the free-energy changes along the MEP were examined using a perturbation method combined with the QM/MM model [21,22]. Recently, one of the authors of this paper applied the QM/MM free-energy perturbation (QM/MM-FEP) method to analyze the free-energy profile for chemical reactions of excited state molecules in aggregates [23,24]. From the QM/MM-FEP theory, the free-energy difference between two adjoining states, A and B, in a QM/MM system can be defined as The ∆E (A→B) qm term is given by where · · · R (A) represents the ensemble average over the MM subsystem at the A-th state.
Here, the perturbation corresponds to the forward or backward movement of the QM atoms while fixing all MM atoms. We performed canonical molecular dynamics (MD) simulations by fixing the QM atoms and obtained the required ensembles. The QM/MM single-point calculations were performed using the ONIOM method [15,16]. The ∆F term is related to the average of the function of the energy difference between the A-th and B-th states evaluated by sampling for the A-th state: where β is a reciprocal temperature, and The first term of Equation (10) . The E int term consists of additive contributions originating from van der Waals and electrostatic interactions: Here, the van der Waals interaction can be described with the pairwise Lennard-Jones potential function: With the parameters of αβ and σ αβ between the α-th QM and β-th MM atoms, the electrostatic interaction can be approximately expressed by an effective classical representation as follows: where Q α is the point charge on the α-th QM atom in the solute molecule determined at the B-th state on the MEP, which can be determined by the electrostatic potential fitting Life 2022, 12, 281 6 of 13 method according to the Hu-Lu-Yang scheme [25] based on the electronic wavefunction of Ψ (B) , and q is the classical charge on the β-th MM atom in enzyme and solvent molecules, which were determined from the parameter sets given in the Amber force field. The QM/MM-FEP calculations were performed using an in-house developed program.
MD Simulations
Free-energy calculation based on the QM/MM-FEP method requires ensemble averages over the MM subsystem while fixing all QM atoms. In this study, canonical MD simulations of PurD in an explicit water box were performed to obtain the necessary ensembles at 358 K, which is the optimal temperature for Aquifex aeolicus, a thermophilic bacterium. All MD simulations were performed using the AMBER 18 software package [26,27]. The MM subsystem was modeled using the ff14SB variant of the AMBER force field [28]. The system was solvated with a water shell of 8 Å around the protein. The TIP3P water model was used to describe the solvent [29]. Four sodium cation atoms were added to neutralize the system. The system includes 32,108 atoms in total.
MD simulations were performed for 45 QM/MM systems separately, where QM molecules were fixed to optimized structure on the MEP, and MM molecules were thermally fluctuated according to the given canonical ensembles. The geometry of each system was optimized using the steepest descent algorithm for 500 steps, followed by the conjugate gradient algorithm for 4500 steps. After geometry optimization, each system was heated until the temperature (T) reached 358 K over a period of 200 ps in the constant volume and temperature (NVT) ensemble while applying a harmonic restraint of 2 kcal mol −1 Å −2 on the system, except for the hydrogen atoms. The temperature was regulated using the weak-coupling algorithm. After heating, 200 ps of MD simulations were performed to equilibrate the system in the NPT ensemble at T = 358 K and a pressure (P) of 1.0 atm. The pressure was maintained using a Berendsen barostat. After equilibration, additional 7-ns MD simulations were performed in the NPT ensemble at T = 358 K and P = 1.0 atm. During MD simulations, all covalent bond lengths were constrained using the SHAKE algorithm [30]. The time step of MD simulations was set to 2 fs. A cutoff for the non-bonded intermolecular interactions was set to 8 Å. Long-range electrostatic interactions were treated using the particle-mesh Ewald method [31]. Finally, 45 MD simulations were performed for 3 ns in the NPT ensemble at T = 358 K and P = 1.0 atm with fixing QM molecules to the optimized structure at each point on the MEP. The 300 samples used for the QM/MM-FEP analysis were obtained from the 3-ns MD trajectory at each discrete point on the MEP.
Optimized Structures
The first step of the GAR synthesis reaction catalyzed by PurD to generate glycylphosphate, as shown in Figure 1, is accompanied by the following structural changes. In the initial state of the reaction, ATP, two Mg 2+ ions, and glycine bind to PurD. During the reaction, one of the three phosphoric groups in ATP is cleaved and transferred to glycine upon hydrolysis, which yields ADP and glycyl-phosphate at the final state. At present, the structure of the glycyl-phosphate intermediate bound to PurD has not been experimentally determined. In this study, we have determined the optimal geometries for the initial and final conformational states of the reactions based on the QM/MM-ONIOM model. Figure 2 shows the substrate-binding site of the optimized geometries at the initial and final states of the reaction, which demonstrate the conformational changes of substrates accompanied by the conformational changes of amino acid residues adjacent to them within 2 Å. Sixteen amino acid residues are in the vicinity of the reaction center within the range of 2 Å from the substrate molecules, of which 11 amino acid residues, that is, 69% of the total, were charged residues. PurD contains 423 amino acid residues, of which 134 (32%) are charged amino acid residues, indicating that a high percentage of charged amino acid residues are localized in the substrate-binding site. Binding of the substrate molecules to the 10 charged amino acid residues within the ATP-grasp domain of PurD is as follows: Life 2022, 12, 281 7 of 13 ATP was recognized by Lys103, Lys143, Glu185, Glu192, and Lys214; a glycine molecule was bound to Asp212, Arg287, and Glu292; two Mg 2+ ions were recognized by Glu100 and Lys214. Among the 10 charged amino acid residues that were bound to the substrate molecules, 6 were negatively charged and 4 were positively charged, indicating that there were more negatively charged residues at the substrate-binding site. As shown in Figure 2, two Mg 2+ ions maintained coordination bonds with the oxygen atoms of ATP in both the initial and final states of the reaction, where one of the two binds at the βand γ-positions, and the other at the αand γ-positions. charged amino acid residues are localized in the substrate-binding site. Binding of the substrate molecules to the 10 charged amino acid residues within the ATP-grasp domain of PurD is as follows: ATP was recognized by Lys103, Lys143, Glu185, Glu192, and Lys214; a glycine molecule was bound to Asp212, Arg287, and Glu292; two Mg 2+ ions were recognized by Glu100 and Lys214. Among the 10 charged amino acid residues that were bound to the substrate molecules, 6 were negatively charged and 4 were positively charged, indicating that there were more negatively charged residues at the substratebinding site. As shown in Figure 2, two Mg 2+ ions maintained coordination bonds with the oxygen atoms of ATP in both the initial and final states of the reaction, where one of the two binds at the β-and γ-positions, and the other at the α-and γ-positions.
MEP
The MEP of the catalytic reaction in PurD was determined using the string method combined with the QM/MM approach, which connects the initial and final states, as shown in Figure 2. Figure 3 shows the changes in the interatomic distance between the phosphorus and oxygen atoms at the γ-and β-positions of ATP, Pγ-Oβ, and the distance between the phosphorus and oxygen atoms at the carbonyl group of glycine, Pγ-Oglycine The reaction coordinate parameter σ represents the degree of conformational changes of the substrate molecules (ATP 4− , two Mg 2+ ions, and glycine) along the MEP, which is normalized to zero at the initial state and one at the final state. In Figure 3, the distances were plotted against σ, which gives a detailed account of the structural changes during the reaction, indicating the hydrolysis of the γ-phosphate of ATP and the phosphorylation of glycine. Figure 4 shows the conformational changes of the substrates and its neighboring amino acid residues of PurD at six points (σ = 0.0-1.0) on the MEP, where a phosphory transfers from ATP to an adjacent glycine, yielding a glycyl-phosphate and ADP.
MEP
The MEP of the catalytic reaction in PurD was determined using the string method combined with the QM/MM approach, which connects the initial and final states, as shown in Figure 2. Figure 3 shows the changes in the interatomic distance between the phosphorus and oxygen atoms at the γand β-positions of ATP, P γ -O β , and the distance between the phosphorus and oxygen atoms at the carbonyl group of glycine, P γ -O glycine . The reaction coordinate parameter σ represents the degree of conformational changes of the substrate molecules (ATP 4− , two Mg 2+ ions, and glycine) along the MEP, which is normalized to zero at the initial state and one at the final state. In Figure 3, the distances were plotted against σ, which gives a detailed account of the structural changes during the reaction, indicating the hydrolysis of the γ-phosphate of ATP and the phosphorylation of glycine. Figure 4 shows the conformational changes of the substrates and its neighboring amino acid residues of PurD at six points (σ = 0.0-1.0) on the MEP, where a phosphoryl transfers from ATP to an adjacent glycine, yielding a glycyl-phosphate and ADP.
As shown in Figure 3, at the initial state of the reaction (σ = 0.0 to 0.1), the P γ -O β distance remained unchanged, and only the P γ -O glycine distance decreased, as glycine approaches ATP before the ATP hydrolysis. As shown in Figures 3 and 4, a phosphoryl group dissociates from ATP and forms a planar structure at the reaction coordinate of σ = 0.2, and the P γ -O glycine distance reaches 3.37 Å at σ = 0.6, which indicates the production of glycyl-phosphate. At the late stage of the reaction (σ = 0.6 to 1.0), the P γ -O glycine distance remained unchanged. However, the P γ -O β distance continued to change and increased by nearly 1 Å towards the completion of the reaction. As shown in Figure 3, at the initial state of the reaction (σ = 0.0 to 0.1) distance remained unchanged, and only the Pγ-Oglycine distance decreased, as proaches ATP before the ATP hydrolysis. As shown in Figures 3 and 4, a group dissociates from ATP and forms a planar structure at the reaction coord 0.2, and the Pγ-Oglycine distance reaches 3.37 Å at σ = 0.6, which indicates the pr glycyl-phosphate. At the late stage of the reaction (σ = 0.6 to 1.0), the Pγ-Ogly remained unchanged. However, the Pγ-Oβ distance continued to change and in nearly 1 Å towards the completion of the reaction. As shown in Figure 3, at the initial state of the reaction (σ = 0.0 to 0.1) distance remained unchanged, and only the Pγ-Oglycine distance decreased, as proaches ATP before the ATP hydrolysis. As shown in Figures 3 and 4, a p group dissociates from ATP and forms a planar structure at the reaction coord 0.2, and the Pγ-Oglycine distance reaches 3.37 Å at σ = 0.6, which indicates the pro glycyl-phosphate. At the late stage of the reaction (σ = 0.6 to 1.0), the Pγ-Oglyc remained unchanged. However, the Pγ-Oβ distance continued to change and in nearly 1 Å towards the completion of the reaction. Charged amino acid residues that interact with the substrate are represented by a licorice model.
Free-Energy Profile
The free-energy change along the MEP was determined using the QM/MM-FEP method, which is given as the sum of the QM energies and free-energy contributions ascribed to the QM/MM interactions. Figure 5a shows the change in the free-energy along the MEP relative to that in the QM energy. Figure 5b shows the corresponding QM/MM free-energy contribution. As shown in Figure 5a, the free-energy profile shows that the reaction requires an activation energy of 26.1 kcal mol −1 , and the total change in the free-energy reaches −17.0 kcal mol −1 at the end of the reaction. The total free-energy change indicates that the reaction is exergonic and can occur spontaneously in the enzyme.
The activation energy of 26.1 kcal mol −1 is large compared to typical enzymatic reactions [32], even if we consider the fact that Aquifex aeolicus is hyperthermophilic. In the free-energy calculation using the QM/MM-FEP method, MD simulations were performed by fixing all the substrates treated as the QM subsystem to obtain necessary ensemble averages over the MM subsystem. In the current enzymatic reaction, the phosphoryl group moves between two spatially distant substrate molecules. In such a case, the activation energy can tend to be overestimated when the free-energy change is computed using the QM/MM-FEP method because it is not flexible enough to consider the structural fluctuations of the substrate molecules in the QM region.
The change in the QM energy along the MEP is attributed to the rearrangement of atoms in the substrate molecules during the reaction, where interaction with the neighboring amino acid residues involved in the MM-model moiety was considered in the QM/MM Hamiltonian as the electrostatic interaction with the embedded charges of atoms in the enzyme. The profile of the QM energy has a maximum value of 17.9 kcal mol −1 at σ = 0.33, which is corresponding to the transition state shown in the free-energy profile and reaches −40.8 kcal mol −1 at the end. The large difference in quantity from the free-energy profile indicates the importance of the free-energy contribution.
Changes in the intermolecular interactions between the QM-and MM-model moieties along the MEP is shown in Figure 5b as the QM/MM free-energy contribution, ΔFint. The QM/MM free-energy contributions are positively large around the transition state, forming an intermediate complex of ADP-PO3-glycine at σ = 0.33, where the value reaches 1.93 kcal mol −1 corresponding to the QM/MM free-energy difference between the points at σ = 0.3 on the MEP. The largest value of ΔFint is 5.2 kcal mol −1 at σ = 0.5 on the MEP, where the phosphoryl group is about to reach glycine to produce the glycyl-phosphate. The evident change in the QM/MM free-energy contribution shown in Figure 5b indicates a notable change in the intermolecular interactions between the substrate molecules and the enzyme. Therefore, the next step was to conduct a more detailed analysis to clarify the cause of the marked changes in the interaction between the enzyme and substrate molecules during this state of the reaction. As shown in Figure 5a, the free-energy profile shows that the reaction requires an activation energy of 26.1 kcal mol −1 , and the total change in the free-energy reaches −17.0 kcal mol −1 at the end of the reaction. The total free-energy change indicates that the reaction is exergonic and can occur spontaneously in the enzyme.
The activation energy of 26.1 kcal mol −1 is large compared to typical enzymatic reactions [32], even if we consider the fact that Aquifex aeolicus is hyperthermophilic. In the free-energy calculation using the QM/MM-FEP method, MD simulations were performed by fixing all the substrates treated as the QM subsystem to obtain necessary ensemble averages over the MM subsystem. In the current enzymatic reaction, the phosphoryl group moves between two spatially distant substrate molecules. In such a case, the activation energy can tend to be overestimated when the free-energy change is computed using the QM/MM-FEP method because it is not flexible enough to consider the structural fluctuations of the substrate molecules in the QM region.
The change in the QM energy along the MEP is attributed to the rearrangement of atoms in the substrate molecules during the reaction, where interaction with the neighboring amino acid residues involved in the MM-model moiety was considered in the QM/MM Hamiltonian as the electrostatic interaction with the embedded charges of atoms in the enzyme. The profile of the QM energy has a maximum value of 17.9 kcal mol −1 at σ = 0.33, which is corresponding to the transition state shown in the free-energy profile and reaches −40.8 kcal mol −1 at the end. The large difference in quantity from the free-energy profile indicates the importance of the free-energy contribution.
Changes in the intermolecular interactions between the QM-and MM-model moieties along the MEP is shown in Figure 5b Figure 5b indicates a notable change in the intermolecular interactions between the substrate molecules and the enzyme. Therefore, the next step was to conduct a more detailed analysis to clarify the cause of the marked changes in the interaction between the enzyme and substrate molecules during this state of the reaction.
Partial Atomic Charges
The rearrangement of chemical bonds among the substrate molecules along the MEP can change the distribution of partial atomic charges. Figure 6 shows the changes in the arithmetic sums of the partial charges for five atomic groups of Mg-ATP, consisting of two Mg 2+ ions and ATP 4− , glycine, phosphate (PO 3 − ), and Mg-ADP, consisting of two Mg 2+ ions and ADP 3− , and glycyl-phosphate (Gly-PO 3 ). Figure 7 shows a summary of the redistribution of the partial atomic charges of the ligand molecules. Here, each partial charge on a substrate was determined using the Hu-Lu-Yang fitting method [25] applied to the wavefunction calculated at the B3LYP/6-31G(d,p) level of theory, where each geometry of the substrate was extracted from the images of the MEP determined using the QM/MM-ONIOM string method.
Partial Atomic Charges
The rearrangement of chemical bonds among the substrate molecules along the MEP can change the distribution of partial atomic charges. Figure 6 shows the changes in the arithmetic sums of the partial charges for five atomic groups of Mg-ATP, consisting of two Mg 2+ ions and ATP 4− , glycine, phosphate (PO3 − ), and Mg-ADP, consisting of two Mg 2+ ions and ADP 3− , and glycyl-phosphate (Gly-PO3). Figure 7 shows a summary of the redistribution of the partial atomic charges of the ligand molecules. Here, each partial charge on a substrate was determined using the Hu-Lu-Yang fitting method [25] applied to the wavefunction calculated at the B3LYP/6-31G(d,p) level of theory, where each geometry of the substrate was extracted from the images of the MEP determined using the QM/MM-ON-IOM string method. As shown in Figure 6, the overall tendency of the distribution of partial atomic charges clearly changed from σ = 0.2 to 0.5. The glycine moiety is almost neutral at the initial state of the reaction; after σ = 0.2, the closer the phosphate gets to the glycine, the larger its increase in the partial charge; then it reached +0.50 at σ = 0.6, where glycyl-phosphate was produced. The calculated partial charge of the glycyl-phosphate moiety was −0.56 at the end of the reaction, although the formal charge of glycyl-phosphate was −1. The Mg-ATP moiety was also almost neutral initially; however, after σ = 0.2, its partial
Partial Atomic Charges
The rearrangement of chemical bonds among the substrate molecules along the MEP can change the distribution of partial atomic charges. Figure 6 shows the changes in the arithmetic sums of the partial charges for five atomic groups of Mg-ATP, consisting of two Mg 2+ ions and ATP 4− , glycine, phosphate (PO3 − ), and Mg-ADP, consisting of two Mg 2+ ions and ADP 3− , and glycyl-phosphate (Gly-PO3). Figure 7 shows a summary of the redistribution of the partial atomic charges of the ligand molecules. Here, each partial charge on a substrate was determined using the Hu-Lu-Yang fitting method [25] applied to the wavefunction calculated at the B3LYP/6-31G(d,p) level of theory, where each geometry of the substrate was extracted from the images of the MEP determined using the QM/MM-ON-IOM string method. As shown in Figure 6, the overall tendency of the distribution of partial atomic charges clearly changed from σ = 0.2 to 0.5. The glycine moiety is almost neutral at the initial state of the reaction; after σ = 0.2, the closer the phosphate gets to the glycine, the larger its increase in the partial charge; then it reached +0.50 at σ = 0.6, where glycyl-phosphate was produced. The calculated partial charge of the glycyl-phosphate moiety was −0.56 at the end of the reaction, although the formal charge of glycyl-phosphate was −1. The Mg-ATP moiety was also almost neutral initially; however, after σ = 0.2, its partial Figure 7. Summary of the re-distribution of the partial atomic charges of the ligand molecules described in Figure 6.
As shown in Figure 6, the overall tendency of the distribution of partial atomic charges clearly changed from σ = 0.2 to 0.5. The glycine moiety is almost neutral at the initial state of the reaction; after σ = 0.2, the closer the phosphate gets to the glycine, the larger its increase in the partial charge; then it reached +0.50 at σ = 0.6, where glycyl-phosphate was produced. The calculated partial charge of the glycyl-phosphate moiety was −0.56 at the end of the reaction, although the formal charge of glycyl-phosphate was −1. The Mg-ATP moiety was also almost neutral initially; however, after σ = 0.2, its partial charge decreased and reached −0.49 at σ = 0.6, where ATP was hydrolyzed. The calculated partial charge of the Mg-ADP moiety was +0.56 at the end, although the formal charge was +1. The partial charge of the phosphate moiety was −1.16 at the initial state, and its value, −1.07, at the end almost remained unchanged. However, it showed a small peak of −0.89 at σ = 0.33 during the reaction, where the partial charges of the Mg-ADP and glycine moieties were +0.70 and +0.19, respectively.
The redistribution of partial atomic charges on the substrate molecules (Q) affects the electrostatic interactions with the enzyme (E es ), which contributes significantly to the QM/MM free-energy change, ∆F int , as described in Equations (9) and (13). As shown in Figure 5b, in the region of σ = 0.2-0.5, where the partial atomic charges on the substrate molecules changed overall, the value of ∆F int increased remarkably, indicating that the electrostatic repulsion between the substrate molecules and the enzyme increased because of the redistribution of partial atomic charges associated with the reaction process. As shown in Figure 7, both the glycine and Mg-ATP moieties were neutral at the beginning of the reaction. However, glycyl-phosphate was negatively charged (−0.56) and the Mg-ADP moiety was positively charged (+0.56) at the end of the reaction, where the formal charges of glycyl-phosphate and the Mg-ADP moiety were −1 and +1, respectively. This indicates that the partial atomic charges were re-distributed to delocalize within two sites of the glycyl-phosphate and Mg-ADP moieties at the end of the reaction, instead of localizing according to the formal charges. The delocalization of the atomic partial charges might result from the fact that the negatively charged PO 3 − moiety remains coordinated to two Mg 2+ cations throughout the reaction process, as shown in Figure 4. If the partial atomic charges of the substrate molecules are localized to the two sites according to the formal charges, the electrostatic repulsion with the surrounding charged amino acid residues may become too strong for the enzymatic reaction to proceed. The results of the free-energy profile analysis indicate that the two Mg 2+ ions bound to PurD can play an important role in the progression of the enzymatic reaction by appropriately adjusting the partial atomic charges on the substrate molecules.
Conclusions
In this study, we investigated the reaction mechanism of the second step in the de novo biosynthetic pathway of purine nucleotides catalyzed by PurD enzyme based on the free-energy profile analysis. An efficient computational protocol for analyzing the freeenergy profile of the glycine phosphorylation process catalyzed by PurD was developed, which examines the free-energy change along an MEP based on a perturbation method combined with the QM/MM hybrid model. The energetics calculated from the MEP provided valuable information for a comprehensive understanding of the reaction process in PurD.
The free-energy profile revealed that the phosphorylation of glycine by ATP in PurD requires an activation energy of 26.1 kcal mol −1 , and the total change in the free-energy reaches −17.0 kcal mol −1 at the end of the reaction. In this reaction process, the change in the QM/MM free-energy contribution was remarkable, indicating a notable change in the intermolecular interactions between the substrate molecules and the enzyme.
Further detailed analysis of the changes in the partial atomic charges of the substrate molecules revealed that the electrostatic intermolecular interactions between the substrate molecules and the charged amino acid residues in the ATP-grasp domain of PurD play an important role in the reaction process. Of particular interest, the partial atomic charges of the substrate molecules were re-distributed to delocalize within two sites of glycylphosphate and Mg-ADP moieties at the end of the reaction, instead of localizing according to the formal charges. This suggests that the two Mg 2+ ions bound to PurD play an important role in the progression of the enzymatic reaction by appropriately adjusting the partial atomic charges of the substrate molecules.
The free-energy profile analysis of PurD is a starting point for understanding the functional relationships among the ATP-grasp superfamily enzymes. The computational protocol discussed in this study can be used to elucidate the detailed reaction mechanism of the other ATP-grasp superfamily enzymes, which provides molecular-level insight into the evolutionary process of the de novo biosynthetic pathway of purine nucleotides. | 9,534 | 2022-02-01T00:00:00.000 | [
"Chemistry",
"Biology"
] |
Highly Efficient Nanosecond 1.7 µ m Fiber Gas Raman Laser by H 2 -filled Hollow-Core Photonic Crystal Fibers
: We report here a high-power, highly efficient, wavelength-tunable nanosecond pulsed 1.7 µ m fiber laser based on hydrogen-filled hollow-core photonic crystal fibers (HC-PCFs) by rotational stimulated Raman scattering. When a 9-meter-long HC-PCF filled with 30 bar hydrogen is pumped by a homemade tunable 1.5 µ m pulsed fiber amplifier, the maximum average Stokes power of 3.3 W at 1705 nm is obtained with a slope efficiency of 84%, and the slope efficiency achieves the highest recorded value for 1.7 µ m pulsed fiber lasers. When the pump pulse repetition frequency is 1.3 MHz with a pulse width of approximately 15 ns, the average output power is higher than 3 W over the whole wavelength tunable range from 1693 nm to 1705 nm, and the slope efficiency is higher than 80%. A steady-state theoretical model is used to achieve the maximum Stokes power in hydrogen-filled HC-PCFs, and the simulation results accord well with the experiments. This work presents a new opportunity for highly efficient tunable pulsed fiber lasers at the 1.7 µ m band.
Introduction
Laser sources in the 1.7 µm band have many significant applications in material processing, mid-infrared laser generation, gas detection, medical treatment and bioimaging because there are many molecule absorption lines at this wave-band, which is also located in the transparent window of living tissue [1]. In past years, 1.7 µm fiber lasers have been intensively studied owing to their good stability and compact structure . Up to now, dozens of watts continuous-wave (CW) fiber lasers at 1.7 µm have been demonstrated [2][3][4], but there are few studies on high-power pulsed fiber lasers in this waveband, which have unique advantages in some applications. High-power 1.7 µm laser pulses have been proven to achieve higher resolution and larger penetration depth in multi-photon microscopy [5,6], optical coherence tomography [7], and spectroscopic photoacoustic (PA) imaging [8,9]. Particularly, the PA needs nanosecond high-energy pulses to realize volumetric imaging with high resolution [9]. Moreover, high-power short pulses generate a smaller hot-melt area during material processing, which can achieve higher processing accuracy. In terms of gas detection, the absorption line of methane molecules is in the 1.7 µm band. Using 1.7 µm pulsed laser as the detection light can avoid the problem of heat accumulation caused by CW laser irradiation, and high power can increase the detection distance. Thus, it is necessary to improve the power of 1.7 µm pulsed fiber lasers to meet the demands of these important applications. Table 1 presents the characteristics of some 1.7 µm pulsed fiber lasers based on solidcore fibers. It can be observed that different fiber-based solutions are proposed, and they can be mainly divided into two categories according to the gain mechanism. One is based on population inversion (PI) by using rare-earth-doped fibers, such as thulium-doped fibers (TDFs) [8][9][10][11], thulium-holmium co-doped fibers (THDFs) [12,13], and bismuthdoped fibers (BDFs) [14][15][16], to directly produce a signal at 1.7 µm. The other is based on nonlinear effects in the solid-core fibers, such as soliton self-frequency shift (SSFS) [17][18][19][20], four-wave mixing (FWM) [21][22][23], self-phase modulation (SPM) [7,24,25], and simulated Raman scattering (SRS) [26] to realize a wavelength conversion. As can be seen from Table 1, it is difficult to realize the high-power pulsed lasers with rare-earth-doped fibers, and their average powers are only milliwatt and slope efficiencies are low, especially THDFs and BDFs. This is because the TDFs have strong reabsorption in the 1.7 µm region and the fabrication of THDFs and BDFs is not mature. For the Raman soliton fiber lasers (RSFLs) based on SSFS, the output power of which is limited by the mode field area of the fibers, large mode area photonics crystal fibers (LMA PCFs) are always used as gain fibers. After using Er-doped polarization-maintaining very large mode area fibers (VLMAFs), the average output power can be up to 1.5 W, but this special kind of fiber is difficult to fabricate and not commercially available [20]. Furthermore, the output power of RSFLs fluctuates greatly in the wavelength tuning range because the wavelength tuning is accomplished by changing the input pump power. The FWM-based fiber optical parametric oscillator (FOPO) has been demonstrated to operate at 1.7 µm using dispersion-shifted fibers (DSFs) [21], but the output average power is only at milliwatt level due to the nonlinearity of fibers. To further increase the output power, the fiber optical parametric chirped-pulse amplifier (FOPCA) is proposed on the basis of FOPO, but the maximum average power is only 1.42 W after using polarization-maintaining dispersion-shifted fibers (PM DSFs) [22]. Furthermore, the FOPO and FOPCA also raise the issue of noise, affecting the output performance. The super-continuum (SC) generation mainly based SPM is another common solution to generate 1.7 µm laser pulses by using highly nonlinear fibers (HNLFs) or dispersion-compensating fibers (DCFs) [7,24,25]. However, with this method, it is difficult to achieve accurate spectrum control, causing a wide spectrum and low spectral density at 1.7 µm. Recently, a pulsed Raman fiber laser with an average power of 23 W has been reported, but its slope efficiency is low due to a cascaded seventh order Stokes shift, and its pulse width is limited due to modulation of gain switch [26]. In recent years, fiber gas Raman lasers (FGRLs) based on gas-filled hollow-core fibers (HCFs) have attracted great interest due to their potential in realizing tunable and new wavelength laser emissions [27][28][29][30][31][32][33][34][35][36][37][38][39], which provides a possible means of way to generating high-power, highly efficient pulsed fiber laser sources operating in the 1.7 µm band.
In this paper, we demonstrate a multi-watt, highly efficient, tunable nanosecond pulsed fiber laser source at 1.7 µm based on hollow-core photonic crystal fibers (HC-PCFs) by H 2 rotational SRS. The pump source is a homemade high-power 1.5 µm pulsed fiber amplifier seeded by a tunable diode laser. When the operation wavelength of the seed is tuned to 1550 nm and the pump pulse repetition frequency is 1.3 MHz with a pulse width of approximately 15 ns, the maximum average Stokes power of 3.3 W at 1705 nm with a slope efficiency of 84% is obtained in a 9-meter-long HC-PCF filled with 30 bar H 2 , and the slope efficiency achieves the highest recorded value for 1.7 µm pulsed fiber lasers. Over the operation wavelength range from 1693 to 1705 nm, the slope efficiency is higher than 80%, and the average Stokes power is higher than 3 W, and Stokes pulse retains a good Gaussian shape with a pulse width of appromxiately 13 ns. Furthermore, a steady-state theoretical model considering the second-order Raman conversion is used to find the optimal repetition frequency to obtain the maximum Stokes power in this single-pass FGRL, and the simulation results agree well with the experiments. 1 The slope efficiency is not given in the reference; 2 The total power within laser band; 3 The total power conversion efficiency. Figure 1a presents the experimental setup, which is similar to our previous experiments [38], but we used a higher power pump source and a shorter HC-PCF to achieve higher efficiency and Stokes power. The pump source is a homemade 1.5 µm pulsed fiber amplifier which consists of a CW tunable 1.5 µm seed diode laser (CobriteDX1, ID Photonics, Germany), an acousto-optic modulator (AOM, Fibre-Q, Gooch & Housego, England), a tunable fiber filter, and home-made three stages of erbium-doped fiber amplifiers (EDFAs. Due to the limitation of the amplification ability of EDFAs, only in the wavelength range of 1540 to 1550 nm, the pump source can output a maximum average power of about 7.5 W. The AOM modulates the CW light into the Gaussian pulses, and both the pulse width and repetition frequency are tunable. The pulse width at the end of the EDFA3 was set to 15 ns to achieve steady-state SRS and the repetition frequency was only adjusted in the range of 0.7 to 2 MHz in this work. The tunable filter is used to suppress the amplified spontaneous emission (ASE). The fiber coupler with a coupling ratio of 99:1 is used to monitor the pump power in real time. The main output fiber (SMF-28e, Corning, America) of coupler is directly fusion spliced to a 9-meter-long HC-PCF (HC-1550-02, NKT Photonics, Denmark) owing to their similar mode field area, and the splice loss is approximately 1.4 dB, which is close to the theoretical minimum loss [40]. The other end of the HC-PCF is sealed in a gas cell with a glass window, which is used to fill the HC-PCF with H 2 to 30 bar. The Stokes light and the residual pump light transmitted from the glass window are collimated by a plano-convex lens, and then sent to an optical spectrum analyzer (OSA, AQ6370D, Yokogawa, Japan) or a power meter (S470C, Thorlabs, America) by a silver mirror. A long-pass filter (transmission approximately 95% > 1600 nm) placed in front of the power meter is used to filter the residual pump light. Figure 1b presents the measured spectra of the pump source at different wavelengths at the maximum output power, and it can be observed that the ASE is more than 30 dB lower than the signal. The insert in Figure 1b presents the near-field pattern of the pump beams by a 20× microscope objective and a HgCdTe infrared camera (MCT-2327, spectral response 0.8-2.5 µm, Xenics, Belgium), as can be seen, the pump beams operate in a good fundamental mode. Figure 1c presents the measured transmission spectrum of the used 9-meter-long HC-PCF by a supercontinuum source (SuperK COMPACT, 450-2400 nm, NKT Photonics, Denmark). It can be observed that the low-loss transmission range of the HC-PCF is from approximately 1415 to 1740 nm, and there is a high loss peak near 1750 nm. The insert in Figure 1c presents the measured optical microscope image of cross section of the HC-PCF with core diameter of approximately 10 µm. The occurrence of this conversion indicates that the peak power of the first-order Raman pulse exceeds the second-order Raman threshold when the repetition frequency is 1 MHz. Owing to the high fiber loss, the second-order Raman power is very weak. It is difficult to accurately measure the second-order Raman power under our experimental condition, but it can be observed by high-sensitivity OSA and its power level can be estimated by the spectral intensity. The pulse shapes and series are measured by a fast photodetector (EOT ET5000, wavelength 850 to 2150 nm, bandwidth 12.5 GHz) and a broadband oscilloscope (Tektronix MDO3104, bandwidth 1 GHz, sample rate 5 Gs/s) when the pump power is maximum and the repetition frequency of the pump pulse is 1.3 MHz, as indicated in Figure 3. The temporal characteristic of pump pulses was measured before coupling into the HC-PCFs, while the Stokes pulses were measured at the output end of HC-PCF after the residual pump light was completely filtered. It can be seen that the repetition frequency of Stokes pulses is 1.3 MHz, which is the same as the pump pulses, and the shapes for both pump pulse and Stokes pulses are Gaussian-type. Furthermore, because only the center part of pump pulses that is higher than Raman threshold can be converted into the Stokes light, the FWHM (full width at half maximum) of Stokes pulses is smaller than that of pump pulses [29]. Moreover, the rising edge of Stokes pulses is steeper due to the rapid conversion of pump light [28]. The rotational SRS process in this single-pass FGRL can be described by a simple steady-state theoretical model considering the second-order Raman conversion [41]:
Results and Discussion
dz = g S2 P S2 P S1 − α S2 P S2 dP S1 dz = g S1 P S1 P p − α S1 P S1 − λ S2 λ S1 g S2 P S2 P S1 dP P dz = − λ S1 λ P g S1 P S1 P P − α P P P (1) where z is a coordinate of the fiber length; P i is the intensity, α i is the fiber loss, λ i is the wavelength and g i is the steady-state Raman gain coefficient ("I" means "S1" for the first-order Stokes, "S2" for the second-order Stokes, "P" for the pump wave). The boundary conditions can also be given [41]: where P 0 is the initial peak power of the pump pulse coupled into the HC-PCFs; h is the Planck constant; c is the light speed; ∆υ R is the Raman linewidth; A eff is the mode field area of the HC-PCFs. Using the above model for simulation, the first-and second order steady-state Raman gain coefficients can be estimated and slightly adjusted from Ref. [38,41,42], so g S1 = 0.25 cm/GW and g S2 = 0.17 cm/GW. The Raman linewidth ∆υ R can be estimated as 3.1 GHz at room temperature [43]. The HC-PCFs loss λ p = 0.04 dB/m, λ s1 = 0.11 dB/m, λ s2 = 6 dB/m. The mode field diameter of HC-PCFs is approximately 9 µm, so the mode field area A eff = 20.25π µm 2 .
The simulation curves of output Stokes and residual pump power with the repetition frequency are obtained when the pump power is 7.5 W and pump wavelength is 1540 nm, as illustrated by the dashed lines in Figure 4a. As can be seen, the maximum output Stokes power can be obtained near 1.3 MHz repetition frequency. Subsequently, by adjusting the repetition frequncy of pump pulses, the output Stokes power and the residual pump power at different repetition frequencies are measured, as illustrated in Figure 4a-d. The measured results shown in Figure 4a are the output Stokes and residual pump powers of different repetition frequencies at the maxium pump power. It can be observed that the measured results are essentially consistent with the simulation results. The measured results shown in Figure 4b-d are the evolutions of the Stokes power, Stokes pulse energy and residual pump power with the coupled pump power/pulse energy at different repetition frequencies, respectively. It can be observed from Figure 4b that when the repetition frequency is less than 1.3 MHz, the peak power of pump pulse becomes higher, so the Raman threshold of average pump power is reduced. Moerover, the peak power of the generated first-order Stokes pulse also becomes higher, and exceeds the second-order Raman threshold at high pump power level. Thus, the first-order Stokes power drops after reaching a peak as the pump power increases, indicating the conversion of first to second-order Stokes power. When the repetition is higher than 1.3 MHz, less pump power would convert into the first-order Stokes power due to the higher Raman threshold of average pump power. Thus, the optimal repetition frequency is 1.3 MHz, and the maximum Stokes power is approximately 3.1 W. From Figure 4c, it can be seen that the Raman threshold of pump pulse energy is constant at different repetition frequencies. This is not unexpected, because the Raman threshold of pump pulse peak power is constant, which is determined by the characteristics of gas and HC-PCFs. The purpose of adjusting repetition frequency is to set the appropriate peak power to obtain the maximum first-order Stokes power at 1.7 µm. It can be seen from Figure 4d that the residual pump power in the condition of maximum coupled pump power is increased with the increase in repetition frequency, which can be attribured to the increase in the Raman threshold of average pump power.
Furthermore, we measured the evolutions of the output Stokes and residual pump powers with the coupled pump power at different pump wavelengths when the repetition frequency was 1.3 MHz, as indicated in the Figure 4e. The solid and hollow patterns represent the Stokes power and residual pump power, respectively. It can be observed that the maximum Stokes power is achieved at 1550 nm, which is attributed to the performance of the pump source, and the pump power at 1550 nm is slightly higher. Thus, the maximum Stokes power is obtained at 1705 nm, and its evolution of output Stokes power with coupled pump power is specifically plotted in Figure 4f. It can be seen that the Stokes power increases linearly with coupled pump power beyond the Raman threshold, reaching a maximum of 3.3 W with 84% slope efficiency. Moreover, the power conversion efficiency in terms of the maximum coupled pump power of 5.5 W is aproximately 60%. Moreover, the near-field pattern of Stokes beams was measured, as indicated in the insert. It can be observed that the Stokes beams still operate in a fundamental mode. The good mode matching between the pump beams and Stokes beams is conductive to the high opticaloptical conversion efficiency.
Conclusions
We have reported a multi-watt, highly efficient, tunable 1.7 µm pulsed fiber laser in hydrogen-filled HC-PCFs by rotational SRS. When a 9-meter-long HC-PCF filled with 30 bar H 2 is pumped by a homemade high-power tunable 1.5 µm pulsed fiber amplifier, the maximum average Stokes power of 3.3 W at 1705 nm is obtained with a slope efficiency of 84%, to the best of our knowledge, the slope efficiency achieves the highest recorded value for 1.7 µm pulsed fiber lasers. When the pump pulse repetition frequency is 1.3 MHz with a pulse width of approximately 15 ns, over the whole wavelength tunable range from 1693 nm to 1705 nm, the average output power is higher than 3 W, and the slope efficiency is greater than 80%. A steady-state theoretical model is established to analyze the rotational SRS process in hydrogen-filled HC-PCFs, and the simulation results accord well with the experiments. No saturation was observed, so the output power can be further improved by increasing the pump power and the coupling efficiency. | 4,132.8 | 2020-12-30T00:00:00.000 | [
"Engineering",
"Materials Science",
"Physics"
] |
Computational Drug Repositioning for Gastric Cancer using Reversal Gene Expression Profiles
Treatment of gastric cancer (GC) often produces poor outcomes. Moreover, predicting which GC treatments will be effective remains challenging. Computational drug repositioning using public databases is a promising and efficient tool for discovering new uses for existing drugs. Here we used a computational reversal of gene expression approach based on effects on gene expression signatures by GC disease and drugs to explore new GC drug candidates. Gene expression profiles for individual GC tumoral and normal gastric tissue samples were downloaded from the Gene Expression Omnibus (GEO) and differentially expressed genes (DEGs) in GC were determined with a meta-signature analysis. Profiles drug activity and drug-induced gene expression were downloaded from the ChEMBL and the LINCS databases, respectively. Candidate drugs to treat GC were predicted using reversal gene expression score (RGES). Drug candidates including sorafenib, olaparib, elesclomol, tanespimycin, selumetinib, and ponatinib were predicted to be active for treatment of GC. Meanwhile, GC-related genes such as PLOD3, COL4A1, UBE2C, MIF, and PRPF5 were identified as having gene expression profiles that can be reversed by drugs. These findings support the use of a computational reversal gene expression approach to identify new drug candidates that can be used to treat GC.
drug-induced gene expression signatures, offers a time-efficient approach to reposition existing drugs for new indications 9,16 . Several computational methods, such as bioinformatics, system biology, machine learning, and network analysis can be used for drug repositioning or repurposing as well as to identify new indications for drugs 17 .
Most computational drug repositioning approaches are based on a "guilt by association" strategy 18 , wherein agents having similar properties are predicted to have similar effects. Many drug repositioning strategies are based on different data, including similar chemical structures, genetic variations, and gene expression profiles 19 . Recently, interest in the use of genomics-based drug repositioning to aid and accelerate the drug discovery process has increased 9 . Drug development strategies based on gene expression signatures are advantageous in that they do not require a large amount of a priori knowledge pertaining to particular diseases or drugs 20,21 .
The purpose of this study is to predict drug candidates that can treat GC using a computational method that integrates publicly available gene expression profiles of GC patient tumors and GC cell lines and cellular drug response activity profiles.
Results
short overview of Included studies. The study selection process is outlined in Fig. 1. Following the search and selection steps, eight studies: GSE2689, GSE29272, GSE30727, GSE33335, GSE51575, GSE63089, GSE63288, and GSE65801, were included in the final analysis. An additional dataset, GSE54129, was excluded due to lower quantitative QC scores after a MetaQC analysis (Supplementary Table S1). Detailed information about the downloaded datasets is summarized in Supplementary Table S2. Tumor gene expression signatures were analyzed for 719 GC samples by comparing RNA expression data for 410 tumors and 326 adjacent normal tissues from the GEO. The samples originated from 410 patients, of whom 152 (37.1%) were Korean, 236 (57.6%) were Chinese, and 22 (5.4%) were Caucasians. The samples of patients who had no prior therapy were from GSE29272, GSE65801, and GSE63288. The sample information was not available in GSE30727 nor GSE26899, while the sample information was not mentioned in GSE33335 nor GSE51575. All patients received some type of pre-treatment in GSE63089.
tumor Gene expression signatures. The workflow for the exploration of compounds using the calculated RGES values is presented in Fig. 2. All probe sets were re-annotated with the most recent NCBI Entrez Gene IDs and then mapped manually to yield 9,113 unique common genes across the different platforms. A fixed-effect model method was utilized by combining the P values in the MetaDE package. Among the gene expression signatures, 136 genes showed increased expression levels in tumors compared to normal tissues (adjusted P < 0.001, RGes Computation. LINCS data for changes in the expressions of 978 landmark genes after treatment of AGS cell lines with 25 compounds used to treat human gastric adenocarcinoma were used for the RGES computations. The median IC 50s values for 2025 compounds used to treat GC cancer cell lines listed in the ChEMBL were used for computation. Disease signatures including 189 DEGs after extraction from the set of LINCS landmark genes were also used for the RGES computation. Variations in the RGES outcomes were evaluated under various biological conditions. The RGES showed larger variations across different cell lines relative to those within different replicates of the same cell line when the same concentration and treatment duration for a compound were used (P < 2.2 × 10 −16 ; Fig. 3A). In addition, longer treatment durations (≥24 h) were associated with lower RGES outcomes compared to shorter durations (<24 h) when a compound was tested on the same cell line at the same concentration (P < 2. Table S5). The calculated sRGES scores for each compound were significantly correlated with drug activity (Spearman correlation rho = 0.27 and P = 1.04 × 10 −2 ; Fig. 5). Additionally, CTRP was used as an external dataset to confirm the correlation between reversal potency and compound activity. Activity data expressed as AUC values for 546 compounds tested in GC cell lines were collected from CTRP. After the sRGES computation, the median AUC values across multiple cell lines were www.nature.com/scientificreports www.nature.com/scientificreports/
Identification of Reversed Genes and Prediction of Compounds. Using the correlation between
the sRGES outcome and the compound activity, compounds having high reversal potency for GC were identified. Next, genes having expression levels that were reversed by the active compounds were predicted by a leave-one-compound-out approach. The five genes that showed significant reversals of expression following treatment with the GC cell lines with the active compounds included: (i) the collagen type IV α1 chain (COL4A1); (ii) procollagen-lysine 2-oxoglutarate 5-dioxygenase 3 (PLOD3); (iii) ubiquitin conjugating enzyme E2 C (UBE2C); (iv) macrophage migration inhibitory factor (MIF); and (v) pre-mRNA processing factor 4 (PRPF4) (Fig. 7). Fifteen compounds, including sorafenib, olaparib, ponatinib, tanespimycin, selumetinib, and elesclomol, were all determined to be active compounds against GC (Supplementary Table S6).
Discussion
Methods to identify drug candidates that can reverse the expression states of disease-related genes can complement traditional target-oriented approaches in drug discovery 9,[22][23][24][25] . In this study, we used public cancer genomic and pharmacologic databases to demonstrate the reversal potency relationship between DEGs and drug activity and to predict potential new drug candidates for GC.
Our results showed that the ability of drugs to reverse DEGs was correlated with drug activity in GC, although this correlation was highly dependent on the cell line as well as the drug concentration and treatment duration. The positive correlation between sRGES and IC 50 values indicated that combining disease gene expression data www.nature.com/scientificreports www.nature.com/scientificreports/ derived from clinical samples with drug gene expression profiles obtained from results with in vitro cell lines could be used to predict drug activity.
In our study, five GC genes, COL4A1, PLOD3, UBE2C, MIF, and PRPF4, showed reversed expressions in response to 15 active compounds. To the best of our knowledge, this is the first study of drug repositioning using a computational reversal gene expression approach in GC. Among these genes, PLOD3 26 and COL4A1 27 were recently shown to be overexpressed in GC. Meanwhile, the overexpression of UBE2C was related to poor prognosis in GC 28 and was a potential biomarker of intestinal type GC 29 . MIF could also be a potential prognostic factor for GC 30 . These genes showed reversed expression levels and thus may be feasible as therapeutic targets for GC. Additionally, PRPF4 as a pre-mRNA splicing factor has been suggested as a potential therapeutic target for cancer therapy 31 .
Among the active drugs identified by our analysis, the multiple tyrosine kinase inhibitor sorafenib 32 and a poly (ADP-ribose) polymerase (PARP) inhibitor, olaparib 33 , have completed phase II and phase III clinical trials, respectively, for GC patients. Meanwhile, the heat shock protein 70 (Hsp70) inducer elesclomol, the novel tyrosine kinase inhibitor ponatinib, the heat shock protein 70 (Hsp90) inhibitor tanespimycin, and the mitogen-activated protein kinase inhibitor selumetinib have not been previously studied clinically for their effectiveness against GC.
GC is a heterogeneous disease that involves multiple factors associated with various molecular pathways that can function differently during the cancer development process. A limitation of this study is that the GC disease gene expression datasets from the GEO are not uniformly associated with clinical outcomes or GC etiologies. The drug activity of predicted compounds may also vary because the GC disease states varied for individual patients. Sampling time information is important, as samples obtained after the initial neo-adjuvant chemotherapy can affect the results of this meta-analysis. Nonetheless, such information was not available from some datasets.
Many recent projects focus on precision medicine to provide insights between diseases and genes. A repurposing strategy based on alterations of driver genes in each tumor can be used to identify therapeutic targets. The collection of therapeutic agents targeting driver genes and determining the connection between each patient and the targeted therapies can enhance promising drug repositioning opportunities and eventually benefit patients. Therefore, RGES may improve predictions of drug candidates because it is based on the molecular characteristics of actual tumors.
Therapeutic efficacy is more complex than a simple correlation of gene expression profiles with drugs and diseases. Therefore, our findings with regard to drug candidates will require further preclinical testing and demonstrations in clinical trials, although our results did validate that the method of the computational analysis of public gene expression databases is a potentially useful means of drug discovery. In summary, our computational approach combined disease gene expression with drug-induced expression profiles in GC to identify new drugs and target genes for GC therapy. This approach can also be used to predict the efficacy of new drug candidates with which to treat GC. This computational approach could be broadly applied to other diseases for which reliable gene expression data are available.
Collection of Gastric Adenocarcinoma Gene Expression Profiles. Publicly available gene expression
profiles for GC patients were downloaded from the GEO database of the NCBI (https://www.ncbi.nlm.nih.gov/geo), A search of the GEO database was conducted in July of 2018 using 'gastric cancer' as a key search phrase. The results for deposits made since January of 2015 were filtered using the search terms Homo sapiens, expression profiling by array, and expression profiling by high-throughput sequencing. Only original experimental datasets that compared the expression levels of mRNAs between GC tumors and normal tissue controls were selected. Datasets containing more than ten sets of normal and tumor samples were retained. Additionally, gene expression profiles of human gastric adenocarcinoma cell lines were downloaded from the CCLE (version 2.7. updated 2015 https://portals.broadinstitute.org/ccle) 12 .
Gene expression Data preprocessing. The GEO accession number, platform, sample type, numbers of cases and controls, references and expression data were extracted from each of the identified datasets, which were then individually preprocessed using a log 2 transformation and normalization approach. If there were multiple probes for the same gene, the probe values were averaged for that gene expression level. All probe sets on different platforms were re-annotated to use the most recent NCBI Entrez Gene Identifiers (Gene IDs), and the Gene IDs were used to cross-map genes among the different platforms. Only genes present in all selected platforms were considered. To combine the results from individual studies and to obtain a list of more robust DEGs between GC and normal control tissues, guidelines outlined by Ramasamy et al. 34 for meta-analyses of gene expression microarray datasets were followed. The R packages MetaQC 35 was used for quality control (QC). MetaQC uses six quantitative QC parameters: (i) measures of internal QC; (ii) measures of external QC; (iii) accuracy QC of featured genes; (iv) accuracy QC of the pathway; (v) consistency of QC in the ranking of featured genes; and (vi) consistency QC in the ranking of the pathway. The mean rank of all QC measures in each dataset was also determined as a quantitative summary score by calculating the ranks of each QC measure among all included datasets.
Disease Gene expression signatures. was used to the identify DEGs in GC. A moderated t-statistic was used to calculate the P values for each dataset, and a meta-analysis was conducted with a fixed-effect model 39 www.nature.com/scientificreports www.nature.com/scientificreports/ https://www.ebi.ac.uk/chembl/) 41 were mapped using GC cell line names followed by manual inspections. Meta-information for compound-induced gene expressions, including the cell line types as well as the treatment durations and drug concentrations was retrieved. Only small-molecule perturbagens having high-quality gene expression profiles (is_gold = 1, annotated in the meta-information) were used for further analysis.
Compound activity profiles. Compound response activity data, described as the half-maximal inhibitory concentrations (IC 50 ) in GC cell lines, were retrieved from ChEMBL. As the IC 50 values for a given compound could vary for the same cell line across different studies, the median IC 50 value was used. Compounds included in the ChEMBL and LINCS were mapped using The International Union of Pure and Applied Chemistry International Chemical Identifier keys. Additionally, the area under the curve (AUC) values for compound activity data in GC cell lines were retrieved from the Cancer Therapeutic Response Portal (CTRP ver 2, https://portals. broadinstitute.org/ctrp.v2.1/) 42 . Sensitivity levels were measured in the form of cellular ATP levels as a surrogate for cell number and growth using CellTiter-Glo assays 43 . A compound-performance score was computed at each concentration of compound. The AUC using percent-viability scores was computed as a metric of sensitivity given that AUC reflects both relative potency and the total level of inhibition observed for a compound across CCLs. Median AUC values across multiple cell lines were used. Compounds were categorized into active (IC 50 < 10 μM) and inactive groups (IC 50 ≥ 10 μM) based on their activities in cell lines. An IC 50 value of 10 μM was chosen as an activity threshold because compounds with IC 50 ≥ 10 μM in primary screenings are often not pursued 44 . Reverse Gene expression score (RGes) Computation and summarization. The method used to calculate RGES outcomes was adapted from the previously described Connectivity Map method 45 . Briefly, genes were initially ranked by their expression values for each compound. An enrichment score for each set of upregulated and downregulated disease genes was computed based on the positions of the genes in the ranked list. RGES values emphasize the reversal correlation by capturing the reversal relationship between the DEGs and compound-induced changes in gene expressions. Therefore, a lower negative RGES indicates a greater likelihood of reversing changes in disease gene expressions, and vice versa. In addition, Spearman's correlation coefficient, Pearson correlation coefficient, and cosine similarity were computed between the DEGs and compound activities as an alternate means of computing the reversal relationship between DEGs and active compounds 46 . The databases can list multiple gene expression profiles associated with one compound due to testing in various cell lines, compound treatment concentrations, and compound treatment durations, which resulted in multiple RGES outcomes for one compound that could reverse disease gene expression. Given these variations, sRGES were weighted and calculated. Results obtained for 10 μM drug concentrations and 24 h treatments were used to define the reference conditions. The analysis code and an example are provided at https://github.com/Bin-Chen-Lab/RGES.
Identification of Reversed Genes.
In cases for which multiple compound activity IC 50 data were available for one compound, median IC 50 values were calculated. In cases for which multiple gene expression profiles yielded multiple RGES values for one compound, a median RGES value was calculated from the GC cell lines. Each gene expression profile was sorted according to its expression value. Upregulated genes were ranked high (i.e., on the top), whereas downregulated genes were ranked low (i.e. on the bottom). Among the upregulated genes, reversal genes were defined as those that were ranked lower in the inactive group (IC 50 < 10 μM) than in the inactive group (IC 50 ≥ 10 μM). In contrast, among the downregulated genes, the reversal genes were defined as those that were ranked higher in the active group than in the inactive group. A leave-one-compound-out cross-validation approach was used to find genes having reversed expressions 47 . For each trial, one compound was removed and the reversed genes were then identified using the approach described above. Only those genes that were significantly reversed in all trials were retained. Genes having P < 0.1 in all trials were considered as reversal genes. statistical Analysis. The degrees of similarity in the gene expressions between tumor samples from the GEO and GC cell lines from the CCLE were assessed by Spearman's rank correlation testing, as was the similarity of RGES and IC 50 from ChEMBL or AUC from CCLE. A Wilcoxon signed-rank test was used to assess differences between RGES across the same and different cell lines, longer (fferent cell lines, <24 h) treatment durations, and higher (≥10 μM) and lower (<10 μM) drug concentrations. P values were adjusted with a Benjamini and Hochberg's false discovery rate method to correct for multiple testing.
Data Availability
The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request. | 4,012.4 | 2019-02-25T00:00:00.000 | [
"Medicine",
"Computer Science"
] |
Mapping Population Distribution from High Resolution Remotely Sensed Imagery in a Data Poor Setting
: Accurate mapping of population distribution is essential for policy-making, urban planning, administration, and risk management in hazardous areas. In some countries, however, population data is not collected on a regular basis and is rarely available at a high spatial resolution. In this study, we proposed an approach to estimate the absolute number of inhabitants at the neighborhood level, combining data obtained through field work with high resolution remote sensing. The approach was tested on Ngazidja Island (Union of the Comoros). A detailed survey of neighborhoods at the level of individual dwellings, showed that the average number of inhabitants per dwelling was significantly different between buildings characterized by a different roof type. Firstly, high spatial resolution remotely sensed imagery was used to define the location of individual buildings, and second to determine the roof type for each building, using an object-based classification approach. Knowing the location of individual houses and their roof type, the number of inhabitants was estimated at the neighborhood level using the data on house occupancy of the field survey. To correct for misclassification bias in roof type discrimination, an inverse calibration approach was applied. To assess the impact of variations in average dwelling occupancy between neighborhoods on model outcome, a measure of the degree of confidence of population estimates was calculated. Validation using the leave-one-out approach showed low model bias, and a relative error at the neighborhood level of 17%. With the increasing availability of high resolution remotely sensed data, population estimation methods combining data from field surveys with remote sensing, as proposed in this study, hold great promise for systematic mapping of population distribution in areas where reliable census data are not available on a regular basis. and the absence of houses in the building layer that are located under trees. In Voidjou, the population is overestimated, since the UZ hosts a large portion of immigrants with household characteristics strongly deviating from the general trend on the island.
Introduction
Obtaining reliable estimates of population numbers is important for policy-making, urban planning, and administration, both at the regional and local scale; and it is also fundamental for risk assessment related to natural and human-induced hazards, e.g., References [1][2][3]. According to the United Nations (2015), a count of the population-or 'census'-must be based on individuals, must be exhaustive and done simultaneously for a given territory, and should be organized at regular intervals, at least every 10 years. The major drawbacks of a census are that it is a complex exercise, which is time-consuming, labor intensive, and expensive [4,5]. To overcome the drawbacks of a traditional census, several countries are nowadays combining different administrative registers to obtain a count of the population and its characteristics (e.g., Belgium, Iceland, The Netherlands) [6]. In data-poor regions though, automating the counting process is not possible due to a lack of up-to-date national civil registers. Therefore, organization of traditional censuses is still required.
If no recent census data are available, the use of remote sensing may be a cost-effective alternative for estimating population numbers e.g., References [7][8][9][10][11][12][13][14][15][16], as it allows large-coverage mapping of human settlements at increasingly higher spatial resolutions. Satellite imagery also makes it possible to provide regular mapping updates for the same area. While estimation of the population from remotely sensed data is not expected to produce results as accurate as a traditional census, it can be a useful approach for producing population estimates between two census dates or in the absence of census data.
In applications of population estimation, remotely sensed imagery may be used in two different ways: (a) To disaggregate census data to a finer spatial scale, using areal interpolation methods; or (b) To define a relationship between remote sensing derived variables and population numbers, using a statistical modelling approach [14]. In the last category of methods, earlier studies have shown that population can be estimated from remote-sensing derived information, such as the number of dwelling units, the area of urban land, the area occupied by different land uses, or even directly from spectral and textural information, available at the pixel scale. Multiplying the number of dwellings in a study area obtained through visual interpretation of aerial photography or high resolution satellite imagery, by the average number of occupants per dwelling obtained through a field survey is considered as time consuming, but it is one of the most accurate ways for population estimation using remotely sensed data [9,11,15,17]. Automated extraction of building outlines from high-resolution imagery using object-based image analysis (OBIA), may substantially lower the cost of data processing, yet the effectiveness of OBIA for successfully identifying individual dwellings strongly depends on spectral/spatial characteristics of the imagery used, as well as the morphological characteristics of the built-up area (i.e., spacing between buildings, sufficient spectral contrast between buildings and their surroundings) [17,18]. If high-resolution LIDAR data or stereoscopic image acquisitions are available, Digital Surface Models (DSM) of the built-up area can be produced. In this case, elevation differences can be used to improve the detection of individual building structures, extract footprint and volume for each building, and calculate available floor space [3,4,19]. Alternatively, in some studies, the number of floors is estimated through building shadow analysis, and in combination with roof area, is used to obtain an estimate of the total floor space per building [20,21]. If no high-resolution data is available, population may be estimated based on the total built-up area, or based on the area covered by different land-use classes, or morphologically homogeneous zones extracted from medium-resolution satellite imagery, using simple regression approaches [10,12,13,16]. Some authors have also explored the potential of directly using spectral and textural responses at the pixel level, to develop regression models to estimate population [7][8][9]. At even coarser scales, imagery showing urban extent has been combined with night-time light emissions, to assess population distribution at continental to global scales [14,19,22,23]. When estimating population from remotely sensed data, numerous physical and socioeconomic variables, obtained from ancillary data sources, can be incorporated as well (e.g., distance to the Central Business District, accessibility to the transport system, slope, period when urban area has been built) [7,24]. By including more variables, the complexity of the model will increase, but the accuracy and the robustness of the model are likely to improve.
In recent years, several studies have focused on estimating population in data-poor regions using high resolution remotely sensed imagery. Population estimation in these studies is mostly based on identifying the number of dwellings, or on estimating the total rooftop area [20,25,26]. Estimates of population are then obtained (a) by multiplying the number of dwellings with an estimated number of people per building structure [25,27], or (b) by multiplying the rooftop area of all buildings with the average number of people per square meter [20,25,26]. Spatial resolution of the imagery in relation to object size, is an important factor for successful extraction of individual dwellings. In informal settlements, where building size may be well below 100 m 2 , an image resolution of 0.5 m will not be enough [28], and image resolutions of at least 0.3 m will be required to capture small building structures [18]. While OBIA may provide good results for roof top delineation of individual dwellings in well-structured, low to medium dense urban areas [29][30][31], identifying individual dwellings in high-density urban areas, in informal settlements, or in environments where roof type materials show strong spectral similarity with their surroundings remains problematic, and manual digitizing is often the only option left to obtain reliable dwelling counts for population estimation [26,[31][32][33]. With the rise of volunteered geographic information initiatives (e.g., www.openstreetmap.org), accurate data on spatial features (e.g., buildings, roads . . . ) is rapidly becoming available for many parts of the world, including developing countries [17]. Therefore, using information on dwelling location, obtained from such data layers, in combination with information on dwelling characteristics captured through remote sensing is a promising avenue for population estimation. The most obvious dwelling characteristic that can be identified from high-resolution imagery is roof type, which in developing countries is often linked to type of housing, household demographics, socio-economic status, and health conditions [34][35][36]. Assuming that household size significantly differs between houses with different roof types, roof type discrimination may be very useful in estimating population numbers, based on available data about the location of individual dwellings.
In this study, we propose an approach to estimate population numbers at neighborhood level, combining information on dwelling location and roof type, obtained from remotely sensed data, with data on household size collected through fieldwork. Given spectral confusion between roof construction materials and the surrounding environment, the complexity of the residential structure, and the absence of ancillary data sources, dwelling locations in our study were obtained by pin-pointing building locations on high-resolution imagery (Pléiades), whilst roof type was extracted from the imagery, using object-based random forest classification. An inverse calibration approach was applied to correct for misclassification bias in population estimation. Uncertainty in model outcome, caused by variations in average dwelling occupancy between neighborhoods was assessed as well. The proposed method was applied on Ngazidja Island (Union of the Comoros). In this data-poor region, no census has been organized since 2003, and there is a need for up-to-date information about population distribution on the island. Model calibration in our study was based on a sample of neighborhoods, for which populations numbers were collected in the field. To assess the accuracy of model outcome, a cross-validation approach was applied using the leave-on-out method. While for this case study, individual dwellings on the island were identified by manually pinpointing dwelling locations on the imagery, with the increased availability of volunteered geographic information datasets, the method proposed might be useful in other areas where such data are readily available, with less cost and effort for data collection.
Study Area
The method proposed in this study was applied on Ngazidja Island (Union of the Comoros) ( Figure 1). Located northwest of Madagascar, in the Mozambique Channel, Ngazidja is the westernmost island of the Comoros archipelago, and is prone to volcanic hazards. Karthala volcano is the site of recent volcanic activities on the island. In the future, the risk of population and infrastructure being affected by the next eruption is high, because urban areas are surrounding Karthala volcano (Figure 1). Owing to limited budgets and more urgent priorities, the last census in the Union of the Comoros dates back to 2003. There is hence a strong need for a reliable, up-to-date estimate of the population number and its distribution. Considering the number of people sharing the same dwelling as a criterion for household definition, according to the last census, the average household has 4.1 members [38]. One dwelling can consist of different building units. It has also been observed that household size grew with an increase of employed people. This is explained by the fact that, as the number of household members earning money increases, there is the tendency to cohabitate with other families [38].
House Characteristics
On Ngazidja Island, three main types of houses were found, differing through the building material used: plant material (Figure 2a Considering the number of people sharing the same dwelling as a criterion for household definition, according to the last census, the average household has 4.1 members [38]. One dwelling can consist of different building units. It has also been observed that household size grew with an increase of employed people. This is explained by the fact that, as the number of household members earning money increases, there is the tendency to cohabitate with other families [38].
House Characteristics
On Ngazidja Island, three main types of houses were found, differing through the building material used: plant material (Figure 2a No official digital delineation of the houses exists, but according to the 2003 census, only 7% of the roofs for Ngazidja Island would have been made of plant material. During recent fieldwork, it was observed that use of this material has become negligible on Ngazidja Island. Therefore, this type of house was not considered in this study. Since metal sheet houses are less expensive than houses with concrete roofs, according to the census of 2003, the majority of the buildings had a roof made of metal sheet (70.2%). The remaining roofs were made of concrete (22.8% in 2003). The National Census Direction considers that the construction material used for the houses reflects the financial situation of the inhabitants, since most Comorians own their house (84.35%) [38]. No official digital delineation of the houses exists, but according to the 2003 census, only 7% of the roofs for Ngazidja Island would have been made of plant material. During recent fieldwork, it was observed that use of this material has become negligible on Ngazidja Island. Therefore, this type of house was not considered in this study. Since metal sheet houses are less expensive than houses with concrete roofs, according to the census of 2003, the majority of the buildings had a roof made of metal sheet (70.2%). The remaining roofs were made of concrete (22.8% in 2003). The National Census Direction considers that the construction material used for the houses reflects the financial situation of the inhabitants, since most Comorians own their house (84.35%) [38].
Collection of Field Data
During fieldwork done in the summer of 2016, information about the number of residents and the type of dwelling they are living in was collected. While preparing the field work, a statistically representative number of large (cities) and small (villages) urban areas were randomly selected. Within these urban areas, 42 "Urban Zones" (UZ) were chosen. UZ were defined as built-up areas delineated by the road network ( Figure 3). UZ can be considered as representative of neighborhoods, which do not exist as an administrative division in the Union of the Comoros. Whereas, one UZ was selected in smaller urban areas, in larger areas two UZ were selected-one with a higher building density and one with a lower density-to deal with possible population density variations related to the UZ location, with respect to the center of the urban area ( Figure 3).
During the fieldwork, all of the on average 90 buildings within each selected UZ were visited. A trained surveyor first collected information about the number of people sleeping on a regular basis in the building, by asking the owner of the building. To double check the information provided, the number of people within each age range was also documented. Owing to limited time to collect the data, information on houses where the owner was absent was collected through a well-informed contact person, designated by the chief of the village. In the Union of the Comoros, social contacts in a village are strong, and trust has been put in the information collected via this alternative channel. However, errors in the data collection process cannot be excluded. Secondly, the surveyor collected information about the roof type of the building. In some cases, stepping back to correctly define the roof type was not possible due to the built-up area configuration. The roof was then identified as unknown. Most of the buildings present on Ngazidja Island have one level (ground floor) only. In case of multi-story buildings, the number of floors was considered in the data collection process, with people living on each floor being considered in the counting of the total number of residents for that building. Buildings with a non-residential function were also included in the survey, because
Collection of Field Data
During fieldwork done in the summer of 2016, information about the number of residents and the type of dwelling they are living in was collected. While preparing the field work, a statistically representative number of large (cities) and small (villages) urban areas were randomly selected. Within these urban areas, 42 "Urban Zones" (UZ) were chosen. UZ were defined as built-up areas delineated by the road network ( Figure 3). UZ can be considered as representative of neighborhoods, which do not exist as an administrative division in the Union of the Comoros. Whereas, one UZ was selected in smaller urban areas, in larger areas two UZ were selected-one with a higher building density and one with a lower density-to deal with possible population density variations related to the UZ location, with respect to the center of the urban area ( Figure 3).
During the fieldwork, all of the on average 90 buildings within each selected UZ were visited. A trained surveyor first collected information about the number of people sleeping on a regular basis in the building, by asking the owner of the building. To double check the information provided, the number of people within each age range was also documented. Owing to limited time to collect the data, information on houses where the owner was absent was collected through a well-informed contact person, designated by the chief of the village. In the Union of the Comoros, social contacts in a village are strong, and trust has been put in the information collected via this alternative channel. However, errors in the data collection process cannot be excluded. Secondly, the surveyor collected information about the roof type of the building. In some cases, stepping back to correctly define the roof type was not possible due to the built-up area configuration. The roof was then identified as unknown. Most of the buildings present on Ngazidja Island have one level (ground floor) only. In case of multi-story buildings, the number of floors was considered in the data collection process, with people living on each floor being considered in the counting of the total number of residents for that building. Buildings with a non-residential function were also included in the survey, because building function cannot be defined from remotely sensed imagery. Therefore, the average number of inhabitants per building was defined including buildings without residents. During the field campaign, data was collected for about 4000 buildings, located in 30 villages, spread over the entire study area ( Figure 4). For 2650 buildings, visual identification of the roof type (concrete, old and new metal sheet) was possible.
building function cannot be defined from remotely sensed imagery. Therefore, the average number of inhabitants per building was defined including buildings without residents. During the field campaign, data was collected for about 4000 buildings, located in 30 villages, spread over the entire study area (Figure 4). For 2650 buildings, visual identification of the roof type (concrete, old and new metal sheet) was possible. building function cannot be defined from remotely sensed imagery. Therefore, the average number of inhabitants per building was defined including buildings without residents. During the field campaign, data was collected for about 4000 buildings, located in 30 villages, spread over the entire study area (Figure 4). For 2650 buildings, visual identification of the roof type (concrete, old and new metal sheet) was possible. Population data obtained through field work was statistically analyzed (see results section), and evidence of a significant difference in population between buildings with different roof types was reported. Training data for object-based land cover classification, was also collected during fieldwork. Ground control points were acquired with a Garmin eTrex 30× GPS, in the center of homogenous areas representative of the following six land cover classes: vegetation (VEG), bare soil (BSO), asphalt (ASP), concrete (CON), old metal sheet (OMET), and new metal sheet (NMET). Ground control points related to roof types were taken at the edge of the houses, and afterward manually relocated to the center of the roof. A total of~500 points per class were collected.
Preprocessing of Pléiades Imagery
Three high spatial resolution Pléiades scenes were obtained for this study [2 m resolution for the four multispectral bands-blue, green, red, near-infrared (NIR)-0.5 m resolution for the panchromatic band] (Figure 4). Acquired in 2013, the images cover a major part of Ngazidja Island. Clouds in the region are a major issue and explain why only 95% of all villages located in the Pléiades acquisition zone, can be observed in cloud and shadow free conditions in at least one of the scenes ( Figure 4). The Pléiades images were orthorectified using ground control points collected on site, and a 30 m resolution DEM (SRTM and TerraSAR-X). Pléiades digital numbers were converted to top-of-atmosphere reflectance.
Building Layer Definition
As a result of the absence of reliable ancillary data showing the delineation of individual buildings, building locations in the study area were identified on the satellite imagery. Automatic extraction of the built-up footprint was not possible because of spectral similarities between some roof construction materials and the surrounding environment, which often consisted of bare soil or cemented surfaces, and because of the complexity of the residential structure. Thus, locations of individual buildings were manually registered by pinpointing a central position on each roof on the pansharpened Pléiades imagery, producing a point layer of building locations. The digitizing was done as a collective work, each participant following a clear protocol for identifying houses in allocated areas. Small building infrastructure (<15 m 2 ) located close to other larger building units was not included, because these smaller buildings usually represent kitchens located outside the main house.
Roof Type Characterization
In a second stage, the roof type for each point location in the building layer was extracted from the Pléiades data. An object-based classification approach was used to group pixels in the image scene into spectrally homogeneous objects, using the four multispectral bands and the panchromatic band, as described in Reference [39]. Next to spectral variables, textural measures were extracted to characterize each object, thus increasing the amount of information available for classification [40,41]. The segmentation of the image into homogeneous objects was done in eCognition ® (Munich, Germany), which uses a region growing algorithm progressively merging neighboring pixels in a scene based on several parameters controlling the size, compactness, and smoothness of the image objects produced [39]. Parameterization of the segmentation criteria is left to the user who defines the optimal settings through an iterative process, where parameters are progressively modified through visual inspection of the obtained segments [42,43]. On account of spectral similarities between some roof construction materials and the surrounding environment, in the context of this study area, a slight over-segmentation of the image was preferred. This implied that one real-world object (i.e., one building) will often be covered by several smaller image objects, to ensure that the characteristics of the objects, which include a point in the building layer, are representative of the roof type the point is located on and are not contaminated by spectrally similar materials possibly present next to building structures.
For each object in the segmented image, 20 spectral and textural variables were calculated (Table 1). Using these variables, a random forest classifier was applied, assigning each object in the image to one of the following six land cover classes: vegetation (VEG), bare soil (BSO), asphalt (ASP), concrete (CON), old metal sheet (OMET), and new metal sheet (NMET). The classifier was run using an ensemble of 1000 trees, and the set of ground control points (~1000 points per scene,~150 points per land cover class per scene) collected on site in August 2014. The object-based land cover map produced for the study area allowed allocating a land cover class to each building identified in the buildings point layer, and to define the total number of buildings assigned to each class within predefined urban zones. Some confusion in the classification in the labeling of roof types could not be avoided though. Whilst some buildings may have been assigned the wrong roof type, some may have been labelled as belonging to a non-building class, like vegetation, bare soil, or asphalt. To correct for such misclassification bias, an inverse calibration approach was applied, using information obtained from the error matrix of the classification, as described in References [44][45][46]. An independent validation sample of the building segments (i.e., those segments that include a point in the building layer), was randomly selected in each scene (~200 points per scene) and labelled through visual inspection of the Pléiades imagery. By comparing land cover labels assigned by the classifier against manually assigned roof types (CON, OMET, NMET) an error matrix was constructed, with the six land cover classes defining the rows in the matrix and the three roof types defining the columns. By dividing the elements in the rows of the error matrix by the row total, one can derive the probability that a building has a roof type j, given that it is classified as land cover type i. These probabilities could then be used to correct the total count of buildings of type j in each neighborhood, according to the following equation: where N j,corr is the corrected number of buildings with roof type j, C the total number of land cover classes, N i the number of buildings labelled by the classifier as belonging to land cover type i, and p(j|i) the probability that a building labelled as land cover class i actually has roof type j.
Population Estimation Model
Data on the number of residents per building type were combined with the number of buildings of each type identified in each UZ, to assess the population at the neighborhood level. Given no significant difference was observed between the number of residents in houses with old metal and new metal roofs (see results below), both types of houses were grouped together, and the number of residents P i for an UZ was estimated as a weighted sum of the corrected number of houses of each type for that UZ, N CONcorr,i and N METcorr,i , obtained through remote sensing multiplied by the average number of residents in concrete and metal houses r CON and r MET , obtained from the field survey: P i = N CONcorr,i r CON + N METcorr,i r MET (2) Considering that the average number of inhabitants per building changes from one UZ to another, the average number of residents per building type (r CON , r MET ) used in Equation (2) was calculated as the mean of the average number of residents for that building type, obtained for each of the 42 visited UZ during the field survey. The uncertainty of the estimated number of residents for each UZ, caused by the variation in the average number of residents per building type observed over different neighborhoods, could then be calculated as follows: with s i the standard deviation of the population estimate P i ; N CONcorr,i and N METcorr,i , the estimated number of buildings of each type for the UZ i considered; s 2 CON and s 2 MET , the variances of the average number of residents for concrete and metal roof houses, respectively; and cov CON,MET , the covariance of the average number of residents for concrete and metal roof houses observed in the visited UZ.
The approach was first tested on the 42 visited UZ to define the predictive accuracy of the estimation, before extending it to the entire study area. Validation of the population predictions was based on the leave-one-out method, using for each UZ, a model calibrated with all field data, except for the data of the UZ itself. Predictive accuracy of the model was evaluated with two error measures: the proportional prediction error (e s ) and the absolute proportional prediction error (ABS(e s )).
ABS(e s ) = | P s − P s | P s .
Both measures compared the predicted population value ( P s ), with the value observed in the field (P s ) and were calculated for each of the 42 visited UZ. These error variables indicate (1) by their median, the bias (e s ) and the magnitude (ABS(e s )) of the error; and (2) by their interquartile range, their distributional dispersion, as discussed in References [9,47].
Relation between Number of Residents and Building Type
According to our field survey, one building hosts on average 3.40 ± 0.65 people, whereas the 2003 census estimated the average household size to be 4.1. Important to notice here is that members of the same household, as defined by the 2003 census, may reside in different building units, whilst our estimate referred to the number of residents per building. Moreover, buildings with a non-residential function were also integrated in our estimation, which lowered our average.
Making a distinction between buildings with different roof types, we found out that, based on our fieldwork, on average 3.63 ± 0.98 people resided in buildings with CON roofs (Figure 5), 3.01 ± 1.11 people resided in buildings with NMET roofs, and 3.23 ± 1.21 people in buildings with OMET roofs. Welch's t-test was performed to test for significant differences in the average number of residents, in buildings of different types. No significant difference could be found between the average number of residents in NMET and OMET houses (p-value > 0.05). This is the reason why the population estimations for NMET and OMET were merged into one new category, namely metal sheet roofs (MET). The average number of residents in these type of houses is 3.13 ± 0.98 ( Figure 5). A significant difference was observed between the number of residents in metal sheet roof houses and in houses with concrete roofs (p-value ≤ 0.05). So, on average more residents are found in buildings with a concrete roof than in buildings with a metal roof. According to the census of 2003, household size increased with an increase of employed people (see also Section 2.1.1). If one assumes that employed people can more easily afford the construction of a house with a concrete roof, this may explain why overall concrete houses had a higher number of residents. Based on our fieldwork, we also analyzed if household size might be influenced by features of the built-up area, such as size, building density, accessibility, shortest distance of a house to other houses in the neighborhood, which could be easily defined from the remote sensing data for each visited UZ. We concluded that the number of residents per building was not significantly influenced by any of these spatial variables, but solely by the type of building, as characterized by its roof type.
Building and Roof Type Mapping
Over 55,000 buildings were manually digitized as point locations, through visual interpretation of the Pléiades images. By overlaying these point locations with the object-based land cover classification, a land cover class label was assigned to each building. As can be seen in Figure 6, the Mean Decrease Gini, which indicates the importance of each variable in the random forest classification process; shows that the most important variables contributing to the separation of the six land-cover classes in this study are spectral variables, such as the ratio green, NIR and blue, the ratio green/blue, and the mean red. Textural variables (standard deviation of spectral bands, metrics derived from the GLCM) proved to have less importance in the classification process than spectral variables. It should be noted though, that the order of importance of the variables might be different when using other segmentation parameters [40], and is also dependent on application context (study area, sensor data used).
Based on a random sample of image segments, including a building point location, the accuracy of roof type extraction was assessed. Table 2 shows the error matrix, comparing the roof type for each building defined in the validation set, against the land cover label assigned by the classifier. The validation indicated an overall classification accuracy of 75%. As can be noticed, the producer's accuracies for the three roof types range from 0.80 for concrete roofs (CON), to 0.64 for old metal roofs (OMET), and 0.72 for new metal roofs (NMET). Buildings with OMET and NMET roofs were frequently assigned to CON by the classifier. The opposite occurred less frequently. This implied that the number of buildings with metal roofs was likely to be underestimated by the classifier. In some cases, building segments were assigned by the classifier to the non-building classes: vegetation, bare soil, or asphalt (VEG, BSO, ASP). This may have been due to (1) errors in the delineation of the image segments, leading to the presence of building, as well as non-building, land cover within the same segment; (2) some degree of spectral confusion between roof types and other land cover classes. So, on average more residents are found in buildings with a concrete roof than in buildings with a metal roof. According to the census of 2003, household size increased with an increase of employed people (see also Section 2.1.1). If one assumes that employed people can more easily afford the construction of a house with a concrete roof, this may explain why overall concrete houses had a higher number of residents. Based on our fieldwork, we also analyzed if household size might be influenced by features of the built-up area, such as size, building density, accessibility, shortest distance of a house to other houses in the neighborhood, which could be easily defined from the remote sensing data for each visited UZ. We concluded that the number of residents per building was not significantly influenced by any of these spatial variables, but solely by the type of building, as characterized by its roof type.
Building and Roof Type Mapping
Over 55,000 buildings were manually digitized as point locations, through visual interpretation of the Pléiades images. By overlaying these point locations with the object-based land cover classification, a land cover class label was assigned to each building. As can be seen in Figure 6, the Mean Decrease Gini, which indicates the importance of each variable in the random forest classification process; shows that the most important variables contributing to the separation of the six land-cover classes in this study are spectral variables, such as the ratio green, NIR and blue, the ratio green/blue, and the mean red. Textural variables (standard deviation of spectral bands, metrics derived from the GLCM) proved to have less importance in the classification process than spectral variables. It should be noted though, that the order of importance of the variables might be different when using other segmentation parameters [40], and is also dependent on application context (study area, sensor data used).
Based on a random sample of image segments, including a building point location, the accuracy of roof type extraction was assessed. Table 2 shows the error matrix, comparing the roof type for each building defined in the validation set, against the land cover label assigned by the classifier. The validation indicated an overall classification accuracy of 75%. As can be noticed, the producer's accuracies for the three roof types range from 0.80 for concrete roofs (CON), to 0.64 for old metal roofs (OMET), and 0.72 for new metal roofs (NMET). Buildings with OMET and NMET roofs were frequently assigned to CON by the classifier. The opposite occurred less frequently. This implied that the number of buildings with metal roofs was likely to be underestimated by the classifier. In some cases, building segments were assigned by the classifier to the non-building classes: vegetation, bare soil, or asphalt (VEG, BSO, ASP). This may have been due to (1) errors in the delineation of the image segments, leading to the presence of building, as well as non-building, land cover within the same segment; (2) some degree of spectral confusion between roof types and other land cover classes. Applying the misclassification correction procedure explained in Section 2.5, the majority of the buildings in the study area had a concrete roof (45%), followed by buildings with new metal sheet roofs (41%). A minority of the buildings had an old metal sheet roof (14%). Applying the misclassification correction procedure explained in Section 2.5, the majority of the buildings in the study area had a concrete roof (45%), followed by buildings with new metal sheet roofs (41%). A minority of the buildings had an old metal sheet roof (14%). where VEG = vegetation, BSO = bare soil, ASP = asphalt, CON = concrete, OMET = old metal sheet, NMET = new metal sheet. The confusion observed between specific classes is further used to apply the misclassification correction procedure. This increase is also confirmed by the field survey data of 2016. One explanation for this sudden increase were the lahar events that followed two mildly explosive eruptions at Karthala in 2005, remobilizing the ash deposits emplaced around the summit area and making them more accessible for exploitation, thus decreasing the price of the raw material and facilitating the construction of concrete houses [48]. Another explanation is that concrete houses were increasingly preferred by the local population. Use of concrete is more sustainable compared to the use of metal sheets, which get rusty by the years. Concrete houses also offer more protection against heat during the day and are considered a symbol of achievement in the Comorian society. This increase is also confirmed by the field survey data of 2016. One explanation for this sudden increase were the lahar events that followed two mildly explosive eruptions at Karthala in 2005, remobilizing the ash deposits emplaced around the summit area and making them more accessible for exploitation, thus decreasing the price of the raw material and facilitating the construction of concrete houses [48]. Another explanation is that concrete houses were increasingly preferred by the local population. Use of concrete is more sustainable compared to the use of metal sheets, which get rusty by the years. Concrete houses also offer more protection against heat during the day and are considered a symbol of achievement in the Comorian society.
Population Estimation
The population estimation model proposed in this study was first applied to the 42 visited UZ, to enable validation using the leave-one-out method. For each UZ, the number of buildings of each type obtained by combining the building layer (Figure 7b) with the results of object-based classification (Figure 7c,d), was first corrected by applying the inverse calibration approach, before being multiplied with the average number of residents per building type, calculated as the mean of the average household sizes observed in the other 41 UZ (Equation (2); Figure 7e). In addition to this population estimate, for each UZ a measure of the uncertainty involved in the assessment was calculated, applying (Equation (3); Figure 7f). Estimated population was then compared with the population observed in the field in 2016 (Equations (4) and (5)).
Population Estimation
The population estimation model proposed in this study was first applied to the 42 visited UZ, to enable validation using the leave-one-out method. For each UZ, the number of buildings of each type obtained by combining the building layer (Figure 7b) with the results of object-based classification (Figure 7c,d), was first corrected by applying the inverse calibration approach, before being multiplied with the average number of residents per building type, calculated as the mean of the average household sizes observed in the other 41 UZ (Equation (2); Figure 7e). In addition to this population estimate, for each UZ a measure of the uncertainty involved in the assessment was calculated, applying (Equation (3); Figure 7f). Estimated population was then compared with the population observed in the field in 2016 (Equations 4 and 5). Taking model uncertainty into account, for most UZ, the reference population falls within one standard deviation of the estimated population value (Figure 8). For some UZ (e.g., Koule, Famare, Dima, Nkourani mkanga), an underestimation of observed population numbers was obtained. This type of error may be partly attributed to the development of new houses, which were not present during the acquisition of the remotely sensed imagery in 2013, as well as to the absence of houses in the building layer that were located under trees and were not visible in the remotely sensed images. The overestimation of the population for Voidjou (Figure 8), can be explained by the specificity of the neighborhood. This UZ hosts a large portion of immigrants, with household characteristics strongly deviating from the general trend on the island (only 1.91 people reside in buildings with CON roofs and 1.81 people in MET roofs).
Remote Sens. 2018, 10, x FOR PEER REVIEW 13 of 20 Taking model uncertainty into account, for most UZ, the reference population falls within one standard deviation of the estimated population value (Figure 8). For some UZ (e.g., Koule, Famare, Dima, Nkourani mkanga), an underestimation of observed population numbers was obtained. This type of error may be partly attributed to the development of new houses, which were not present during the acquisition of the remotely sensed imagery in 2013, as well as to the absence of houses in the building layer that were located under trees and were not visible in the remotely sensed images. The overestimation of the population for Voidjou (Figure 8), can be explained by the specificity of the neighborhood. This UZ hosts a large portion of immigrants, with household characteristics strongly deviating from the general trend on the island (only 1.91 people reside in buildings with CON roofs and 1.81 people in MET roofs). Population in Koule, Famare, Dima, and Nkourani mkanga is underestimated. This may be partly attributed to the development of new houses, which were not present during the acquisition of the remotely sensed imagery in 2013, and the absence of houses in the building layer that are located under trees. In Voidjou, the population is overestimated, since the UZ hosts a large portion of immigrants with household characteristics strongly deviating from the general trend on the island.
Calculation of the proportional prediction errors showed a negligible model bias (es = −0.03) and a magnitude of the error ABS(es) = 0.17. The distributional dispersion of the bias and the magnitude of the error, indicated by the interquartile range, were 0.32 and 0.17, respectively. This indicated that there was quite some uncertainty involved in the population estimation process.
Model Extrapolation and Comparison with Census Prediction
The model calibrated on the 42 visited UZ was applied to the entire area covered by the Pléiades images ( Figure 4). After extracting, for each UZ, the number of houses and defining their type through object-based classification, the inverse calibration approach discussed in Section 2.5, was applied for dealing with misclassification bias in roof type discrimination. The method corrects for the confusion that exists between the CON and MET classes and allows assigning a roof type (CON or MET) to buildings that have been wrongly assigned to a non-building classes (VEG, BSO and ASP). The number of buildings allocated to a new class through the calibration procedure varies between UZ, and is shown for the capital Moroni and its surroundings in Figure 9a. Figure 9b shows the population estimation results, for the UZ of the capital Moroni, and their corresponding standard deviation ( Figure 9c). As expected, the standard deviation of the population estimate was low for areas with lower population numbers and higher for densely populated areas. Population in Koule, Famare, Dima, and Nkourani mkanga is underestimated. This may be partly attributed to the development of new houses, which were not present during the acquisition of the remotely sensed imagery in 2013, and the absence of houses in the building layer that are located under trees. In Voidjou, the population is overestimated, since the UZ hosts a large portion of immigrants with household characteristics strongly deviating from the general trend on the island.
Calculation of the proportional prediction errors showed a negligible model bias (e s = −0.03) and a magnitude of the error ABS(e s ) = 0.17. The distributional dispersion of the bias and the magnitude of the error, indicated by the interquartile range, were 0.32 and 0.17, respectively. This indicated that there was quite some uncertainty involved in the population estimation process.
Model Extrapolation and Comparison with Census Prediction
The model calibrated on the 42 visited UZ was applied to the entire area covered by the Pléiades images ( Figure 4). After extracting, for each UZ, the number of houses and defining their type through object-based classification, the inverse calibration approach discussed in Section 2.5, was applied for dealing with misclassification bias in roof type discrimination. The method corrects for the confusion that exists between the CON and MET classes and allows assigning a roof type (CON or MET) to buildings that have been wrongly assigned to a non-building classes (VEG, BSO and ASP). The number of buildings allocated to a new class through the calibration procedure varies between UZ, and is shown for the capital Moroni and its surroundings in Figure 9a. Figure 9b shows the population estimation results, for the UZ of the capital Moroni, and their corresponding standard deviation ( Figure 9c). As expected, the standard deviation of the population estimate was low for areas with lower population numbers and higher for densely populated areas. Aggregating the population estimates for the UZ to the village level for the entire study area, and comparing them with the 2013 population estimates from the 2003 census, which were based on a hypothetical growth rate [37], shows a good linear fit ( Figure 10); yet estimated population numbers were systematically lower than the ones predicted by the 2003 census. Not including the capital Moroni, which has a much higher population compared to the other villages, an R 2 value of 0.88 was obtained, which implied that a large part of the variation in the 2013 census predictions is explained by the proposed model. The difference between our results and those predicted by the census, are most likely due to uncertainties about the expected growth rate used by the National Census Direction. Aggregating the population estimates for the UZ to the village level for the entire study area, and comparing them with the 2013 population estimates from the 2003 census, which were based on a hypothetical growth rate [37], shows a good linear fit ( Figure 10); yet estimated population numbers were systematically lower than the ones predicted by the 2003 census. Not including the capital Moroni, which has a much higher population compared to the other villages, an R 2 value of 0.88 was obtained, which implied that a large part of the variation in the 2013 census predictions is explained by the proposed model. The difference between our results and those predicted by the census, are most likely due to uncertainties about the expected growth rate used by the National Census Direction. Aggregating the population estimates for the UZ to the village level for the entire study area, and comparing them with the 2013 population estimates from the 2003 census, which were based on a hypothetical growth rate [37], shows a good linear fit ( Figure 10); yet estimated population numbers were systematically lower than the ones predicted by the 2003 census. Not including the capital Moroni, which has a much higher population compared to the other villages, an R 2 value of 0.88 was obtained, which implied that a large part of the variation in the 2013 census predictions is explained by the proposed model. The difference between our results and those predicted by the census, are most likely due to uncertainties about the expected growth rate used by the National Census Direction.
Discussion
The modelling approach proposed in this paper combined high resolution image interpretation with targeted fieldwork to assess population at neighborhood level, for regions for which no population data are available (Figures 7e and 9b). The model proposed assumed that the average number of residents per building type was relatively stable. To deal with the observed variation in average number of residents per building between different UZ, population estimates for each UZ were accompanied by a measure of uncertainty. If the variation in the number of residents per building type among different UZ is higher, the uncertainty of the population estimate at UZ level will increase. The uncertainty of the population estimation at UZ level will also increase with the number of residents per UZ.
Comparison of estimated population, for the 42 visited UZ with population numbers observed in the field (Figure 8), shows some systematic over-and underestimations of population, which can be related to limitations in the approach proposed. Some of these limitations had to do with remote sensing image interpretation. Even though high spatial resolution imagery was used in this study for identifying building structures, visual interpretation was to some extent error-prone. First of all, new houses developed in the period between image acquisition and field work could not be identified from the Pléiades imagery (e.g., Famare, Dima, Nkourani mkanga, Foumbouni; Figure 11). Moreover, houses partly or fully covered by trees could not always be identified in the imagery (e.g., Koule; Figure 11). As such, underestimation of the actual number of population was likely in some UZ ( Figure 8). Foumbouni household characteristics strongly deviated from the general trend on the island, which explained why, despite a clear underestimation of the number of houses, no underestimation of the population was obtained for this outlier, (Figure 8)
Discussion
The modelling approach proposed in this paper combined high resolution image interpretation with targeted fieldwork to assess population at neighborhood level, for regions for which no population data are available (Figure 7e and 9b). The model proposed assumed that the average number of residents per building type was relatively stable. To deal with the observed variation in average number of residents per building between different UZ, population estimates for each UZ were accompanied by a measure of uncertainty. If the variation in the number of residents per building type among different UZ is higher, the uncertainty of the population estimate at UZ level will increase. The uncertainty of the population estimation at UZ level will also increase with the number of residents per UZ.
Comparison of estimated population, for the 42 visited UZ with population numbers observed in the field (Figure 8), shows some systematic over-and underestimations of population, which can be related to limitations in the approach proposed. Some of these limitations had to do with remote sensing image interpretation. Even though high spatial resolution imagery was used in this study for identifying building structures, visual interpretation was to some extent error-prone. First of all, new houses developed in the period between image acquisition and field work could not be identified from the Pléiades imagery (e.g., Famare, Dima, Nkourani mkanga, Foumbouni; Figure 11). Moreover, houses partly or fully covered by trees could not always be identified in the imagery (e.g., Koule; Figure 11). As such, underestimation of the actual number of population was likely in some UZ ( Figure 8). Foumbouni household characteristics strongly deviated from the general trend on the island, which explained why, despite a clear underestimation of the number of houses, no underestimation of the population was obtained for this outlier, (Figure 8) (on average, CON houses in the Foumbouni UZ host 2.87 people and MET houses 2.12 people). Figure 11. Comparison between houses detected from remote sensing imagery and houses observed during field work. In some UZ, the number of houses detected from remote sensing images is lower than the number of houses observed in the field. The development of new houses (e.g., Famare, Dima, Nkourani mkanga, Foumbouni) or houses partly or fully covered by trees (e.g., Koule) can explain this underestimation. The solid line is the trend 1:1 line.
Errors in roof type classification also affected the results obtained with our model. To limit the impact of these errors on population predictions, an inverse calibration procedure was applied to correct building counts obtained for different roof types, using information obtained from the classification's error matrix. This approach is commonly used to correct estimates of the total area of different cover types in land cover maps, obtained through image interpretation [44][45][46]. Using this method, the relative fraction of buildings with concrete and metal roofs within the study area, which based on the original image interpretation was 45.6% for CON and 49.4% for MET (5% of the Figure 11. Comparison between houses detected from remote sensing imagery and houses observed during field work. In some UZ, the number of houses detected from remote sensing images is lower than the number of houses observed in the field. The development of new houses (e.g., Famare, Dima, Nkourani mkanga, Foumbouni) or houses partly or fully covered by trees (e.g., Koule) can explain this underestimation. The solid line is the trend 1:1 line.
Errors in roof type classification also affected the results obtained with our model. To limit the impact of these errors on population predictions, an inverse calibration procedure was applied to correct building counts obtained for different roof types, using information obtained from the classification's error matrix. This approach is commonly used to correct estimates of the total area of different cover types in land cover maps, obtained through image interpretation [44][45][46]. Using this method, the relative fraction of buildings with concrete and metal roofs within the study area, which based on the original image interpretation was 45.6% for CON and 49.4% for MET (5% of the buildings were wrongly assigned to classes that do not belong to roof type classes), changed to 45% and 55%, respectively. It should be kept in mind though, that the error matrix only provides a global estimate of spectral confusion among land cover classes, which may not necessarily be representative of the classification errors occurring within one particular UZ. As such, the correction method cannot avoid that some local bias in the estimated number of buildings of different type may occur.
Another issue affecting the results was that when identifying buildings from the remotely sensed imagery, it was mostly not possible to obtain information on the function of the building or on its current use (e.g., vacant versus non-vacant buildings). This is why, when calculating the average number of residents per building through fieldwork, all buildings were taken in account, irrespective of their use. While this approach ensures that the population estimation implicitly accounts for the fact that a certain amount of building space in each UZ is not used for residential purposes, it is likely to lead to overestimation of the population in UZs where the share of non-residential use or vacant buildings is high, and vice versa. The proposed measure of uncertainty incorporates this effect.
While in our study not much variation in building height was observed, including height of buildings in the modelling might further improve the model performance, especially in regions where clear differences in building height are present. The increasing availability of high resolution remotely sensed data, including for data poor regions, holds great potential for further refining population estimation approaches relying on combining remote sensing with field work, as proposed in this study.
Considering that the average number of residents for concrete and metal roof buildings proved to be significantly different (p-value ≤ 0.05), we decided to incorporate housing type in our population estimation model. The question remains though if distinguishing between different roof types improves the predictive accuracy of the modelling. This will obviously depend on the degree to which the average number of residents in both types of buildings differs, yet it will also depend on differences in the share of both types of buildings at the level of each UZ, as well as on uncertainties involved in the modelling, including uncertainty caused by difficulties in roof type discrimination (see above). A slight improvement was observed in the population estimation per UZ, when integrating roof type discrimination in the model (e s = −0.03 and ABS(e s ) = 0.17), compared to a simplified model not distinguishing different building types (e s = −0.04 and ABS(e s ) = 0.15). This slight difference in model performance may have been largely attributed to the relatively small difference in average number of residents for concrete and metal roof buildings, compared to the variance of both distributions. Whilst a significant difference was observed, the p-value was only slightly below 0.05 (p-value con-met = 0.02). Although in this case study incorporating roof type in the population estimation model did not substantially improve prediction accuracy, compared to a model not distinguishing between different type of buildings; the modelling framework proposed will be useful in regions where differences in average number of residents for different types of houses are more substantial, and where the number of buildings of different types strongly differs between different urban zones.
Conclusions
Organizing a census is not an easy task and requires a high investment in resources, especially for developing countries which have more urgent priorities. This explains partially, why in many places in the world detailed population counts are not available on a ten-yearly basis. In this study, we explored the potential of high spatial resolution remotely sensed imagery, in combination with dedicated fieldwork, as a fast and less costly alternative for providing timely information on population numbers and population distribution. A population estimation model was defined, associating the number of houses of different types found within each neighborhood, obtained through remote sensing, with the average number of residents per building type, obtained through fieldwork.
As a first step in the approach, buildings were manually identified from high spatial resolution Pléiades imagery. An object-based classification approach was used to automatically detect the roof type of each building from the image data. To correct for bias in the count of buildings with different roof types, due to spectral confusion between different materials, an inverse calibration procedure was applied. Population estimation at the level of pre-defined urban zones was accomplished by combining corrected building counts, obtained through remote sensing, with estimates of the average number of residents per building type, collected in the field for a sample of urban zones. For each urban zone, an indication of the uncertainty of the estimate was provided, considering the observed variability in average household size per urban zone. Validation of the model at the neighborhood level, revealed low model bias and a relative error at the urban zone level of 17%. Comparison with a model not distinguishing between different types of buildings, showed that in the present case study, the added value of including building type in the modelling seems to be limited. This could be explained by the difference in the average number of residents per building type being relatively small in the study area, and by the fact that the number of buildings of different types did not show a strong variation between different urban zones. In these circumstances the advantages of discriminating between different building types do not weigh up against the additional uncertainty introduced into the modelling. The modelling framework proposed, may prove useful though in regions with more outspoken differences in the average number of residents, for different type of houses, and with stronger contrasts in urban morphology. While in this case study locations of individual dwellings where manually pin-pointed on the high-resolution imagery, cost and time required for data collection might be substantially reduced in areas where a building data layer would already be available. With the increasing availability of volunteered geographic information, it would be interesting to test the method proposed in combination with these new data sources. | 13,763.4 | 2018-09-05T00:00:00.000 | [
"Environmental Science",
"Mathematics"
] |
A Novel Biodegradable Poly ( Hydroxybutanedioic Acid-co2-hydroxypropane-1 , 2 , 3-tricarboxylic Acid ) Copolymer for Water Treatment Applications
Minimizing the formation of inorganic scale deposits in industrial water continues to be a challenge for water treatment systems. In order to meet this challenge, a novel biodegradable poly (DL-malic acid-co-citric acid) copolymer, effective in providing calcium carbonate scale inhibition was developed. Synthesis and characterization of the biodegradable, water-soluble and polyester copolymer was performed. Synthesis was done by direct bulk melt condensation in the absence of a catalyst above 150 ̊C. Characterization of the copolymer was carried out using infrared absorption spectra (FTIR), differential scanning calorimetry (DSC) and thermo gravimetric analysis (TGA) equipment. In the present work the precipitation of calcium carbonate from relative supersaturated solutions at different weight ratios of comonomer inhibition rates have been studied. The results indicate that the copolymer is an effective calcium carbonate descaling inhibitor that suppresses the growth process against calcium mineral scale deposits.
Introduction
Mineral scale formation in water is a global problem which persists to this day.Mineral scale deposit occurs primarily because of the presence of mineral and salts in the water.Wastewater is continuously discharged from industries such as mining, water treatment sectors, oil refineries, pulp and paper industries, glass and ceramic production industries, nuclear plants, and uranium refineries.This wastewater contains minerals and salts which may contaminate the groundwater.It is well documented that impurities present in feed water and re-circulating water markedly influence the rate of scale deposit [1][2].
Scale formation in cooling water systems shortens the life of equipment.The major causes of industrial water system failures is the deposition of scale on equipment such as heat exchangers and reverse osmosis (RO) on membrane surfaces and this results in increased operational costs [3].The mineral scale deposits commonly encountered include carbonates, sulfates and phosphates salts of alkaline earth metals.
During the last few decades control of mineral scales formation in industrial waters have been extensively researched.Several reviews that compare the performance of various chemicals and polymers commonly used to control the scale formation are available [4][5][6].Various anti-scaling additives have been proposed.Such additives include certain polyphosphates, polyacrylic acids, polymethacrylic acids, polyacrylamides, lignin sulphonic acid, hydrolysed polymaleic anhydride, hydrolysed copolymers as well as their salts.These polymers satisfactorily inhibit scale formation, however, other problems are encountered due to the continued presence of the above mentioned scale inhibitors once their primary function has been achieved.Generally they are difficult to degrade by the action of microorganisms.Toxicity and non-degradation cause major harm to the water systems and the environment if the above mentioned polymers are discharged into the environment.Hence, there is a need to develop a polymer that has both the good scale inhibition and biodegradation properties.Eco-friendly non-toxic biodegradable polymers are excellent candidates for water treatment applications and have been given a lot more attention since the 1970s [7].Quality and purity are major concerns in water treat-ment industries.The green polymers are good descaling inhibitors without affecting taste, quality, and purity of water [8,9].Increasingly stringent environmental restricttions and the need for water conservation, has led to the development of modern, all-organic, non-phosphorus biodegradable polymers for cooling water treatment programs [10].Over the years a number of investigations were conducted to study the influence of biodegradable polymer on cooling water treatment system [11,12] to minimize the scale deposition [13,14].
In this regard DL-malic acid (DMA) was chosen as a descaling agent because it can be handled safely during the synthesis [15,16].To further improve their efficacy of scale inhibition citric acid is used as a comonomer.This combinational approach enhances their scale inhibition efficacy.
Synthesis of Poly (DMA-CA) Copolymer
Copolymers were synthesized using the following procedure.Two different weight ratios of both DMA and CA were mixed together to form the copolymer as shown in Table 1.The reactants (DMA and CA) were placed in a ceramic bowl and the monomer mixture was first heated up to 100˚C for 2 h in an oven, then heated to 120˚C for about 10 minutes until the reaction mass partially melted.The temperature was thereafter increased to 130˚C and maintained for 9 h until the mass attained a yellowish colour.Finally, the temperature was increased to 200˚C for 7 h to get a complete conversion to a polymer.The synthesized copolymer was washed with distilled water several times (20 times) at ambient temperature until polymer was free of impurities.The yield of the product was 90% -95%.The same procedure was followed for the synthesis of poly (DMA) homopolymer.Photographic images of prepared homopolymer and copolymer are shown in Figure 1.
Experimental Test Method for Calcium
Carbonate Scale Inhibition
Scale Test
The polymer (10 ppm) was added to a flask in which calcium chloride, sodium bicarbonate stock solution and 25 ml water was present.Solutions of synthetic cooling wa- ter (CaCO 3 scale) were made by mixing 50 ml of anionbrine to 25 ml of cation brine solution and 25 ml of DM (demineralized) water.Thereafter 25 ml of demineralized water was added to 25 ml of CaCl 2 stock solution along with 10 ppm of the polymer solutions (scale inhibitor) in a 250 ml reagent bottle.Thereafter 50 ml of NaHCO 3 of stock solution was added to the solution.Another 250 ml reagent bottle was prepared with the same contents expect the polymer solution (scale inhibitor) and was used as the control solution.The pH of both solutions was adjusted to 8.0 by the addition of standard potassium hydroxide solution.Test conditions were set as the follows: without polymer (Blank, hot), with polymer (homopolymer (PM), copolymer (PMC1) and copolymer (PMC2) dose (10 ppm), respectively.The reagent bottles were incubated for 16 hours at 70˚C.After the incubation period, the reagent bottles were removed from the heat source and filtered through a 0.42 micro meter (μm) membrane filters.The filtrate was titrated for calcium using 0.01 molarity (M) EDTA and Eriochrome Black T (EBT) indicator, the percent calcium carbonate inhibition was calculated using the following Equation ( 1 The samples were run from 30˚C to 600˚C.
Results and Discussion
It was found that the addition of copolymer with higher monomer ratio of citric acid has an effect on the scale inhibition performance.The study demonstrated that the addition of small amounts of aliphatic tricarboxylic acid monomer to malic acid, transforms the polymerization from a solid state to a melt state reaction between 130˚C and 200˚C.In order to understand these copolymers were characterized by the following analysis.
FTIR Analysis
The FTIR spectra of DMA-CA monomers and poly (malic acid-citric acid) copolymer are shown in Figure 2.
The peaks in the range from 3100 cm −1 to 3400 cm −1 are assigned to the fundamental stretching vibration of different -OH groups.The double and single -CO stretch peaks of DMA are located at 1719 and 1272 cm −1 , respectively [18,19].The asymmetric and symmetric -CO stretch bands are situated at 1453 cm −1 and 1376 cm −1 [20].The spectra of CA shows a broad peak around 3505 cm −1 which indicates -OH stretching.The very distinct peaks at 2773 cm −1 and 2656 cm −1 indicate hydrogen bond formation between -OH groups and carboxylic acid groups.In the spectra two prominent peaks at 1714 cm −1 and 1428 cm −1 is observed from the asymmetrical and symmetrical stretching of -COO groups respectively.Poly (DMA-CA) spectra shows hydroxyl absorption bands at 3101 cm −1 , which correspond to stretching frequency of the OH groups present in poly (malic acidcitric acid).The FTIR spectrum of this high amount of citric acid sample (Figure 2) (data not shown) shows large absorbance at 1618 cm −1 corresponding to carbonyl stretching in the copolymer.Additional peaks were observed at 2681 cm −1 , 2530 cm −1 and 1189 cm −1 due to incorporation of CA and DMA (respectively) in copolymer structure.All the above peaks confirm the incorporation of DMA and CA units in the copolymer networks.
Differential Scanning Calorimetry (DSC)
Figure 3(a) shows the DSC thermograms of the homopolymer (PM) and copolymer (PMC1 and PMC2).DSC of homopolymer and copolymers gives Tg as 300˚C, 288˚C and 280˚C respectively.The glass transition temperature decreases with the increase of citric acid content in copolymer.This is due to the incorporation of citric acid units (aliphatic) which provide more flexibility and chain length to the biodegradable copolymers.Similar observation was reported by Singh et al. [21].In case of the physical mixture of DMA and CA [PMC2 (Figure 3(c))] shows separate endotherms corresponding to malic acid (137˚C) and citric acid (244˚C) peaks, indicating no copolymer formation when compared to PMC2 copolymer.
Thermogravimetric Analysis (TGA)
TGA analysis of the samples exhibited initial degradation at 220˚C, 218˚C and 214˚C (respectively) as shown in Figure 3(b).Final degradation is shown at 301˚C, 295˚C and 289˚C.These studies indicate that copolymers with citric acid have a reduced degradation temperature threshold, due to the lower thermal stability of citric acid with respect to malic acid homopolymer.With homopolymer (malic acid as monomer), copolymer with less monomer ratio of citric acid and copolymer with higher monomer ratio of citric acid have exhibited similar trends in both DSC and TGA studies.In addition greater weight loss occurs due to the presence of citric acid in copolymers.
Results of Scale Inhibition Test
DL-malic acid homo polymer and copolymers were prepared in order to study their scale inhibition efficiency.These copolymers were freely water soluble in the presence of aqueous alkali conditions.Both polymers were tested under identical conditions and it was found that the copolymers had higher scale inhibition efficiency.The number of carboxyl groups in hydrolyzed poly(malic acid-citric acid) molecular structure was greater than that of polyacrylic and polymethacrylic acid thus had better scale inhibition properties.Further, the presence of the tricarboxylic acid comonomers (citric acid) had a profound effect on the ability copolymer to inhibit scale growth.Tri acids of citric acid with longer polyester chains facilitated greater functional carboxylic groups and allowed the copolymers with larger DL-malic acid/tri acid ratios superior inhibition behavior.All polymers with concentration of 10 ppm were reasonably good inhibitors, however, PMC2 was more effective than PMC1 and PM.PMC2 with concentration of 10 ppm effectively blocked all the active growth sites and had the maximum inhibition of calcium carbonate growth rate over a 16 h period.Results in Table 1, indicate that scale inhibition rate increases from 25.5% to 36.3% with increasing citric acid content from 13.01 to 39.03 (mM) respectively.
Copolymer with higher concentration of citric acid has higher scale inhibition efficiency than homopolymer.Further, the scale inhibition efficiency of the homopolymer was 18% which was significantly lower than that of the copolymer.This suggests that the synergistic scale inhibition effect is due to the presence of citric acid in copolymer network.Also poly(malic acid-co-citric acid) was found to increase the degradation rate and the non-toxic in water.It has higher scale performance than hydrolyzed DMA homopolymer (PM).Inhibitor poly(malic acid) and inhibittor poly(malic acid-co-citric acid), each at a dose level of 10 ppm (on active basis) was evaluated at 65˚C -70˚C for their efficacy against CaCO 3 scales of normal tap water.The time taken for equilibration was 16 hr.This indicated that the scale inhibition ratios increase with the increase of the citric acid concentration.However, the scale inhibition performance becomes better with the addition of citric acid.Further it was observed that even at temperatures greater than 65˚C, the CaCO 3 scale inhibittion exceeded 35% with only 10 mg/L copolymer.
Table 1 shows the effect of higher monomer ratio of citric acid on the scale inhibition performance of the copolymer.For copolymer with higher monomer ratio of citric treatment, the scale inhibitor was effectively absorbed and precipitated to inorganic salt.The formation of CaCO 3 scale is influenced this precipitation.Thus, copolymer with higher monomer ratio of citric acid finds good application in industrial cooling water treatment plants.
Conclusions
Biodegradable polymers synthesized and tested in this study are effective as scale growth inhibitors.The data presented suggest that these polymers can be used effecttively to control growth of calcium carbonate mineral scale.The calcium carbonate scale performance data suggest that copolymers have different effects on the inhibitory power of polymers depending on the percentage of monomer rate of citric acid.A copolymer with higher monomer ratio of citric acid had higher scale inhibition properties.In this regard the following conclusions can be made: 1) The scale reduction by using 10 ppm DL-malic acid (PM) homopolymer was found to be 18% after 16 hrs incubation at 65˚C -70˚C.
2) The scale inhibition rate of calcium carbonate in synthetic cooling water containing the newly developed poly (DL-malic acid-co-citric acid) (PCM1 & PCM2) inhibitors reduced to 25.5% and 36.3%respectively.
i = Calcium ion concentration with the inhibitor.Ca b = Calcium ion concentration in the blank.Ca c = Calcium ion concentration before the test.
FTIR
plots in the range of 500 to 4000 cm −1 was done on a FTIR spectrometer MB3000 Model (Quebec, Canada) using the KBr disk method.The software used was Horizon software from ABB Company.Differential scanning calorimetry (DSC) of poly (DMA) and poly (DMA-CA) were studied by using a SDT Q 600 DSC instrument (T.A. Instruments-Water LLC, Newcastle, DE 19720, USA) at a heating ramp of 20˚C/min under a constant nitrogen flow (100 ml/min).The samples were heated from 30˚C to 600˚C.The thermogravimetric (TGA) analysis of poly (DMA) and poly (DMA-CA) was evaluated on a SDT Q 600 TGA instrument (T.A. Instruments-Water LLC, Newcastle, DE 19720, USA) at a heating rate of 10˚C/min under a constant nitrogen flow (100 mL/min). | 3,075.8 | 2013-04-29T00:00:00.000 | [
"Environmental Science",
"Materials Science",
"Chemistry"
] |
Analysis of dimerization of BTB-IVR domains of Keap1 and its interaction with Cul3, by molecular modeling
Oxidative damage has been associated with various neurodegenerative diseases including Parkinson's disease, amyotrophic lateral sclerosis (ALS), and Alzheimer's disease, as well as non-neurodegenerative conditions such as cancer and heart disease. The Keap1-Nrf2 system plays a central role in the protection of cells against oxidative and xenobiotic stress. The Nrf2 transcription function and its degradation by the proteasomal pathway (Keap1-Nrf2-Cul3-Roc1 complex) are regulated by the cytoplasmic repressor protein, Keap1 which possesses BTB, BACK (IVR region) and Kelch domains. The BTB-BACK domains are important for Keap1 homo-dimerization as well as to interact with Cullin-3 for Nrf2 degradation. The crystal structure of the Keap1-Kelch domain is known; however, that of the BTB-BACK domains are not yet determined. We present here, through molecular modeling studies, the analysis of Keap1-BTB dimerization, and of BTB-BACK domains role in complex with Cul3. The electrostatic charge distribution at the BTB dimer interface of Keap1 is significantly different from other known BTB containing protein structures. Another intriguing feature is also observed that the non-conserved residues at the BTB-BACK-Cul3 interface region may play critical role for differentiating Cul3 recognition by Keap1 from other adaptor proteins for their specific substrates proteasomal degradation.
Background:
Oxidative and xenobiotic stresses including reactive oxygen species (ROS), electrophilic chemicals and heavy metals damage biological macromolecules and disrupt normal cellular functions (reviewed in [1] and references therein).These stress factors are responsible for the development of many diseases such as cancer, cardiovascular disease, diabetes and neurodegeneration.Human bodies possess cytoprotective mechanism for survival by defending oxidative and xenobiotic stress factors.The Keap1-Nrf2 system is one of the most important cytoprotective system which has been developed over the course of evolution [2].Nrf2 (nuclear factor (erythroid-derived 2)-like 2) is a basic region-leucine zipper (bZIP) transcription factor that plays essential role to express many cytoprotective genes in response to oxidative and electrophilic stresses [3,4].The Nrf2 transcription factor belongs to the Cap "n' collar (CNC) family of transcription factors, and is composed of a conserved N-terminal regulatory domain, termed the Neh2 domain, two transactivation domain and a C-terminal b-ZIP domain (Figure 1A).Based on sequence homology, the sequence of Keap1 from human, rat and mouse are highly conserved between them.The (Kelchi-like ECH associated protein 1) sequence can be sub-divided into the N-terminal BTB domain, the intervening region (IVR), the double glycine repeat or Kelch repeat (DGR), and the C-term region (CTR) (Figure 1A).The DGR and CTR domains are collectively to name as DC-domain.The IVR region is also named to as the BACK domain.The BTB-BACK domains of Keap1 are not only important for Keap1 homodimerization, but also serves as an adaptor for the Cullin 3-based ubiquitin E3 ligase for Nrf2 [7, 13-15] (Figure 1B).Cullin-RING ligases (CRLs) are the largest family of multisubunit E3 ubiquitin ligases and adopt a modular assembly that facilitates the ubiquitylation of divergent substrates.The CRL3 subclass utilizes Cul3, which combines exclusively with BTBcontaining proteins as substrate-specific adaptors [16].Keap1 is a classic example which demonstrates the importance of Keap1 dimerization for its substrate ubiquitylation as it requires two βpropeller domains to interact with two distinct epitopes in Nrf2 simultaneously [9,10,12,17].The Cul3 binds to the BTB-BACK domains of Keap1, and also the ring-box (Rbx1/Roc1) to form a ternary complex of a core E3 ubiquitin ligase complex, which helps Nrf2 to undergo proteasomal degradation.
Though the tertiary structure of Keap1-DC is known, the tertiary structures of BTB domain and IVR region (BACK domain) of Keap1 are not yet determined.We present here the predicted structure of BTB-BACK domains of mKeap1 by molecular modeling studies.We have also predicted the structure of BTB-BACK domains of mKeap1 in complex with Cul3.Based on the modeling results, we discuss the homodimer formation of Keap1 and its interaction with Cul3.
Methodology:
The NCBI-BLAST (Basic Local Alignment Search Tool) online program [18] was used to compare the target protein sequence against the protein database and to calculate sequence similarity between them.The mouse Keap1 sequence comprising the BTB domain (aa: 52-179) and the BTB-IVR domains (aa: 52-319) was used to obtain the protein sequences of similar structures, using the RCSB Protein Bank Database
Intact BTB domain
The NCBI-BLAST has revealed 12 protein structures, which possess an intact BTB domain, show significant sequence and structural similarity with the proposed BTB region of mKeap1 (Figure 2).Multiple-sequence alignment of these proteins with mKeap1 showed certain highly conserved amino acids between them such as His59, Asn68, Cys77, Asp78, His96, Ser142, Phe139, Tyr141 and Thr142 (Figure 2A).A conserved triplet motif, VLA (Val99, Leu100 and Ala101), which lie in α2, was also observed in the sequence analysis.The proteins used for sequence alignment also form functional homodimers through their respective BTB domains.Thus, in order to predict the dimer interface residues of Keap1-BTB, the crystal structure of the BTB dimer of human Lrf (PDB Id: 2NN2) [26] was used as a reference structure for discussion, since it has highest sequence identity of 31% with mKeap1 compared to other proteins.As seen from the hLrf structure, the dimer interactions are found between α1 helix of chain A and α2 and α3 of chain B, and vice-versa.Also, an antiparallel β-sheet conformation occurs between β1 strand of chain A and β5 strand of chain B, and vice-versa (Figure 2B).The dimer interface residues of the BTB domain of hLrf were obtained by using the PDBsum database [20].The corresponding probable dimer interface residues of mKeap1-BTB were identified from sequence alignment.These dimer interface residues were found to be located in the predicted α1, α2, α3 helices, β1 and β5 strands Table 1 (see supplementary material), (Figure 2B).
Among the predicted dimer interface residues of mKeap1, it was seen that 40% was conserved, 10% was semi-conserved and 50% was non-conserved.The dimer interacting residues of hLrf in the crystal structure were mutated to corresponding mKeap1 residues using PyMol [24].When we checked residue by residue in the dimer interface region, more hydrophilic patches were seen on the surface of mKeap1 when compared to hLrf (not shown).Moreover, as shown in the Figures 2C & 2D, the hydrophilic environment in hLrf contributed by Gln27 and Ser50 (Lrf numbering) is replaced with hydrophobic environment in mKeap1 contributed by the non-conserved residues (Leu70 and Val98).From this analysis, we speculate that besides the interactions due the conserved residues, variations in charge distribution in the dimer interface of mKeap1 may play essential role to form a stable Keap1 dimer as well as to make a complex with Cul3 for Nrf2 ubiquitination.
The crystal structures of the BTB-BACK domains of hKLHL11 both apo-form and in complex with Cul3 were recently determined (PDB Ids: 3I3N and 4AP2) [28].The hKLHL11 structure reveals the entire domain of BACK besides the BTB domain; whereas the KLHL16 structure lacks two helices at the N-terminal side of the BACK domain, which is functionally important for the Cul3 complex formation.Hence, we have taken the hKLHL11-Cul3 complex structure for our comparison studies.
The homodimer of the BTB-BACK domains of hKLHL11 is an elongated shape with overall dimensions of 150 x 35 x 25 Å (Figure 3B).In the complex structure, it forms heterotetrameric assemblies with each subunit in hKLHL11 homodimer binding one molecule of Cul3.The BACK domain mainly consists of βhelical secondary structures.The two N-terminal helices (α7 and α8 of hKLHL11) form the 3-box motif and subsequently create an antiparallel four helix bundle configuration by interacting with α5 and α6 of the BTB domain (Figure 3B).The helical bundle plays critical role in making complex with Cul3.The remaining helices, α9-α14 at the C-terminus creates a distinct sub-domain and packing perpendicular to the 3-box.This arrangement produces a cleft of 16 Å deep and 18 Å wide between the BTB and BACK domains which is responsible for Cul3 interaction.
In the complex structure, the complex is formed by α2' and α5' of the first Cullin repeat with the BTB and 3-box domains of hKLHL11 (Figure 3C).The contact surface area of the Cul3NTD interface is 1508 Å 2 .A shallow cleft in the BTB surface forms via an induced fit mechanism facilitated by conformational changes in the α3-β4 loop.This loop is disordered in the apo-form, but change into an α-helix when interacts with Cul3; Ser131 of hKLHL11 hydrogen bonds with Glu132 of Cul3 α2' helix.Moreover, Phe130 is shifted about 5 Å to insert into a deep hydrophobic pocket produced between Cul3 helices α2' and α4' (Figure 3D).The backbone carbonyl of His213 (α7) in hKLH11 makes a hydrogen bond with Lys68' of Cul3.Phe246 (α10) of hKLHL11 also contributes a hydrogen bond with Thr24 of Cul3.
The sequence comparison between mKeap1 and hKLHL11 corresponding to the BTB-BACK domains showed 16.6% sequence identity, and 66.8% non-conserved residues between them.The interacting residues in the hKLHL11-Cul3 complex were analyzed using the PDBsum database corresponding residues in mKeap1 were identified from the sequence alignment Table 2 (see supplementary material).
The side chain of Asp181 of KLHL11 interacts with the sidechains of Tyr125' and Tyr62' of Cul3 (Figure 3E).Glu55' of Cul3 makes a salt-bridge and a hydrogen bond with Arg18 and Tyr121 of hKLHL11, respectively (Figures 3D &3E).
Intriguingly, these electrostatic interactions contributed by Tyr121, Asp181, and Arg182 in hKLHL11 may be absent in the Keap1-Cul3 complex as these residues are replaced with hydrophobic residues Val106, Val160 and Met161, respectively, in Keap1 (Figures 3D & 3E).Another interesting feature is also observed in the complex that Leu184 of hKLHL11 hydrophobically interacts with Met124' of Cul3.But, in mKeap1, the corresponding residue is replaced with a hydrophilic residue, Glu163 (Figure 3A).Hence, we speculate that in the Keap1-Cul3 complex, electrostatic surface charge distribution may be different at the Keap1-Cul3 interface when compare to the hKLHL11-Cul3 complex.This unique difference may play critical role for differentiating Cul3 recognition by Keap1 and KLHL11 adaptors for their substrates proteasomal degradation.
Keap1 is a cysteine rich protein.Mouse Keap1 contains 25 cysteine residues and human Keap1 contains 27 cysteine residues.Out of these cysteines, a few have been implicated to play important role as sensors viz.Cys151, Cys273 and Cys288.Cys151 is present in the BTB domain of Keap1.It is evident from the analysis that Cys151 is exposed to the solvent region and hence, it has free access to the cellular environment.Also, it is located at the N-terminal side of α5 which is a key secondary element for Cul3 interaction (not shown).Any disturbance in this region caused by adduction of Cys151 might bring about disruption of Cul3 interactions, thereby, preventing ubiquitination of Nrf2.
Conclusions:
The Keap1 protein, being the master regulator of Nrf2, has become an important therapeutic target for regulating the Nrf2 transcription function.Besides the Keap1-DC domain which is essential for Nrf2 binding, the BTB domain of Keap1 is an important domain to form functional homo-dimerization, essential for Nrf2 ubiquitination via Cul3 ligase complex.The BTB-BACK domains of Keap1 are essentially interacts with Cul3.The important dimer interacting residues, Cul3 interacting residues, electrostatic charge distribution at the dimer interface as well as at the Cul3 binding site have been mapped in the comparative analysis.The information gained in the present study will be helpful for further biochemical analysis of the Keap1-Cul3 complex as well as to aid in designing inhibitor molecules to inhibit the Nrf2 degradation pathway.
Figure 1 :
Figure 1: Schematic diagram of the Keap1-Nrf2 pathway.(A) Functional domains of Keap1 (top) and Nrf2 (bottom).(B) Cul3 based E3 ubiquitin ligase brings about ubiquitination of the substrate molecule Nrf2 via an adaptor protein, Keap1.Nrf2 is then presented to 26S proteasome for degradation.Under homeostatic/unstressed conditions, the cellular concentration of Nrf2 remains low, and it is repressed/modulated by Keap1 thus, Nrf2 is constantly ubiquitinated through Keap1 in the cytoplasm and subsequently undergoes proteasomal degradation [5-7].Under stress condition, such as exposure to electrophiles or ROS, Keap1 loses repression activity and hence Nrf2 dissociates from Keap1 and translocate into the nucleus, and subsequently coordinately activates cytoprotective genes and exerts a protective function against xenobiotic and oxidative stress [8].The Keap1 protein forms a homodimer through the N-terminal BTB domain [7].Under normal conditions, the Keap1-Nrf2 complex forms in 2:1 ratio as revealed by biochemical and structural studies [9].In the Keap1 homodimer, the C-terminal β-propeller domain (Keap1-DC) of each monomer is free from any intermolecular interactions, and is separated from each other.The Keap1-DC of each monomer associates with one molecule of Nrf2 [10].The ETGE and DLG motifs in Neh2 of Nrf2 are key motifs for direct interactions with the Keap1-DC domain [11, 12], and thus, Nrf2 bridges two Keap1-DC of the Keap1 dimer, and appears to be favorable for the efficient ubiquitylationof Nrf2 [9].
Figure 2 :
Figure 2: Comparison analysis of the intact BTB-dimer.(A) Sequence alignment of mKeap1 (aa: 52-179; KEAP1_MOUSE) with sequences of selected known crystal structures.The dimer interface residues are marked by blue stars.The secondary structural features from hLrf (PDB Id: 2NN2) are shown above
Figure 3 :
Figure 3: Comparison analysis of the BTB-BACK domains and its complex with Cul3.(A) Sequence alignment of mKeap1 (aa: 50-319; KEAP1_MOUSE) with hKLHL11 (aa: 67-340; Q13618).The potential interface residues responsible for making complex with Cul3 are shown in blue stars.The secondary structural features from hKLHL11 (PDB Id: 4AP2) are shown above the alignments.The colors reflect the similarity (red boxes and white characters for conserved residues; red characters for similarity in a group; blue frames for similarity across groups).The sequence was aligned and rendered by Clustal W [21] and ESPript [23], respectively.(B) A cartoon ribbon diagram of the BTB-BACK domains (PDB Id: 4AP2).Only one chain of the homodimer is shown for clarity.The loop connecting β1 and β2 is absent in the crystal structure.(C) A cartoon ribbon diagram of the BTB-BACK domains in complex with Cul3 (salmon) (PDB Id: 4AP2), and labeled only the interacting secondary structure elements.(D) & (E) Showing representative diagrams at the Cul3 interface region.The corresponding residues in mKeap1are labeled in magenta.
[19].The intermolecular contacts were analyzed by using the program PDBsum [20].The selected protein sequences, based on the sequence similarity and the size of predicted sequence, were then subjected to multiple-sequence alignment calculation using Clustal W [21].The multiple-sequence alignment results were manually edited wherever necessary to obtain reasonable predicted comparable sequences/structures between the target Keap1 protein and the query proteins.The online STRAP program [22] was initially used to visually inspect the multiplesequence alignment results.The ESPript program [23] was used to produce figures of multiple-sequence alignment results.The PyMol program [24] was used to analyze protein structures as well as to produce figures.
Table 1 :
Dimer interacting residues of the hLrf-BTB domain, and the corresponding probable dimer interacting residues of the mKeap1-BTB domain S. No.
Table 2 :
The potential Cul3 interacting residues of hKLHL11, and the corresponding probable Cul3 interacting residues of mouse | 3,372.6 | 2013-05-25T00:00:00.000 | [
"Biology"
] |
Attenuation of ligand-induced activation of angiotensin II type 1 receptor signaling by the type 2 receptor via protein kinase C
Angiotensin II (AII) type 2 receptor (AT2R) negatively regulates type 1 receptor (AT1R) signaling. However, the precise molecular mechanism of AT2R-mediated AT1R inhibition remains poorly understood. Here, we characterized the local and functional interaction of AT2R with AT1R. AT2R colocalized and formed a complex with AT1R at the plasma membrane, even in the absence of AII. Upon AII stimulation, the spatial arrangement of the complex was modulated, as confirmed by Förster resonance energy transfer (FRET) analysis, followed by AT2R internalization along with AT1R. AT2R internalization was specifically observed only in the presence of AT1R; AT2R alone could not be internalized. The AT1R-specific inhibitor losartan completely inhibited both the conformational change and the internalization of AT2R with AT1R, whereas the AT2R-specific inhibitor PD123319 partially hindered these phenomena, demonstrating that the activation of both receptors was indispensable for these effects. In addition, treatment with the protein kinase C (PKC) inhibitors inhibited the ligand-dependent accumulation of AT2R but not that of AT1R in the endosomes. A mutation in the putative phosphorylation sites of AT2R also abrogated the co-internalization of ATR2 with AT1R and the inhibitory effect of ATR2 on AT1R. These data suggest that AT2R inhibits ligand-induced AT1R signaling through the PKC-dependent pathway.
Here, we utilized fluorescent protein-tagged AT1R and AT2R to identify a more physiologically relevant relationship between AT1R and AT2R and found that AT2R interacts with AT1R both in vitro and in vivo. The receptor complex was internalized by endocytosis in a manner dependent on the activities of AT1R, AT2R, and PKC. We also revealed that such internalization was associated with alterations in the relative orientations of AT1R and AT2R, in which their C termini were brought in close proximity to each other, by Förster resonance energy transfer (FRET) analysis. These findings shed light on the previously unknown molecular mechanisms of AT2R-mediated inhibition of ligand-induced AT1R activation.
Results
AT2R selectively inhibits AT1R-dependent ERK phosphorylation. As AT2R signaling has been reported to counteract AT1R-dependent signaling 1,2,4,5 , we first examined the effect of AT2R on ERK phosphorylation induced by AT1R or other receptors. Whereas AII treatment activate ERK at negligible levels in naïve HeLa cells, it remarkably induce ERK phosphorylation in the presence of AT1R (Fig. 1a). Essentially similar results can be observed in human embryonic kidney 293T cells (Suppl. Fig. S1a). When we analyzed the localization of ERK2 tagged with the near-infrared fluorescent protein eqFP650 (ref. 10), we specifically observed AII-induced nuclear translocation of ERK in the presence of AT1R in HeLa cells (Fig. 1b). Because the time courses of ERK activation revealed by these methods were consistent with each other (Fig. 1c), we utilized eqFP650-ERK in the following experiments to monitor ERK activity in HeLa cells that expressed the intended proteins. HEK293T cells were also used to confirm the imaging data by biochemical analyses. Although AT2R expression failed to induce AIIdependent ERK phosphorylation, it attenuated AII-dependent, AT1R-mediated ERK activation ( Fig. 1d; Suppl. Fig. S1a). This impairment of ERK phosphorylation was ascribed to AT2R signaling because treatment with PD123319, an AT2R inhibitor, restored AII-induced ERK activation (Fig. 1d).
We next investigated the selectivity of the AT2R-dependent attenuation of ERK activation. Treatment with the EGFR inhibitor AG1478 produced no inhibitory effects on AII-AT1R-dependent ERK activation ( Fig. 1e; Suppl. Fig. S1b), showing that EGFR transactivation is dispensable for AT1R signaling in this cell line. Under this condition, we found that AT2R expression did not perturb ERK activation by factors such as EGF ( Fig. 1f; Suppl. Thus, these data indicate that AT2R selectively blocks AII-AT1R-dependent ERK activation and that the inhibition might occur upstream of the ERK cascade. In fact, the AII-AT1R-dependent membrane recruitment of c-Raf1, which is a hallmark of Raf activation, was inhibited by AT2R expression (Suppl. Fig. S2).
AT2R interacts with AT1R. According to the results shown in Fig. 1, we hypothesized that AT2R attenuated AT1R signaling by forming complexes. In fact, the direct binding of AT2R to AT1R was previously demonstrated to occur by the immunoprecipitation method, and this interaction inhibited the G protein-coupled AT1R signaling pathway 9,11 . However, it remained unknown whether the interaction was enhanced by AII stimulation. Thus, we examined the direct interaction of AT2R with AT1R in our model cell line by using a co-immunoprecipitation assay. AT2R bound to AT1R, even in the absence of AII (Fig. 2 Fig. S3, lane 4). However, treatment with losartan, an AT1R inhibitor, or PD123319 did not reverse this association enhancement. Therefore, the receptor interaction profile per se did not correlate with that of ERK activation shown in Fig. 1 and Suppl. Fig. S1. These results do not necessarily negate the possibility that AT2R perturbs AT1R signaling at the receptor level but rather suggest the requirement for approaches other than biochemical analyses to gain further insight into the signaling crosstalk mechanism.
AII stimulation induces AT2R internalization in an AT1R-dependent manner. Because AT1R has been well documented to accumulate in the endosome upon AII stimulation 12,13 , we hypothesized that AT2R might participate in the regulation of AT1R signaling in a spatiotemporally distinct fashion. Therefore, to visualize the subcellular localization and trafficking of AT1R and AT2R, we prepared expression vectors for the receptors tagged with either cyan or yellow fluorescent proteins (CFP or YFP) and observed their localization. In the absence of AII, both AT1R and AT2R resided mainly at the plasma membrane (Fig. 3a). Upon AII stimulation, AT1R was immediately internalized, as described previously 12,13 , whereas AT2R was retained at the plasma membrane ( Fig. 3a; Suppl. Mov. 1,2). We next examined the subcellular localization and changes in co-expressed AT1R and AT2R. Even in the absence of AII stimulation, the localization pattern of AT2R was comparable to that of AT1R (Fig. 3b), indicating that AT1R and AT2R colocalized; this finding was consistent with the co-immunoprecipitation assay results (see Fig. 2). However, upon AII stimulation, AT2R and AT1R were internalized ( Fig. 3b; Suppl. Mov. [3][4][5], in contrast to what was observed in the cells expressing AT2R alone (Fig. 3a), and colocalized with the granular structures in which AT1R was localized.
FRET analysis is a powerful tool for studying molecular interactions in a living cell. Thus, we further analyzed FRET data for CFP-tagged AT2R and YFP-tagged AT1R. Whereas only a weak FRET signal was detected in quiescent cells (Fig. 3c), the signal was increased by AII stimulation (Fig. 3d), most apparently in the granular structures ( Fig. 3c; Suppl. Mov. 6). The FRET signal remained high at least until 80 min after AII stimulation (Suppl. Fig. S4). These data collectively indicate that AT2R directly binds to AT1R, even in the absence of AII, but the molecular orientation does not bring CFP and YFP sufficiently close together for the FRET signal to be observed 14 . AII stimulation further enhanced the association between AT1R and AT2R and induced the conformational change that brought CFP and YFP close together, making the functional interaction between AT1R and AT2R possible. HeLa cells transfected with the expression vectors indicated at the top were serum starved, pre-treated with the AT1R-specific inhibitor losartan or the AT2R-specific inhibitor PD123319, and stimulated by AII. The cells were lysed in lysis buffer and immunoprecipitated with an anti-FLAG antibody, followed by immunoblotting using an anti-HA or anti-FLAG antibody. An aliquot of total cell lysate was also analyzed by immunoblotting. Ligand binding to both AT1R and AT2R is required to induce conformational changes in their complex. To further clarify the relationship between AT1R and AT2R, we next examined the effects of receptor antagonists on the AII-induced association of the receptors. Treatment with losartan, an AT1R inhibitor, completely abolished the FRET signal between the two receptors as well as the internalization of both receptors (Fig. 4a,b), indicating that the AII-dependent internalization and conformational changes were entirely dependent on the AT1R signaling pathway. However, treatment with PD123319, an AT2R antagonist, did not hinder AT1R internalization; instead, it altered AT2R trafficking and significantly inhibited FRET between the two receptors ( Fig. 4a,b), suggesting that the AT2R signaling cascade is also involved, albeit in part, in the conformational change of the receptor heterodimer. These data collectively indicate that signals from both receptors are indispensable for the conformational change in the receptor heterodimer; however, receptor dimerization itself can be induced by ligand binding to one receptor, as revealed by co-immunoprecipitation assay results (Fig. 2). Therefore, taken together with results shown in Fig. 1, the conformational change appears to correlate with the functional association and the co-internalization of the receptors.
AT2R phosphorylation by PKC is required for the functional association between AT1R and AT2R. Because PKC is a well-known downstream effector of AT1R 2,15 , we next investigated the effects of the PKC inhibitors staurosporine and Gö6983 on receptor dynamics. Whereas inhibitor treatment did not affect AT1R internalization, AT2R did not traffic with AT1R to the endosomes (Fig. 5a). In addition, whereas the PKC inhibitors displayed no effect on the FRET signal between the two receptors until 20 min after AII stimulation, it was significantly decreased from 20 min onward in the presence of the PKC inhibitors (Fig. 5b). These data indicate that PKC activation is indispensable for the functional association (conformational change in the receptor dimer) between AT1R and AT2R and the subsequent co-internalization of the dimer. Finally, we evaluated the effect of PKC on the AT2R-mediated inhibition of AT1R signaling. However, the utilization of PKC inhibitors might not be suitable because PKC is reported to mediate AT1R-dependent ERK activation 16 . Indeed, treatment with staurosporine or Gö6983 inhibited the phosphorylation and nuclear translocation of ERK caused by AII stimulation in cells expressing AT1R (Fig. 5c,d). Staurosporine treatment also inhibited ERK phosphorylation induced by AII-stimulated AT1R in 293T cells (Supple. Fig. S5a). Alternatively, we found three possible phosphorylation sites in the C-terminal region of AT2R through sequence analysis. Phosphorylation of AT2R was indeed triggered by AII-AT1R signaling, as revealed by immunoblot analysis of AT2R immunoprecipitates using anti-phosphoserine/phosphothreonine antibodies (Fig. 6a, lane 3). Given that this phosphorylation could be observed even in the presence of PD123319, AT2R phosphorylation might be totally dependent on AII binding on AT1R (Suppl. Fig. S5b). This result encouraged us to generate an AT2R mutant in which all three serine residues were substituted with alanine (AT2R-3A). AT1R signaling-induced phosphorylation of the AT2R mutant was inhibited compared with that of wild-type AT2R (Fig. 6a, lane 6), and the inhibitory effect on AT1R-mediated ERK activation was substantially diminished (Fig. 6b; Suppl. Fig. S5c). This inability of the AT2R mutant to suppress ERK activation was not affected by treatment with the AT2R inhibitor PD123319 (Suppl. Fig. S5d). Although the AT2R-3A mutant was not internalized with AT1R, it did colocalize with AT1R in subcellular compartments other than endosomes, including the plasma membrane and the perinuclear region (Fig. 6c). Furthermore, few FRET signals were observed between AT2R-3A and AT1R following exposure to AII (Fig. 6d). Taken together, these results indicate that the AII-dependent, AT1R-PKC-mediated phosphorylation of AT2R might be critical for hindering AT1R signaling by AT2R.
Discussion
AT1R activates many downstream effectors, including ERK, which is reported to be a major regulator of AII-induced cardiovascular diseases 17 . ERK activation by AT1R is mediated by at least three different signaling pathways in a cell-context-specific manner: PKC-dependent activation, β -arrestin-dependent activation, and matrix metalloprotease and subsequent HB-EGF shedding-dependent activation 18 . Among these pathways, the first two have been extensively investigated in AT1R-expressing cells and shown to produce distinct outcomes in different subcellular compartments. For example, the PKC-dependent pathway mainly functions at the plasma membrane and modulates gene expression, whereas the β -arrestin-dependent cascade is involved in endocytosis and is dispensable for gene regulation 19 . Our findings suggest that AT2R might regulate these pathways differently because the conformational changes detected by the FRET signal occur mainly in the internalized receptors (Fig. 3c). As opposed to the receptor heterodimerization by itself, which could be induced by ligand binding to one of the receptors, the conformational change required for ligand binding to both receptors and PKC activity. These signal modulations might affect the AT1R signaling output, including AT1R-induced vasoconstrictive actions. In general, it is widely accepted that AII counteracts AT1R-mediated vasoconstriction in vivo. Treatment with the AT2R inhibitor PD123319 enhances the AII responsiveness in the rat thoracic aorta after pressure overload 20 . In spontaneously hypertensive rats (SHRs), a lack of AT2R function results in enhanced coronary constriction mediated by AII 21 . Moreover, upon AT1R inhibition, AII evokes a hypotensive response, which is eliminated by AT2R antagonist treatment 22 . Genetic approaches also support these pharmacological characterizations of AT2R. Mice deficient in At2r (AT2R-KO), which develop normal blood pressure in the resting state, display enhanced vasoconstriction after AII injection 23 . Consistently, the AT1-mediated vasoconstrictive effect induced by the chronic infusion of AII is not observed in transgenic mice overexpressing AT2R 24 . However, AT2R may not counteract the function of AT1R in the heart. AII stimulates myocardial hypertrophy and fibrosis in AT2R-KO mice to levels comparable to those in wild-type mice 25 . Therefore, AT2R counteracts the function of AT1R in a cell-context-specific manner, although the detailed mechanisms underlying the difference have yet to be elucidated.
One possible mechanism for such cell type-specific AT1R signaling inhibition by AT2R might be ascribed to the relative expression of the receptors. In most adult tissues, AT2R levels, which are much lower than AT1R levels under physiological conditions, are increased upon exposure to environmental cues that activate AII-AT1R signaling 6,26 . Because the function of AII can be altered by the balance of receptor subtype abundances 6,27 , the extent of AT2R induction might be important for the modulation of AII signaling by AT2R. In particular, the amount of heterodimer, which was shown to be important for signaling modulation in this study, has been shown to be much smaller than the amount of AT1R and AT2R homodimers 28 . We can therefore postulate that AT2R induction at levels adequate to form an abundance of receptor heterodimers might be critical for the inhibition of AT1R signaling by AT2R.
As we clearly demonstrated, AT2R accumulated with AT1R in the endosomes after AII stimulation. In a previous report, the uptake of AT2R was shown to occur only in the presence of promyelocytic leukemia zinc finger protein (PLZF) 29 . Because the effects of AT2R are physiologically opposite from those of AT1R signaling and this synchronous association occurs immediately after AII stimulation, the functional interaction between AT1R and AT2R might be necessary for an acute response to extracellular environmental changes. However, it has also been proposed that AT2R can counteract AT1R signaling in a manner independent of receptor heterodimerization 28 . This raises the possibility that the mode of action of AT2R for AT1R inhibition is also cell-context specific. Moreover, AT2R was also shown to form a stable, functional heterodimer with bradykinin B2 receptor. In this case, AT2R contributes to the non-cell-autonomous inhibition of AT1R signaling through the enhanced production of nitric oxide 30 . Therefore, AII-AT1R signaling might be stringently regulated by the diverse molecular actions of AT2R.
PKC is a crucial downstream effector of the AT1R signaling pathway. Paradoxically, PKC has been reported to be indispensable for AT1R endocytosis 31 . Our results show that AT2R internalization is dependent on AT1R and PKC, indicating that both receptors are internalized by distinct molecular mechanisms (Fig. 7). Given that the previous study demonstrated that Ser354 of AT2R is phosphorylated by PKC 32 , the phosphorylation of this amino acid residue by PKC might be necessary for AII-dependent AT2R internalization. Studies on the detailed molecular mechanism of PKC-dependent AT2R internalization are now in progress.
Reagents and antibodies. AII Supernatants were subjected to SDS-PAGE, and the separated proteins were transferred to polyvinylidene difluoride membranes (Bio-Rad, Hercules, CA, USA). The membrane was incubated with a primary antibody followed by a peroxidase-labeled secondary antibody. Signals were developed by ECL Western Blotting Detection Reagent (GE Healthcare, Little Chalfont, UK) and detected using an LAS-1000 UV mini image analyzer (FUJIFILM, Tokyo, Japan). The intensities of the bands were quantitated using the associated software. Note that samples were incubated with SDS sample buffer at 50 °C but were not boiled to detect AT1R and AT2R while preventing their aggregation. HeLa cells expressing HA-AT1R-YFP and Flag-AT2R-CFP were lysed in NP-40 lysis buffer [10 mM Tris-HCl, pH 7.4, 150 mM NaCl, 5 mM EDTA, 0.5% NP-40, 10% glycerol, 1 mM Na 3 VO 4 , and complete EDTA-free protease inhibitor (Roche)] and precipitated with an anti-FLAG antibody with protein A-Sepharose beads. Proteins bound to the beads were separated by SDS-PAGE and detected by immunoblotting.
Fluorescence microscopy. Intermolecular FRET analysis was performed as previously described 34,35 .
Briefly, HeLa cells were cultured on glass-bottom, 35-mm tissue culture dishes (Asahi Techno Glass Co., Tokyo, Japan) and transfected with expression vectors for fluorescent protein-tagged fusion proteins. The fluorescence imaging workstation for the multicolor time-lapse imaging consisted of an Olympus IX71 inverted microscope equipped with a cooled charge-coupled device (Cool-SNAPHQ, Roper Scientific, Trenton, NJ), excitation and emission filter wheels (MAC 5000, Ludl Electronic Products, Hawthorne, NY), and a Xenon 75-W light source, all controlled by MetaMorph software (Universal Imaging, Downingtown, PA). Images were recorded every 30 sec for up to 1 h. Starting after 10 min, the cells were stimulated with AII. At each time point, fluorescence images were sequentially acquired through the YFP, CFP, and FRET filter channels. The filter sets used were YFP (excitation, 500-25 nm; emission, 535-26 nm), CFP (excitation, 440-21 nm; emission, 480-30), and FRET (excitation, 440-21 nm; emission, 535-26 nm). An XF2034 (455 DRLP) dichroic mirror (Omega Optical, Brattleboro, VT) was used. Images were acquired using the 4 × 4 binning mode and 100 to 200 ms integration times. The background was subtracted from the raw images before FRET calculations. Corrected FRET (FRET C ) was calculated on a pixel-by-pixel basis for the entire image by using the following equation: FRET C = FRET-(0.5 × CFP)-(0.02 × YFP), where FRET, YFP, and CFP correspond to the background-subtracted images of cells co-expressing CFP and YFP acquired through the FRET, YFP, and CFP channels, respectively. The bleed-through fractions of CFP and YFP fluorescence were 0.5 and 0.02, respectively, through the FRET channel. FRET C images are presented in pseudocolor mode. A mask image for the entire cell was created based on the YFP fluorescence intensity, and FRET C values in the region were quantitated using MetaMorph software.
Fluorescence images of eqFP650-ERK and Hoechst-stained nuclei were acquired as described above, except different filter sets were used (for Hoechst, excitation: 400-15 nm, emission: 480-30 nm; for eqFP650, excitation: 535-30 nm, emission: 692-40 nm). A mask image for the nuclear region was created based on the Hoechst fluorescence intensity, and the fluorescence intensities of eqFP650-ERK in the nucleus and the entire cell were quantitated to measure ERK activity. | 4,398.8 | 2016-02-09T00:00:00.000 | [
"Biology",
"Chemistry"
] |
Enhanced stability of freestanding lipid bilayer and its stability criteria
We present a new strategy to dramatically enhance the stability of freestanding lipid bilayers. We found that an addition of a water in oil emulsion stabilizer, SPAN 80 to a solvent phase guarantees nearly millimeter-scale stable freestanding lipid bilayers. The water permeability, bilayer area, contact angle, and interfacial tension were measured as a function of time and SPAN 80-to-lipid weight ratio (ΦSPAN 80) with several different solvents. Surprisingly, the SPAN 80, instead of remaining in the bilayer, was moved out of the bilayer during the bilayer formation. Also we studied the effect of solvent on freestanding bilayer formation, and found that squalene was the only solvent that was not incorporated into the bilayer. The regime of stable bilayer formation was experimentally determined to be 3/1 < ΦSPAN 80 < 15/1, and we suggest general stability criteria for bilayer formation. This technique and the suggested stability criteria can be potentially helpful to many model membrane-based researches in life sciences, physical sciences and biomedical engineering fields.
Results and Discussion
Interestingly, in all our experiments, the freestanding bilayers remained stable at least for several days without changes in the bilayer area or contact angle for an appropriate range of Φ SPAN80 whereas a droplet immediately coalesced for too low Φ SPAN80 . This implies that the SPAN 80 dramatically enhances the stability of freestanding lipid bilayer 18,[20][21][22][23] . Such enhancement in stability of the freestanding bilayer can be explained by the role of SPAN 80 during the impact of the two monolayers which include the following. First, it modifies the spontaneous curvature; its hydrophobic tail is bulky relative to the hydrophilic head, which induces negative spontaneous curvature. For the bilayer with low stability, a transient pore, which is hydrophilic pore through the bilayer with highly positive curvature, is formed before merging of droplet into sub-phase water. The SPAN 80 with negative curvature plays a critical role for preventing the formation of transient pore to stabilize bilayer 35 . Second, it modifies the interfacial tension. As Φ SPAN80 increases, the interfacial tension decreases, thus it reduces the energetic benefit of droplet coalescence 35 . We also checked other surfactants such as oxidized squalene and docosahexaenoic acid (DHA) that have similar molecular shapes significantly enhance the stability of lipid bilayers. In the previous study, this level of stability was achieved only when the limited kinds of lipid (e. g. 1,2-diphytanoyl-sn-glycero-3phosphocholine, DPhPC) with exceptionally bulky tail or solvent (e. g. hexadecane) remaining in the bilayer after zipping process are used, and this result indicates that the SPAN 80 dramatically enhances stability of the freestanding lipid bilayer 18,[20][21][22][23] .
It was expected that our bilayers would be composed of a mixture of lipid and SPAN 80. However, we surprisingly found that SPAN 80 is likely to be moved out of the bilayer during/after the bilayer formation. To systematically verify the removal of SPAN 80 from the bilayer, we measured the bilayer area, contact angle, bilayer tension, and adhesion energy of two monolayers of dimyristoylphosphatidylcholine (DMPC) and dioleoylphosphatidylcholine (DOPC). We performed all of our experiments at 25 °C where both lipids exhibit a liquid disordered phase 36 . Figure 2(a,b) is the plot of the bilayer area and the contact angle of the freestanding bilayer as a function of time. At t = 0, the bilayer area is of the same diameter d ≈ 220 μ m for both DOPC and DMPC. For DMPC, a drastic change in the bilayer area and the contact angle (θ) occurs at t < 200 sec, followed by the constant values d = 523 μ m and θ = 56°, whereas for DOPC, the bilayer area and contact angle remain unchanged. Figure 2(c) shows the interfacial tension of the bilayer γ B of DOPC and DMPC at Φ SPAN80 = 5/1. The bilayer interfacial tension The increase in the adhesion energy over time suggests that the bilayer composition changes after bilayer formation. When only SPAN 80 was used without any lipid, no adhesion was observed, implying that zero adhesion exists between SPAN 80 molecules. Therefore, to maximize the adhesion (to lower the energy), lipid molecules should go into the bilayer, excluding SPAN 80 out of the bilayer. At the same time, this demixing process of lipid and SPAN 80 results in an entropic penalty, more specifically, the entropy of mixing. In other words, the competition between adhesion energy and the entropy of mixing determines the distribution of SPAN 80. For the DOPC and DMPC bilayer at Φ SPAN80 = 5/1, the estimated entropic penalty of SPAN 80 is at most in the same order of magnitude as the energetic gain obtained by introducing more lipids in the bilayer region (see Supplementary Figure S3). Therefore, the decrease in the bilayer interfacial tension for DMPC at an early stage (t < 200 sec) in Fig. 2(c) supports that SPAN 80 is removed from the bilayer, to increase adhesion between the two monolayers, as seen in Fig. 2(d). During this period, SPAN 80 is removed from the lipid bilayer and the interfacial tension of the DMPC bilayer drops into the plausible range, when compared with the bilayer rupture tension γ br (DMPC) ≈ 2.7 mN/m (at least, the bilayer tension should be smaller than the rupture tension). γ B of DOPC (4.3-5.7 mN/m) is also less than γ br (DOPC) ≈ 10.2 mN/m 38 .
Another evidence that SPAN 80 is likely to be removed from the bilayer is shown in Fig. 3, the water permeability measurement. The 100 mM NaCl dissolved in the bottom water of the plane interface generates osmotic gradients across the bilayer, resulting in water transport through the lipid bilayer membrane (Fig. 3(a)). We measured the volume change in the water droplet as a function of time 19,20 . In Fig. 3(b), the water permeability of both the DMPC and DOPC bilayer at Φ SPAN80 = 5/1 decreases from 1521.3 μ m/sec (DMPC) and 169.7 μ m/sec (DOPC) to reach constant values of 83.0 ± 6.0 μ m/sec (DMPC) and 103.6 ± 4.2 μ m/sec (DOPC) after the bilayer formation. This equilibrium permeability is in good agreement with the previous measurements: 83 ± 7.6 μ m/sec for DMPC, 56 ± 9 and 158 ± 5.8 μ m/sec for DOPC 39,40 . The initial decrease in water permeability is consistent with the adhesion measurement and thus is most likely due to the process of removing SPAN 80 from the bilayer. Moreover, this initial decrease in water permeability is similarly shown for different stabilizer, squalene oxide, and values of the equilibrium permeability are almost identical (102.6 ± 6.0 μ m/sec for squalene oxide) no matter what kind of stabilizer is used. This suggests that the freestanding bilayer at equilibrium might be composed of DOPC (or DMPC) lipid only.
The permeability result (Fig. 3(b)) also implies that our freestanding bilayer is squalene-free since this is consistent with the measurement for lipid vesicle that has no solvent in it. It is also widely known that squalene does not invade into bilayers or in between two monolayer leaflets [6][7][8][9]21,22 . Decane and hexadecane exhibit a lower permeability in comparison with squalene. A previous study reported that decane and hexadecane remain in the lipid bilayer after the formation of DIB 21,22 . When the bilayer contain a solvent such as decane or hexadecane, water molecules will cross the solvent layer in addition to the lipid bilayer, which results in drops in the water Figure S5). We note that the interfacial tensions of both decane and hexadecane are around the rupture tension of the DOPC bilayer (≈ 10.2 mN/m) in the absence of solvents.
Combining all the results above, we set up stability criteria for SPAN 80 stabilized bilayer formation (Fig. 4). At very low Φ SPAN80 , SPAN 80 does not reduce the interfacial tension enough to form a stable bilayer, and thus coalescence immediately occurs as soon as a droplet is in contact with the planar surface. For sufficiently high Φ SPAN80 , the interfacial tension is low enough for stable bilayers, exhibiting the successful zipping process with the intermediate contact angle between 90° and 180°. For very high Φ SPAN80 (in a case of 2γ M < γ B ), however, the contact angle reaches 180°, and the adhesion does not occur. Even if the contact angle does not reach 180°, too high Φ SPAN80 reduces adhesion between the two monolayers, and in this case, the entropy of mixing is too big to increase the adhesion, leaving some SPAN 80 in the bilayer. Moreover, the regulation of interfacial tension directly affects the three phase contact angle of the lipid bilayer: two lipid monolayers and a bilayer. The importance of the interfacial tension regulation is easily seen in the DMPC bilayer formation. For DMPC, the interfacial tension of the lipid bilayer is nearly zero. The DMPC monolayer interfacial tension is also low enough, so it appears to form a stable bilayer at first. However, the contact angle of the lipid bilayer changes over time, and eventually becomes very low (< 60°), and the abrupt change at the kink seems to make the bilayer unstable. The stability of the DMPC bilayer becomes worse if the contact angle is very low. Therefore, to enhance the stability of the bilayer and to simultaneously obtain solvent-free and SPAN 80-free bilayers, there is an appropriate and optimum range of Φ SPAN80 . Since different lipid species show different lipid bilayer interfacial tension, to form stable freestanding lipid bilayer this proper range of Φ SPAN80 will change. We also should note that previous DIBs use higher concentrations of lipids that might have an appropriate interfacial tension [18][19][20][21][22][23] .
Conclusion
We demonstrated a new strategy to dramatically enhance stability of DIB with a large area, planar and solvent-free as well by using W/O emulsion stabilizer, SPAN 80. Surprisingly, SPAN 80 is most likely to be moved out of the bilayer, maximizing the adhesion of the lipid monolayers, and overcoming the entropy of mixing penalty. This removal of SPAN 80 was demonstrated by time-dependent adhesion and permeability experiments. We also showed that the freestanding bilayer fabricated by our technique is squalene-free as well. We finally suggested stability criteria for the SPAN 80 stabilized freestanding bilayer formation, involving the regulation of interfacial tension by controlling SPAN 80 concentration. This stabilization strategy can be universally applied to various freestanding bilayer formation techniques such as the conventional DIBs and the traditional black lipid membranes.
Methods
Dimyristoylphosphatidylcholine (DMPC), dioleoylphosphatidylcholine (DOPC) and SPAN 80 are purchased. Squalene oxide is prepared by direct light exposure on squalene for four days with air contact. We use deionized water for all of our experiments. The imaging experiments were performed by using homebuilt side-view microscope. The sample of phospholipid (DMPC or DOPC) in chloroform is contained in glass vial and dried in vacuum. SPAN 80 dissolved in squalene is added into the dried phospholipid, and then sonicated for 30 minutes. We prepare a trough filled with water, and the phospholipid solution is placed on top of water to form a planar squalene/ water interface. The glass capillary of 0.78/1.0 mm in inner/outer diameter respectively is tapered to 10 μ m of diameter by a micropipette puller. The capillary is filled with water and then mounted to the micro-injector. The capillary tip is placed above the squalene/water interface. By applying a pressure of ~100 hPa, the droplet of ~300 μ m diameter is introduced right above the planar interface. Both planar and droplet squalene/water interfaces are incubated for over 10 minutes for the adsorption of phospholipid and SPAN 80 monolayers, which are termed as planar monolayer and droplet monolayer, respectively. The droplet is moved toward the planar interface until the droplet gently touches the planar interface. After a few minutes of waiting, two monolayers undergo "zipping" process, in result, form the lipid bilayer between two water phases. The size of freestanding lipid bilayer can be controlled by adjusting the droplet size. Further details of monolayer interfacial tension measurement, water permeability measurement, and adhesion energy measurement are summarized in the Supplementary Information. | 2,823.8 | 2016-12-16T00:00:00.000 | [
"Physics"
] |
Stackelberg Game Model of Railway Freight Pricing Based on Option Theory
In recent years, although rail transport has contributed significantly to the productivity of the Chinese economy, it has also been faced with the fierce competition and challenge from other modes of transportation, and therefore, freight-pricing issue has received more attention by researchers. In this paper, the rail freight option (RFO) based on option theory is proposed to study the optimal pricing decision of the railway transportation enterprise and contract customers’ optimal purchase decisions. To obtain an effective RFO contract, the railway freight contract transaction process is first analyzed. .en, the theoretical framework for the RFO contract trading is put forward in the railway freight market. Next, a two-stage Stackelberg game theoretic approach is presented based on the principle of utility maximization to achieve the optimal decision of RFO contract. Subsequently, the reverse reasoning method in dynamic programming is used to solve the optimal combination decision of the contract customer. Finally, the optimal pricing decision of RFO is discussed using Kuhn–Tucker conditions and Lagrangian function. .e result shows that the railway transportation enterprise should pay more attention to the option strike price w1 in terms of maximizing system utility and achieving Pareto optimal.
Introduction
Railway is one of the most efficient and environmentalfriendly ways for transportation industry in China, and it has contributed significantly to the productivity of the Chinese economy [1]. However, in the process of the continuous growth of other modes of transportation and the increasing market competitiveness, the market share of the railway freight industry has declined year by year [2]. e marketization management mode of railway freight transport is the reform trend of China's railway transport industry. Hence, the transformation to market-oriented operation must be accelerated for railway freight transport operations to adapt to changes in the transportation market, which will also benefit and help develop railway transport enterprises.
Nowadays, China's railway freight has the coexistence of contract market and spot market. In the contract market, railway transport enterprises sell part of their transport capacities by signing a contractual agreement (usually six months in advance) with the contract customer. e signing of these contracts has certain advantages in terms of stable supply and business relative to the spot market with poor transaction stability. More precisely, the contract provides stable bulk supply for railway transportation and requires transportation enterprises offering transportation guarantee for contract customers at the same time. However, during the contract period, both parties need to implement a fixed contract price and cannot profit from trading in market price fluctuations, which is unfavorable for maximizing the utility of both parties. Meanwhile, railway transport enterprises will not sell all their capacities in the contract market, and the remaining capacities can only be sold through the spot market. In the spot market, the relationship between supply and demand directly determines the freight rate, and some transactions failed due to large price fluctuations. All of these will restrict the railway transportation enterprises to formulate a market competitive freight pricing system. erefore, the problems that how to price the freight rate of railway freight transportation enterprises and improve the competitiveness of railway transportation have to be settled urgently. Pricing freight rate is a complicated problem involving a range of issues; for example, firstly, there is a noncooperative game between railway transportation enterprise and contract customer. Secondly, as shown in Figure 1 [3], competitive pricing should be used that helps railway transportation enterprises gain more market share and business. irdly, the essence of railway transportation service requires flexibility and risk avoidance in pricing mechanism.
In real life, consider a company that uses two channels for trading: contract and spot, and several flexibility contracts were concerned by many scholars, such as return contract [4,5], quantity flexibility (QF) contract [6][7][8][9], multiperiodic supply chain contracts [10][11][12], and risk-sharing contract [13,14]. ese flexibility contracts are widely applied in trade agreement for its flexibility and versatility. However, asymmetric information existing in trade activities is nothing new, and the above contracts cannot help the stakeholders make rational decisions. en, investigating the role of options (contingent claims) in a buyer-supplier system has attracted great attention from researchers [15][16][17][18]. Dawn et al. illustrated how options provide flexibility to a buyer to respond to market changes in the second period [19]. Bester and Krähmer analyzed bilateral contracting in an environment with contractual incompleteness and asymmetric information using a simple deterministic exit option contract [20]. Luca et al. studied how exit options can affect bidding behavior and the buyer's and the seller's expected payoffs in multidimensional procurement auctions [21]. Furthermore, an option tool will improve the economic efficiency of the partners in the discrete environment [22], hedge market risk [23], and promote fair trade [24].
rough numerical examples, Barnes-Schuster et al. verified the role of option contracts in improving the utility of supply chain [25]. Additional applied researches have investigated the application of options in supply chain management. Cheng et al. established a single-cycle secondary supply chain option contract model to determine the optimal pricing and ordering strategies [26]. Xing et al. derived the Seller's optimal bidding and Buyers' optimal contracting strategies in a von Stackelberg game with the Seller as the leader [27]. Wang and Tsao assumed that the option contract executed in the second stage is not equal to the option purchase amount in the first stage, and the optimal strategy is obtained from the perspective of the option buyer [28]. Cai et al. investigated the relationship between the option contract and the subsidy contract and found supply chain coordination and Pareto improvement can be achieved by introducing the option contract [29]. Liu et al. investigate the coordination of both the supplier-led and the retailer-led supply chains under option contract [30]. All of those have largely promised the potential for option contracts' applications to develop freight derivatives.
Like options on stocks, options on freight provide stakeholders with protection against adverse freight rate movements. Increased globalization and increased demand for transportation has resulted in freight itself becoming a volatile commodity. In the freight transaction area, Rolf et al. established a capacity option pricing model and applied option contract into air cargo industry [31]. Koekebakker et al. set up the theoretical framework for the valuation of the Asian-style options traded in the shipping market [32]. Soltani et al. considered a commodity processor and developed models to determine the ocean freight firm's optimal hedging policy [33]. Kyriakou et al. developed an accurate valuation setup for freight options, featuring an exponential mean-reverting model for the freight rate with distinct reversion scales for its jump and diffusion components [34]. Although freight options has been the primary tool used to manage fluctuations in the shipping due to volatile ship prices, the contracts were never traded [35]. For sea transport enterprises, the purpose of buying options is to hedge risks rather than exercise options [36]. By contrast, railway transportation enterprises adopt option trading to improve the competitiveness of railway transportation. As railway transportation market is relatively new, not much scientific research has been done in this area. Guo and Peng applied the option theory into railway freight pricing activities and established a multiphase trigeminal tree pricing model [37]. Meanwhile, a multiphase trigeminal tree pricing model with jump diffusion process was established to depict the fluctuations in RFO pricing strategy caused by the nonmarketing uncertainty which arrive at random discrete time points [38]. ese have a certain reference value for studying the pricing of RFO. However, these studies are independent of the optimal decentralized decision of the supply chain system, and its pricing strategy is not conducive to the long-term stability of the supply chain.
In this paper, a two-stage Stackelberg game model of railway transport enterprise and contract customer is developed based on the perspective of coordinating supply chain. e main contributions of this paper are summarized as follows: (1) Considering the coexistence of contract market and spot market, a new tradable RFO is designed and the transaction process is elaborated (2) A two-stage Stackelberg game theoretic approach based on the principle of supply chain utility maximization is proposed to achieve the optimal decision of RFO contract (3) A new reverse reasoning algorithm is proposed to deal with the optimal decision of contract customer (4) Combined Lagrangian function with Kuhn-Tucker conditions, a new method is put forward to obtain the optimal pricing decision of RFO e remainder of this article is organized as follows: Section 2 details the methodology; Section 3 presents the method for obtaining the contract customers' optimal decision, and the optimal pricing decision of RFO is derived in Section 4; and Section 5 will conclude our work.
2
Discrete Dynamics in Nature and Society
Definition of RFO.
Freight has to be contracted, just like commodities. e only difference is that most commodities are real products, while freight is a service instead of a physical product [39]. So when freight is "bought," the service of products being transported is contracted. Due to nonstorability of freight, it should be traded in time. To protect railway transport enterprises and contract customers against market risks, a new option contract related to freight is provided according to the concept of options in the financial market, called rail freight option (RFO). Unlike other options, RFO helps railway transport enterprises selling nonstorable freight in advance and also insures stakeholders against freight rates moving beyond a specified price level.
Definition 1.
A RFO is a call option contract which states that the contract customer (holder) has the right to pay/ receive the average of the values of the freight rates during some period on or before the expiration date and receive/pay strike price. e railway transport enterprise (writer) then has the obligation to receive/pay this average and pay/receive the strike price when the holder decides to exercise. In fact, a RFO is an option contract for an asset that is subject to the railway freight transport service. e railway transport enterprise is the option that writer has the right, but not the obligation, to sell the option at the option price. And the contract customer is the holder who is a purchaser of option. If the execution price of the expiration date is higher than the spot market price, the contract customer will pay the execution fee and execute RFO; otherwise, the contract customer will abandon the execution right of RFO and choose to purchase the capacity in the spot market. With an option, the contract customer has no risk of losing any money more than strike price due to freight price volatility because he always has the possibility not to exercise RFO. For the railway transportation enterprise, if contract customer gives up the execution of RFO, they will sell the capacities in the spot market without refunding the option fees. is is a way the railway transportation enterprise can spread risk and schedule the freight train plans rationally. Under such circumstances, although the risk avoidance is realized effectively, the feasibility of decision-making process is another issue that should be considered.
Transaction Process Description.
In practice, both railway transportation enterprise and contract customer are mostly partial to risk aversion just to have different degree. erefore, the degree of risk acceptability determines the amount of options purchased. e vast majority of existing studies deal with how to use option for hedging. e optimal decision is a decentralized decision that is independent of the utility of the supply chain. Excessive pursuit of maximization of its own utility based on decentralized decision is not conducive to the long-term stability of the entire supply chain. e expectation of maximizing returns is not the optimal decision point in terms of transportation system. Compared with the existing studies focus on the expected returns of stakeholders, this paper studies the decisionmaking in the RFO trading to help maximize system utility, maintain the long-term development of the entire supply chain, and achieve Pareto optimal.
Consider a railway transportation enterprise who is looking to protect the company against a possible decrease in the freight rates. To this extent, the railway transportation enterprise writes and sells the RFO by speculating contract customer buying behavior so as to formulate reasonable option prices and circulation. en, contract customer determines the purchase amount of RFO according to the pricing announced by the railway transportation enterprise. Discrete Dynamics in Nature and Society And in the second stage, the contract customer will decide whether or not to exercise his RFO. If only the total amount that the contract customer has to add up to the option strike price is less than the spot freight rate at that point, the contract customer will exercise his RFO, and of course, the contract customer will abandon his RFO in reverse. e transaction process actually conforms to the two-stage dynamic game model and is shown in Figure 2.
Symbol and Assumption.
To better understand the model, the list of all the notations used in our work is presented in Table 1. Some notations will be more precisely defined as they appear in later sections of this paper.
In order to build the mathematical model, this study makes the following assumptions: e RFO covered in this article is European call option; that is to say, RFO can only be executed on the expiration date.
Assumption 2. Both railway transport enterprises and contract customers are completely rational and risk averse. In order to maximize system utility and achieve Pareto optimal, the optimal decision is based on the principle of maximizing supply chain utility as much as possible.
Assumption 3.
e freight rate of the spot market in this model is an exogenous variable, which is completely dominated by the external market economic conditions, and is not affected by the railway transportation enterprises and contract customers.
Stackelberg Model Construction.
In the case of contract market coexisting with spot market, two-stage Stackelberg game model is established, in which the railway transportation enterprise is the leader and the contract customer is the follower. e game sequence is shown as the following: Step 1: in the contract market of time T 0 , the railway transport enterprise writes the RFO including the option price w 0 and the option strike price w 1 .
Step 2: according to the published price strategy of RFO (N), the contract customer decides the purchase amount of RFO to maximize their expected returns.
Step 3: in the spot market of time T 1 , the contract customer will decide whether or not to exercise his RFO based on the spot market freight rate p i and the option strike price w 1 . Furthermore, determine the execution amount of RFO (q 1 ) and the capacity purchases amount through spot market (q 2 ). us, the utility function of contract customer π 1 (q 1 , q 2 , p i , N) can be expressed as follows: where U(q 1 + q 2 ) stands for the market return expected by the contract customer and U(q 1 + q 2 ) � e α − e α− β(q 1 +q 2 ) /β is assumed based on risk aversion theory, in which α and β are undetermined parameters. w 0 N is the payment for the contract customer to purchase RFO, w 1 q 1 stands for the option strike price paid by the contract customer exercising RFO, and p i q 2 represents the cost of purchasing the capacity from spot market.
Step 4: similarly, the utility function of railway transportation enterprise π 2 (w 0 , w 1 , b 0 , b 1 , N) can be easily gained as follows: where b 1 q 1 stands for the long-term preparation cost of transportation, [(K − N) ∧ D 2 ][p i − b 2 ] + stands for the proceeds from the sale of the remaining capacity in the spot market, and KC stands for the fixed production cost [40].
Step 5: then, the utility function of system π is obtained as follows: It is worth noting how the game reaches the Nash equilibrium. Both misjudgment of the market and the behavior of the opponent will lead to the deviation of the decision. Once the deviation occurs, one party will inevitably obtain excess profits. e other party will inevitably adjust the decision or withdraw from the market to choose alternatives. It is extremely unfavorable to the supply chain relationship. In order to maximize system utility, the longterm development of the supply chain is maintained and Pareto optima is achieved, and optimal decision should be investigated from the perspective of maximizing the utility of the supply chain.
Optimal Decision of Contract Customer
According to the order of decision, this paper uses the reverse reasoning method in dynamic programming to solve the optimal combination decision of the contract customer firstly. e problem of the contract customer optimal decision is decomposed into the following two stages: Step 1: the constraint optimization method is used to solve the optimal combination (q * 1 , q * 2 ) of the RFO execution amount and the spot market purchase amount 4 Discrete Dynamics in Nature and Society Step 2: the optimal RFO order quantity N * of contract customer is solved according to the optimal combination (q * 1 , q * 2 ) obtained in the first stage
Calculation of the Optimal Combination.
e goal of the contract customer is to maximize its own expected utility, which can be expressed as the following optimization problem with constraints: According to formula (4), the RFO execution amount and the spot market purchase amount of the contract customer are obtained as described in eorem 1.
Theorem 1.
e optimal choice of RFO execution can be expressed as follows: e optimal choice of the capacity purchases amount through spot market can be expressed as follows: Proof. e Kuhn-Tucker conditions of formula (4) are constructed as follows: Next, the first-order partial derivatives of q 1 and q 2 for function τ are obtained on the basis of Lagrange multiplier c as follows: en, the optimal combination (q * 1 , q * 2 ) is discussed as follows: (a) If c 1 > 0, c 2 � 0, c 3 � 0, c 4 � 0, w 1 ≤ p i is gained, and the optimal combination (q * 1 , q * 2 ) can be discussed as the follows: (1) Supposing that the marginal payment ability zU/zN of the contract customer is greater than the spot market freight rate p i , it is profitable for option customer to purchase capacity from the spot market. Hence, the optimal combination (q * 1 , q * 2 ) can be calculated as follows: (b) If c 1 � 0, c 2 > 0, c 3 � 0, and c 4 � 0, w 1 ≤ p i is gained, and the optimal combination (q * 1 , q * 2 ) can be calculated as follows: (c) If c 1 � 0, c 2 � 0, c 3 > 0, and c 4 � 0, w 1 ≥ p i is gained, and the optimal combination (q * 1 , q * 2 ) can be calculated as follows: (d) If c 1 � 0, c 2 � 0, c 3 � 0, and c 4 > 0, w 1 ≥ p i is gained, and the optimal combination (q * 1 , q * 2 ) can be calculated as follows: (2) Conversely, if zU/zN − p i ≤ 0, the contract customer blindly purchasing the capacity in the spot market will reduce its profit. e optimal choice is to stop the repurchase of capacity. Hence, the optimal combination (q * 1 , q * 2 ) can be calculated as follows: erefore, eorem 1 is proved. In addition, this article finds an interesting conclusion that the contract customer's RFO execution is greatly affected by the option strike price w 1 . If only the option strike price w 1 is smaller than spot market freight rate p i and the market demand is much larger than its RFO purchase amount N, the contract customer will choose to execute all RFO purchased. On the contrary, it will choose to abandon RFO and instead purchase the capacity in the spot market to obtain higher profits.
Optimal RFO Order Quantity of Contract Customer.
According to the optimal combination (q * 1 , q * 2 ) obtained, the original utility function independent variable is changed based on the principle of utility maximization, and formula (4) can be updated as follows: en, solving mathematical program with constraints in formula (14), the optimal RFO order quantity N * of the contract customer is obtained as described in eorem 2.
Theorem 2.
Based on the option price written by railway transport enterprise, the contract customer decides the optimal option purchase amount N * satisfies the following formula: Proof. Substituting formula (3) into formula (15), the objective function of formula (15) can be updated as follows: Nh p i dp i 6 Discrete Dynamics in Nature and Society 1 , the expected utility function of the supply chain can be expressed equivalently as Moreover, if formulas (5) and (6) are substituted into formula (18), the expected utility function of the entire supply chain can be calculated as follows: Next, taking the first-order partial derivative with respect to N for formula (19), the optimality conditions of first order can be calculated as follows: Discrete Dynamics in Nature and Society In addition, take the second derivative with respect to N for formula (20) to get z 2 π/zN 2 � U ″ (N).
In accordance with Assumption 2, the article has z 2 π/zN 2 � U ″ (N) ≤ 0 as the result of the contract customer is a risk averse. at is to say, π(w 0 , w 1 , N) is a concave function with respect to N. Let zπ/zN � 0, then the optimal option purchase amount N * satisfies the following formula: erefore, eorem 2 is proved.
e optimal option purchase amount N * of the contract customer is monotonously decreasing with respect to the option strike price w 1 .
(3) w * 0 > 0, w * 1 > 0 In this case, the railway transportation enterprise provides both RFO and the spot market capacity, and the contract signed with the contract customer can be regarded as RFO. (4) w * 0 > 0, w * 1 > 0 e solution is a theoretical local optimal solution without any practical economic significance.
According to the above analysis, when the railway transportation enterprise writes RFO, the option strike price w 1 should be no more than the long-term preparation cost b 1 . A reasonable option price w 0 can make up for the opportunity cost of the RFO order quantity that cannot be sold in the spot market and also avoid the risk that the marginal cost is higher than the option strike price. Meanwhile, the contract customer's RFO purchase amount N is negatively correlated with the strike price w 1 and the option price w 0 . In addition, the strike price w 1 is more sensitive to the impact on the expected supply chain utility than the option price w 0 . is is mainly because if only the total amount that the contract customer has to add up to the option strike price w 1 is less than the spot freight rate p i at that point, the contract customer will exercise his RFO, and of course, the contract customer will abandon his RFO in reverse. erefore, the railway transportation enterprise should pay more attention to the strike price w 1 in terms of maximizing system utility and achieving Pareto optimal.
Conclusion
For railway freight pricing, this paper introduces the concept of option theory and proposes a new tradable rail freight option (RFO). Different from the previous works, this paper establishes a two-stage Stackelberg game model based on the perspective of maximizing the utility of the supply chain to discuss the optimal pricing strategy of RFO. Considering the coexistence of the contract market and the spot market, this study uses the reverse reasoning method in dynamic programming to solve the optimal combination decision of the contract customer firstly. en, the optimal pricing decision of RFO is obtained using Kuhn-Tucker conditions and Lagrangian function. e main conclusions of this article are as follows: (1) the option strike price w 1 should no more than the long-term preparation cost b 1 ; (2) the contract customer's RFO purchase amount N is negatively correlated with the strike price w 1 and the option price w 0 ; and (3) the railway transportation enterprise should pay more attention to the option strike price w 1 in terms of maximizing system utility and achieving Pareto optimal.
Data Availability
No data were used to support this study.
Conflicts of Interest
e authors declare that there are no conflicts of interest. | 5,831 | 2020-06-22T00:00:00.000 | [
"Business",
"Economics"
] |
Preparation and characterization of zein–lecithin–total flavonoids from Smilax glabra complex nanoparticles and the study of their antioxidant activity on HepG2 cells
Highlights • New Z-L-TFSG complex nanoparticles were prepared by the anti-solvent coprecipitation method.• The particles exhibited nano-scale size and excellent storage stability.• The nanoparticles had sustained release in a simulated gastrointestinal tract.• Antioxidant performance of Z-L-TFSG NPs in vitro was improved after encapsulation.
Introduction
Smilax glabra Roxb. (SG), called Tufuling in Chinese, is a rhizome of the Liliaceae plant and belongs to the Smilacaceae family. It was used as an edible plant and herbal medicine for hundreds of years in China and other Asian countries (Hua et al., 2018). SG was also used as a food ingredient that was added to teas, functional foods, and sanitarian soups. As a functional food with great nutritional and medicinal value, SG is one of the main components of turtle jelly (Guilinggao) prepared in the southern regions of China. SG was widely used in traditional Chinese medicine in the treatment of nephritis, syphilis, heavy metal poisoning, hypertonia, and other diseases. In fact, SG was shipped to Europe for syphilis treatment in the 16th century (Zhao et al., 2020a). In Hong Kong, Macau, and the Guangdong and Guangxi provinces of China, turtle jelly is widely used as a medicinal diet to nourish the body by reducing body heat and removing toxins in the blood. Some recent studies have suggested that the chemical constituents of SG exhibit antioxidative (Shi, et al., 2020), antibacterial, anti-inflammatory, antigout, antiviral, and hypouricemic properties, and also provide cardiovascular protection and hepatoprotection (Feng et al., 2020;Huang et al., 2019).
Flavonoids are one of the main ingredients of SG (Zhao et al., 2020a). More than 20 different flavonoids and flavonoid glycosides have been separated and identified from SG in previous studies, such as astilbin, isoastilbin, neoastilbin, neoisoastilbin, engeletin, isoengelitin, taxifolin, and quercetin (Shu et al., 2018;Xu et al., 2013). Astilbin is usually considered the main bioactive compound in SG, and has three stereoisomers (neoisoastilbin, isoastilbin, and neoastilbin) that exist simultaneously in SG (Zheng, Zhang, & Zhang, 2018). Free radical-mediated oxidative stress could be involved in the pathogenesis of many diseases, such as inflammation, cancer, and neurodegenerative disorders (McCord, 2000). It was reported that the flavonoids of SG, especially astilbin and its stereoisomers, exhibit significant antioxidant activity (Shu et al., 2018). However, astilbin and its stereoisomers are highly unstable, and, therefore, prone to mutual transformation due to the similarity in their structures, which greatly limits their applications in medicine and food (Zhang, Fu, Huang, Shangguan, & Guo, 2013). Hence, improving the stability of the flavonoids of SG is critical for their practical application. Some studies have shown that the molecular structure of flavonoids can be changed by chemical reaction to improve their storage stability; however, this method was considered unsuitable since the safety of the modified products could not be guaranteed (Hussain, Hadi, & Akbar, 2019). To overcome this shortcoming, active substances are usually embedded in various nanomaterials (such as nanoparticles, microcapsules, nanolotions, and hydrogels) in which nanoparticles (NPs) are widely used (Hussain et al., 2019). Moreover, since Chinese medicine emphasizes multicomponent and multitarget synergy, these flavonoids are often used as a whole to exert a curative effect. Therefore, we need to find a new delivery system for the integrated encapsulation of multiple flavonoids in SG.
Zein is the main storage protein in corn and contains >50 % of hydrophobic amino acids, which make it insoluble in water but soluble in aqueous ethanol. It was observed that zein loses its solubility and can self-assemble into NPs with a decrease in the concentration of ethanol (Luo & Wang, 2014). Lecithin is believed to be a biocompatible and safe excipient and has been used in the pharmaceutical, food, and cosmetic industries. Lecithin is an amphiphilic molecule composed of a hydrophilic head (phosphatidyl substituent) and a hydrophobic tail (fatty acid chain) that can interact with zein and form stable composite colloidal NPs in aqueous ethanol (Dai, Sun, Wang, & Gao, 2016). A previous study reported that epigallocatechin gallate loaded with zein and lecithin NPs showed excellent stability and great sustained-release performance (Xie et al., 2021). Astilbin was encapsulateed as a model flavonoid by preparing the core-shell zein NPs with lecithin; the encapsulation improved the stability of astilbin and also increased its bioavailability both in vitro and in vivo (Ruan et al., 2021). However, currently, there are no published results of the studies conducted on the self-assembled NPs of the total flavonoids from S. glabra (TFSG).
In the present study ( Fig. 1), TFSG was extracted and purified, and the zein-lecithin-TFSG complex nanoparticles (Z-L-TFSG NPs) were prepared using the anti-solvent coprecipitation (ASCP) technique. The zeta potential, particle size, morphology, and structure of the prepared NPs were obtained. The characteristics and antioxidant activity of the composite NPs were evaluated. Furthermore, the antioxidant activity of Z-L-TFSG NPs on the HepG2 cell model was evaluated. We believe that our findings will provide some basis for developing a new kind of safe and effective delivery system for the integrated encapsulation of multiple flavonoids.
Preparation of total flavonoids of Smilax glabra
The S. glabra samples were collected from Guangxi province and verified by Prof. Zhifeng Zhang (College of Pharmacy, Southwest Minzu University, Chengdu, China). The dried sample (5 kg, cut into thin slices) was extracted with 70 % ethanol-water (v/v) three times (100 L, 60 • C for 2 h). The extracts were filtered and collected, and the combined filtrate was evaporated in a rotary evaporator at 50 • C to obtain the ethanolic extract of S. glabra (EESG). The extract was dissolved in distilled water and the pH was adjusted to 5.0 using phosphoric acid. EESG was purified using the D101 macroporous resin. The combined filtrate was concentrated to extract under reduced pressure and freezedried to obtain total flavonoids of Smilax glabra (TFSG) powder.
Preparation of Zein-lecithin-TFSG complex nanoparticles
Zein (1.0 g) was added to 500 mL of 75 % methanol-water (v/v) with continuous stirring using a magnetic stirrer for 2 h (700 rpm) (Zhao et al., 2020b). Then different amounts of lecithin were added to the prepared zein solution and the resulting solutions were stirred using a magnetic stirrer for 3 h to reach zein:lecithin mass ratios of 1:0 (S1), 2:1 (S2), 3:2 (S3), 1:1 (S4), 2:3 (S5), 1:2 (S6), and 0:1 (S7), respectively. Following this, 1.0 mL TFSG-water solution (100 mg/mL) was added to the prepared zein-lecithin (Z-L) solutions, stirred continuously for 1 h, and then concentrated to 20 mL to obtain the Z-L-TFSG NPs. The obtained 20 mL NPs were slowly injected into 60 mL deionized water with constant stirring for 0.5 h (700 rpm). The remaining ethanol was evaporated using a rotary evaporator at 50 • C to form composite nanoparticle dispersions, and the pH value of the dispersions was adjusted to 4.0 by adding NaOH (0.1 mol/mL) or HCl (0.1 mol/mL). The dispersions were subjected to centrifugation at 2000 rpm for 20 min to separate free TFSG and large particles (Xie et al., 2021). The composite nanoparticle dispersions were kept at 4 • C, and some of the dispersions were freeze-dried to obtain the corresponding powders.
Determination of zeta potential, particle size, and polydispersity index One milliliter of Z-L-TFSG NP dispersions from each group (S1-S7) was added to 20 mL of distilled water to obtain the corresponding dilute solutions. The zeta potential, particle size, and PDI of the diluted solutions from each group were determined using a combined method comprising dynamic light scattering (DLS) and particle electrophoresis (Zetasizer Nano-ZS, Malvern, UK). The zeta potential and particle size of the 1.5 mL samples were repeatedly measured three times. All measurements were carried out at 25 • C.
Storage stability
The Z-L-TFSG NP dispersions from each group were stored in a refrigerator at 4 • C for 14 days. The particle size and PDI of the dispersions in each group were observed on the 14th day of the experiment.
Encapsulation efficiency and loading capacity
One milliliter of nanoparticle dispersions from each group was diluted with 7 mL 75 % methanol-water (v/v) using ultrasonication, and the resultant solution was centrifuged at 20 • C at a high-speed centrifuge for 10 min (5000 rpm). The absorbance-unincorporated TFSG in each sample was detected using the sodium nitrite-aluminum nitrate colorimetric method, and astilbin and its stereoisomers were detected using UPLC. The Encapsulation efficiency (EE) and loading capacity (LC) values were obtained from Eqs. (1) and (2), respectively (Radwan, Elmaadawy, Yousry, Elmeshad, & Shoukri, 2020): where W total is the total amount of the required drug in the solution, W free is the amount of free drug in the supernatant, and WZ-L is the total amount of Z-L.
Fourier transform infrared spectroscopy
All Fourier transform infrared (FTIR) spectra were recorded on an FTIR spectrometer (Thermo Fisher, USA) using the attenuated total reflection method. The FTIR samples were prepared by mixing the optimized complex nanoparticle samples (2 mg) with spectroscopic grade KBr (198 mg), and were compressed to the sheet. The spectrum of KBr powder without drugs was determined as the blank baseline. The measurement was performed in the range of 4000-400 cm − 1 with a spectral resolution of 4 cm − 1 , with 64 scans being collected for one spectrum.
Nuclear magnetic resonance spectroscopy measurement
All 1 H NMR spectra were recorded on a Nuclear Magnetic Resonance (NMR) Spectrometer (DD2 400-MR, Agilent, USA), operating at 400.13 MHz with pulsed field gradient capabilities for hydrogen. The NMR samples were firstly evaporated under a stream of nitrogen and then in vacuum (overnight), then dissolved in deuterated DMSO. 1 H NMR spectra were acquired at an acquisition time of 3.2 s and a recycle delay of 1 s acquiring 64 K data points. 1 H NMR spectra of TFSG and Z-L-TFSG NPs were collected.
Differential scanning calorimetry
The physical state of the optimized TFSG, Z-L, and Z-L-TFSG NPs was determined using differential scanning calorimetry (DSC, Q2000, Thermo Fisher, USA). Accurately weighed, freeze-dried samples of 3 mg were placed in an aluminum pan, and an empty, sealed aluminum pan was used as the reference. The samples were heated from -20 • C to 200 • C at a heating rate of 10 • C/min under nitrogen gas flow. DSC thermograms of the physical mixtures of TFSG, Z-L, and Z-L-TFSG NPs were collected.
Transmission electron microscopy
The shape and the surface morphology of the optimized Z-L and Z-L-TFSG NPs were analyzed by Transmission electron microscopy (TEM, H77800, Hitachi, Tokyo, Japan). The samples were first dropped on a copper 400-mesh coated with carbon film; the mesh was then stained with 2 % phosphotungstic acid for 3 min. The freshly stained samples were then observed and photographed.
Atomic force microscopy
The morphology of Z-L and Z-L-TFSG NPs was analyzed using atomic force microscopy (AFM, Dimension Icon, Bruker, Germany). The analysis was performed in the air under contact mode using a Sharp Nitride Lever probe with a spring constant of 0.04 N/m. The samples were sonicated, deposited on freshly cleaved mica substrates, and dried with the help of compressed air.
Determinants of antioxidant activity
DPPH radical scavenging activity TFSG and powders of Z-L-TFSG NPs were dissolved in distilled water and 75 % methanol-water (v/v) to obtain the corresponding TFSG (0.3 mg/mL) and Z-L-TFSG NP (0.3 mg/mL, calculated as TFSG) solutions. Four milliliters of the samples (TFSG or Z-L-TFSG NPs) were added to 4 mL DPPH-ethanol solution (0.1 mmol/L) and mixed thoroughly. The solutions obtained were incubated in dark for 30 min. They were then analyzed at different intervals of time (0, 30, 60, 120, and 240 min). The absorbance values of the sample solutions were determined at 519 nm on an ultraviolet-visible (UV) spectrophotometer (A sample ). The samples mixed with distilled water were used as the blank (A blank ). The mixed solution of DPPH and distilled water was used as the control (A control ). The scavenging rate (Donsì, Voudouris, Veen, & Velikov, 2017;Song et al., 2022;Kao et al., 2019) was determined using the following formula: ABTS radical scavenging activity ABTS free radicals were obtained by mixing potassium persulfate (2.6 mmol/L) with an equal volume of ABTS (7.4 mmol/L) and placing the solution in darkness for 12 h. The working solution of ABTS was prepared by diluting the obtained solution with phosphate-buffered saline buffer (PBS, pH7.4) to an absorbance value of 0.70 (734 nm) and placed at 4 • C. The TFSG and Z-L-TFSG NP solutions (as described in Section 2.12.1) were added to 50 mL of distilled water to make sample solutions. Two milliliters of the ABTS working solution was mixed with 3 mL of the sample solutions and reacted for 10 min (A sample ). The obtained solutions were analyzed at different intervals of time (0, 30, 60, 120, and 240 min). The ABTS-water solution was used as a blank (A blank ). The absorbance values of the above solutions were determined at 734 nm using UV spectroscopy (Saeed, Khan, & Shabbir, 2012;Yi, He, Peng, & Fan, 2022) and their scavenging activities were obtained using the following formula: Hydroxyl radical scavenging activity Hydroxyl radicals (⋅OH) were generated by carrying out a classical Fenton reaction in a freshly prepared reaction mixture containing H 2 O 2 and FeSO 4 . Salicylic acid was used to react with the ⋅OHs, which yielded 2,3-dihydroxybenzoic acid with a typical UV absorption at 510 nm. The absorption of the sample solutions was compared with that of the blank solution to determine the hydroxyl radical scavenging activity. To make the sample solutions, first, 1 mL FeSO 4 (9 mmol/L) and salicylic acid--ethanol solution (9 mmol/L) were taken in the colorimetric tubes; TFSG and Z-L-TFSG NPs solutions were then added to the colorimetric tubes (as described in Section 2.12.1); finally, 1 mL H 2 O 2 (8.8 mmol/L) was added to the colorimetric tubes and the solutions were diluted to 15 mL by adding distilled water. The tubes were heated in a water bath at 40 • C for 30 min; the solutions were analyzed at different intervals of time (0, 30, 60, 120, and 240 min) (Saeed et al., 2012). The absorbance values of the samples were determined at 510 nm by UV spectroscopy (A sample ) and the absorbance of the solution without the samples was used as the blank (A blank ). The free radical scavenging activity (%) was obtained using the following equation:
In vitro controlled release study
The controlled release of free TFSG, TFSG in Z-TFSG NPs, and TFSG in Z-L-TFSG NPs was carried out under simulated gastrointestinal conditions . Three milliliters of the prepared samples were sealed in a dialysis bag (3500 Da molecular cut-off) and then incubated in a flask with 150 mL simulated gastric fluid (SGF, pepsin aqueous solution, 15 mg/mL) with gentle shaking for 2 h (37 • C). The dialysis bag with the sample was transferred to another flask containing 150 mL of the simulated intestinal fluid (SIF; pancreatin aqueous solution, 30 mg/mL) and incubated for 9 h (37 • C). The TFSG content in a 1 mL release medium was determined using the sodium nitrite-aluminum nitrate colorimetric method (as described in Section 2.2). Equal volumes of fresh simulated fluids were added to the flask to maintain a constant volume (Li et al., 2019). The drug release rate (%) was calculated using the following equation: where C is the concentration (mg/mL) of the drug in the solution, V is the volume (mL) of the solution, and M is the amount of drug (mg) in the complex NPs.
Cytotoxicity assay
The cytotoxicity of TFSG and Z-L-TFSG NPs was determined by the cell counting kit-8 (CCK-8) assay using HepG2 cells. The HepG2 cells were cultured in the DMEM medium (high glucose) with 1 % penicillin-streptomycin and 10 % fetal bovine serum. After reaching 80 % confluence, the cells were digested with 0.25 % trypsin to produce a single cell suspension; 100 μL of the suspension (1 × 10 4 cells/mL) was seeded into a 96-well plate and incubated at 37 • C with 5 % CO 2 , 95 % air, and 100 % relative humidity, to allow for cell attachment. The cells were then treated with serial concentrations of TFSG (0, 0.5, 1, 1.5, 2, 5, and 10 mg/mL) or Z-L-TFSG NPs (0, 0.5, 1, 1.5, 2, 5, and 10 mg/mL; calculated as TFSG) for 24 h. Following this, 10 μL of the CCK-8 solution was added to each well, and the cells were incubated for an additional 4 h in an incubator. The optical density was recorded at 450 nm using a microplate reader (MQX200, BIO-TEK, USA). Cell viability was obtained using the following equation: where A control is the absorbance of the negative control, A sample is the absorbance of the HepG2 cells treated with TFSG or Z-L-TFSG NPs, and A blank is the absorbance of the blank sample.
Selection of H2O2 concentration in the oxidative damage model
The HepG2 cells (1 × 10 4 cells/mL) were incubated in a 96-well plate at 37 • C for 24 h. The cells were then treated with H 2 O 2 (0, 100, 200, 300, 400, 500, 800, and 1000 μmol/L) for 24 h. Following this, 10 μL CCK-8 solution was added to each well, and the cells were incubated for another 4 h. Cell viability was evaluated using the CCK-8 assay.
Determination of H 2 O 2 -induced reactive oxygen in HepG2 cells
The HepG2 cells (1 × 10 4 cells/mL) were cultured in a 96-well plate at 37 • C for 24 h. The cells were then treated with the blank group, control group (400 μmol/L H 2 O 2 ), TFSG group (0.5, 1, 1.5, 2, 5, and 10 mg/mL), and the Z-L-TFSG NPs (0.5, 1, 1.5, 2, 5, and 10 mg/mL; calculated as TFSG) for 1 h. Then, H 2 O 2 was added to make the final concentration 400 μmol/L (total volume of 200 μL). The culture medium was discarded after incubation for 24 h, and the cells were washed three times with 200 μL DMEM. Following this, each well was filled with a 150 μL DCFH-DA fluorescent probe (μmol/L). After incubating at 37 • C in the dark for 20 min, the cells were washed three times using cool PBS. After the PBS was absorbed in the wells, reactive oxygen production was measured using a fluorescence inverted microscope (M152-N, MSHOT, China).
Statistical analysis
Data were reported as mean ± standard deviation, and statistical analyses were performed by one-way analysis of variance (ANOVA) using the GraphPad Prism 8 software. Differences were considered to be significant when P < 0.05.
Extraction and purification of total flavonoids from S. Glabra
The content of total flavonoids was 17.1 % when extracted using ethanol. After purification using the D101 macroporous resin, the content of total flavonoids increased to 60.59 %. The results showed that the D101 macroporous resin purified TFSG efficiently. In our research, the maximum amount of total flavonoids was obtained when the pH of the sample solution was 5, indicating that TFSG, which is a weak acidic substance, exhibits a good adsorption effect in an acidic environment.
Particle size, zeta potential, and PDI
The effects of zein:lecithin mass ratios on the zeta potential, particle size, and PDI of the NPs are presented in Table 1. The average particle size of the Z-TFSG and L-TFSG NPs was around 160 and 118 nm, respectively. The particle size of Z-L-TFSG NPs varied with the zein: lecithin mass ratios. With the increase in the amount of lecithin, the particle size of Z-L-TFSG NPs increased at first and then decreased. When the mass ratios of zein:lecithin were 2:1, 3:2, and 1:1, the particle sizes of Z-L-TFSG NPs were larger (216, 272, and 278 nm, respectively) than that of the Z-TFSG NPs (160 nm). It was reported that the C16 alkyl chain of sodium stearate may be mounted onto the hydrophobic region of zein to obtain a zein sodium stearate complex particle with partially unfolded hydrophobic microregions, and an increase in anisotropy was accompanied by the partial unfolding of zein, which could contribute to the large particle size (Gao et al., 2014). Further increasing the lecithin mass ratio led to a significant reduction in the particle size from 278 to 132 and 139 nm for the zein: lecithin mass ratios of 2:3 and 1:2 Z-L-TFSG NPs, respectively. One explanation for this is that lecithin and zein could form a compact structure at a relatively high mass ratio of lecithin, which could decrease the size of composite NPs. Our results were consistent with those of a previous study which reported that the zein: lecithin mass ratios exhibit the same effects on the particle size when curcumin was entrapped in the Z-L NPs (Dai et al., 2017). As shown in Table 1, PDI exhibited a similar trend as particle size. The PDI of the NPs was < 0.2 when the mass ratios of zein: lecithin were between 2:3 and 1:2, which indicated that the particle sizes of the NPs were uniform.
As summarized in Table 1, the zeta potentials of the Z-TFSG and L-TFSG NPs were -35.9 and -44.2 mV, respectively. When the zein: lecithin mass ratio varied from 2:1 to 1:2, the zeta potential of the Z-L-TFSG NPs changed from -34.2 to -46.8 mV. The zeta potential reflects the stability of the NP emulsions. If the nanoparticle emulsions have a high absolute value of zeta potential (positive or negative), the emulsion is more stable and aggregation of the NPs is more difficult (Siddiqui, Alayoubi, El-Malah, & Nazzal, 2014). Generally, NPs with a zeta potential > 30 mV (absolute value) will have good colloidal stability due to sufficient electrostatic repulsion; when the mass ratio of zein: lecithin changed from 1:1 to 2:3, the zeta potential of Z-L-TFSG NPs changed from -37.1 to -44.1 mV. Further increasing the lecithin level rarely showed any influence on the zeta potential of the NP emulsions. Based on the above results, the Z-L-TFSG NPs with higher concentrations of lecithin (zein: lecithin mass ratios of 2:3 or 1:2) were associated with lower particle size, PDI, and higher zeta potential, which contributed to the improved stability of the NP emulsion. The improved stability was because using lecithin as a surfactant improved the emulsifying effect of the NP emulsion. When the zeta potential (absolute value) of the NPs increases, the repulsive force among the particles increases, and the particle size decreases; consequently, the NP emulsions become more stable and the aggregation of the NPs becomes more difficult.
Encapsulation efficiency and loading capacity
The Encapsulation efficiency (EE) and loading capacity (LC) of Z-L-TFSG NPs (S1-S7) are also shown in Table 1. The EE of TFSG loaded onto zein nanoparticles was only 72.08 % in the absence of lecithin. However, with the addition of lecithin, the EE and LC of the complex NPs gradually increased to 76.24 % and 3.02 %, respectively. The EE of Z-L-TFSG NPs obviously increased from 76.24 % to 95.12 % at the zein: lecithin mass ratio of 1:1, and the corresponding LC was 4.72 %. When the mass ratio of zein:lecithin changed from 1:1 to 1:2, the EE of the NPs changed from 95.12 % to 98.00 %, and the LC changed from 4.72 % to 3.97 %. Our results suggested that the EE and LC can be efficiently improved with the addition of lecithin to zein. Lecithin could help entrap the TFSG adhered to the surface of zein in the complex NPs (Fig. 1). Furthermore, lecithin might form a complex with the free TFSG, which could increase the EE and LC (Sebaaly, Jraij, Fessi, Charcosset, & Greige-Gerges, 2015). A similar finding was reported in a previous study, where lecithin was added to the zein-EGCG nanoparticles, and the EE and LC of EGCG were significantly increased (Xie et al., 2021). Moreover, the main bioactive compound astilbin and its stereoisomers (isoastilbin, neoisoastilbin, and neoastilbin) were also detected; the chromatograms, EE, and LC of astilbin and its stereoisomers were shown in Fig. 2A. The EE and LC of astilbin and its stereoisomers changed in the same manner as TFSG, which suggested that the Z-L system could also increase the encapsulation and loading efficiency of the main bioactive compound in TFSG.
Storage stability
The change in PDI and particle size was observed to assess the storage stability of the Z-L-TFSG NPs. The change in PDI and particle size and the precipitation of the complex NPs in each group after 14 days of storage are shown in Fig. 2. As shown in Fig. 2D, the PDI and particle size of S1, S2, S3, and S4 significantly changed after 14 days of storage; a large number of sediments were produced in these groups. In groups S5 and S6, with higher concentrations of lecithin, the particle size and PDI changed gradually after 14 days of storage and only a small amount of sediment was produced. The minor change in the particle size suggested that the storage stability of the S5 and S6 complex NPs was good. The PDI values of S5 and S6 were < 0.3, which suggested that the size distribution of the two samples was uniform during storage.
Taken together, 2:3 was selected as the optimal zein: lecithin mass ratio for the Z-L-TFSG NPs (sample S5), which has been associated with a relatively better EE, LC, PDI, particle size, zeta potential, and storage stability than the other samples. Finally, the Z-L-TFSG NPs with a zein: lecithin: TFSG mass ratio of 10:15:1 was used to encapsulate TFSG in the following studies.
FTIR spectroscopy
The potential intermolecular interactions of TFSG, zein, and lecithin were analyzed by FTIR spectra. According to Fig. 3A, The FTIR spectra of Z-L showed a strong absorption peak at 3360 cm − 1 , which may be attributed to the stretching of the -OH bonds (Miguel A. Cerqueira, 2011).With the addition of TFSG and lecithin, the absorption peak of -OH bonds shifted from 3327 to 3425 cm − 1 , suggesting that hydrogen bonds may have been formed among TFSG, zein, and lecithin. The characteristic peaks of flavones did not appear after their interactions with the protein, which suggested that TFSG might have combined with zein and lecithin by hydrophobic effects or hydrogen bonding. Generally, the characteristic peaks of the protein occurred between 1600 and 1690 cm − 1 (amide I band) and 1480-1575 cm − 1 (amide II band). The amide I and II bands of the TFSG were 1661 and 1089 cm − 1 , respectively. The vibration spectra of the amide I band of Z-L-TFSG NPs in different groups were observed to be shifted from 1661 to 1658 cm − 1 and the amide II band from 1044 to 1089 cm − 1 . These variations may be because of the hydrophobic and electrostatic interactions between zein-lecithin or zein with TFSG (Miguel, Bartolomenu, José, & António, 2011). Above all, zein, lecithin, and TFSG could form stable Z-L-TFSG NPs via hydrophobic, hydrogen bonding, and electrostatic interactions.
NMR measurement
NMR spectroscopy is one of the primary methods to investigate the hydrogen bonding interactions in solution. The sensitivity of 1 H chemical shifts and peak shape changed in the electronic environment makes it a useful probe for detecting hydrogen bonded protons. In order to investigate the existence of intermolecular hydrogen bonds, we performed 1 H NMR spectroscopy study both TFSG and Z-L-TFSG NPs in deuterated DMSO (Fig. 3B).
It was well known that astilbin, isoastilbin, neoisoastilbin, neoastilbin were the main flavonoid compounds in TFSG. The 1 H NMR spectra of TFSG showed all characteristic peaks of flavonoids after encapsulated with lecithin-zein, and discernible changes was also observed in the aromatic signals δ ~ 7.3-6.4 (Ar-H) and in the shape and linewidth of the phenolic proton. Generally, the 1 H NMR resonances of phenol -OH groups in flavonoid compounds could be observed as a significant proton peak signal in aprotic solvents, e.g. OH (5) group, which was the most deshielded signal at δ 12.15 ppm due to its Fig. 2. Encapsulation efficiency, loading capacity and storage stability of Z-L-TFSG NPs. UPLC chromatogram of astilbin and its stereoisomers in TFSG (A), encapsulation efficiency (B) and loading capacity (C) of TFSG, astilbin and its stereoisomers in Z-L-TFSG systems. The change of precipitation(D), PDI (E), and particle size (F) of complex nanoparticles in each groups after 14 days of storage at 4 •C. *p < 0.05; **p < 0.01; ns, not significant (n = 3). participation in a strong intramolecular hydrogen bond of the C (5) OH• • •OC (4) moiety in the A ring, the same was also observed at C (3) OH and C (4) OH (δ 8.6 ppm). However, the 1 H NMR resonances of -OH groups would appear as broad signals especially in protic solvents, due to fast exchange of the -OH protons with the solvent, or with the hydrogen bond between different molecules in the aprotic solvents. In our study, the 1 H NMR resonances of phenol -OH groups display broad signals after encapsulated the TFSG with lecithin-zein in deuterated DMSO-d6, which may be attributed to the intermolecular exchange of phenol -OH proton from flavonoid with various -OH groups protons of the lecithin-zein. The significant proton peak signal change of 5-OH and 3-OH also suggest it formed a stable hydrogen bond between TFSG and lecithin-zein. Meanwhile, the aromatic proton signals of TFSG shift downfield from δ 7.0 ppm to δ 7.2 ppm and from δ 6.7 ppm to δ 6.8 ppm for H-2 ′ and H-5 ′ , H-6 ′ from free to bound state, respectively. The minor changes support indicated aromatic protons of TFSG may be exposed in the hydrophobic cavity of zein.
DSC analysis
The DSC results of pure TFSG, Z-L, and Z-L-TFSG NPs are shown in Fig. 3C.
The DSC curve of TFSG showed a strong endothermic peak at around 100.24 • C (T max 185.21 • C, T min 2.05 • C), which may be attributed to its melting point and indicated the loss of its crystalline structure (Sebaaly et al., 2015). The characteristic endothermal peak of TFSG was not detected in Z-L-TFSG NPs as shown in Fig. 3C, indicating that TFSG in the amorphous form dissolved in the NPs.
As shown in Fig. 3C, Z-L had a broad absorption peak at around 100.30 • C (T max 133.45 • C, T min − 1.38 • C); however, the peak of the thermogram became broader for Z-L-TFSG NPs. The broader peaks indicated higher heterogeneity (range of molecular weights melting) in the sample, which, in turn, confirmed encapsulated TFSG in the Z-L dispersions. As may be seen in Fig. 3C, the denaturation temperature of Z-L-TFSG NPs was higher than that of TFSG and Z-L. The observation may be attributed to the fact that the addition of lecithin increases the electrostatic interaction and hydrophobic effects of the components in the NPs, which, in turn, lead to the high endothermic peak temperature and improve the thermal stability of the complex nanoparticles. Moreover, the enthalpy change (ΔH) of TFSG, Z-L dispersions, and Z-L-TFSG NPs were 132.78, 163.51, and 241.15 J/g, respectively (Fig. 3C). The ΔH is affected by the presence of different components including hydrophobic material, and the changes in ΔH implied that there are chemical or physical interactions in the NPs mixtures. Therefore, the increased ΔH in Z-L-TFSG NPs may due to the extra bonds formed between the Z-L dispersions and Z-L molecules.
Microstructures by TEM and AFM
The microstructures of freshly fabricated Z-L NPs and Z-L-TFSG NPs were observed by TEM and AFM (Fig. 3). As seen in the TEM images, the Z-L NPs were found to be spherical with a smooth surface and diameters of around 100 nm ( Fig. 3D and E); the observation was consistent with the results of DLS. The Z-L-TFSG NPs showed a spherical shape with uniform diameters of around 130 nm ( Fig. 3H and I), which might be because of attachment onto the surface of zein NPs. Moreover, the determination of zeta potential confirmed that anionic lecithin might have been deposited on the surface of zein NPs due to the electrostatic attractions.
The morphological structure presented by AFM in Fig. 3 (F, G, J, K) also confirmed the TEM results. The AFM images revealed the Z-L NPs to be spherical with uniform size, with the sample of Z-L-TFSG NPs showing a larger size. We found that some of the Z-L-TFSG NPs had geometrically irregular shapes, which may be attributed to the Z-L-TFSG NPs being clumped and connected. A similar result was also reported by previous studies (Miguel, Bartolomenu, José, & António, 2011). The root-mean-square surface roughness (R ms ) was used to evaluate the surface roughness of the samples. As shown in Fig. 3F and G, the surface of the Z-L NPs was relatively smooth (R ms = 2.3 nm). After loading with TFSG, the R ms of the Z-L-TFSG NPs increased to 6.6 nm, which was 2.2 times that of the thickness of Z-L NPs. The morphological observation indicated that the Z-L-TFSG NPs were prepared and well distributed.
Controlled release of TFSG
The complex NPs were subjected to a simulated gastrointestinal condition to evaluate the release behavior of TFSG from Z-L-TFSG NPs (Fig. 4A). The cumulative release of TFSG from Z-L-TFSG NPs in SGF gradually increased with the increase in the digestion time and reached approximately 20.6 % after 2 h. However, the cumulative release of TFSG from Z-TFSG NPs was about 35.3 % after 2 h of digestion. In SIF, with increasing digestion time, the cumulative release of TFSG from Z-L-TFSG NPs was also lower than that of Z-TFSG NPs. This was probably owing to the electrostatic interaction between zein and lecithin in the acidic environment (Ahmad, Ashraf, Gani, & Gani, 2018;Caporaletti, Carbonaro, Maselli, & Nucara, 2017). 56.0 % of free TFSG was released quickly from Z-L-TFSG NPs in SGF after being digested for 2 h, and the residual 44.0 % was also quickly released after digestion in SIF for 6 h. The release rate of encapsulated TFSG significantly decreased in the present study, which suggested that the encapsulation exhibited a controlled release effect. The release rate of TFSG in Z-L complex nanoparticles was the lowest among TFSG, Z-TFSG NPs, and Z-L-TFSG NPs. Therefore, it may be concluded that Z-L-TFSG NPs provided better protection to encapsulated TFSG, which may improve the stability of TFSG in the gastrointestinal tract.
In vitro antioxidant activity
The in vitro antioxidant activity of TFSG, Z-L-TFSG NPs, and Z-L is shown in Fig. 4. In the beginning, the scavenging capacity of TFSG to DPPH⋅ and ⋅OH radicals was the highest, 72.0 % and 75.0 %, respectively. With the increase in the reaction time, the antioxidant capacity of TFSG on ⋅ DPPH⋅ and ⋅OH radicals decreased owing to their instability and easy degradation in the reaction environment. On the contrary, the scavenging capacity of Z-L-TFSG NPs on DPPH⋅ and ⋅OH radicals was the lowest at the beginning of the reaction, after which it increased. After incubation for 120 min, the antioxidant activities of the Z-L-TFSG NPs to DPPH⋅ and ⋅OH radicals were found to be the highest, which was significantly higher than that of free TFSG (P < 0.05). Z-L-TFSG NPs and TFSG also exhibited a similar scavenging ability during ABTS measurement, but the difference in the greatest scavenging ability between Z-L-TFSG NPs and TFSG was not significant. Our result implied that TFSG was unstable in the reaction environment and that the Z-L layers had a protective effect on TFSG. Moreover, our study suggested that the TFSG encapsulated by Z-L NPs could enhance its antioxidant activity. This could be due to the aromatic amino acids such as phenylalanine, tyrosine, and tryptophan in zein; the role of these aromatic amino acids was demonstrated using the heat-induced technique. Thus, these aromatic amino acids could increase the antioxidant capacity of these proteins (Song et al., 2022). Our results suggest that the encapsulation of TFSG in Z-L-TFSG NPs could be an efficient method to enhance its antioxidant ability.
In vitro cytotoxicity
The CCK-8 assay was applied to evaluate the potential cellular cytotoxicity of Z-L-TFSG NPs at different concentrations. As presented in Fig. 5A, TFSG and Z-L-TFSG NPs showed similar effects on cell activity in the range of 0.5-10 mmol/L. Among the tested concentrations, TFSG and Z-L-TFSG NPs did not show obvious cytotoxicity in the range of 0.5-5 mmol/L. Over 89 % of the cells treated with ≤10 mmol/L Z-L-TFSG NPs survived after incubation for 24 h, which can be considered in the subsequent experiments as non-toxic concentrations. Significant cytotoxicity was observed after exposure to TFSG or Z-L-TFSG NPs at a concentration of 10 mmol/L for 24 h. The cell survival rates reduced to 84-86 %, showing only slight cytotoxicity against the HepG2 cells. Our results suggested that Z-L-TFSG NPs could not increase the growth inhibition of TFSG on HepG2 cells, which were considered to be relatively safe.
Protective effects of Z-L-TFSG NPs against H 2 O 2 -induced oxidative damage
As a kind of reactive oxygen species (ROS), H 2 O 2 can enhance oxidative stress, induce lipid peroxidation, and lead to cell damage or ultimately death⋅H 2 O 2 -induced oxidative damage has been widely used to evaluate the therapeutic effects of many bioactive substances on oxidative stress. HepG2 cells are usually selected to establish antioxidant models to evaluate the antioxidant capacity of bioactive extraction. Therefore, we studied the protective effects of TFSG and Z-L-TFSG NPs on the H 2 O 2 -induced oxidative damage of HepG2 cells (He, Bu, Xie, & Liang, 2019). As illustrated in Fig. 5B, in the range of 100-1000 μmol/L, HepG2 cell viability decreased with increasing H 2 O 2 concentration, exhibiting dose-dependent toxicity. When the concentration of H 2 O 2 was 400 μmol/L, the cell viability of the HepG2 cells decreased to 39.2 %, which indicated that the oxidative damage induced by H 2 O 2 was serious Thus, 400 μmol/L was chosen as the experimental concentration to establish oxidatively damaged models. Fig. 5C shows that TFSG and Z-L-TFSG NPs could inhibit the H 2 O 2induced oxidative stress. In the control group (H 2 O 2 -injured group) treated with only H 2 O 2 , the cell viability was 29.2 % of that of the blank group. With the increase in the TFSG and Z-L-TFSG NP concentration in the experimental group, both the protective effect for oxidative damage and the ability to inhibit oxidative stress on the cells were improved, suggesting that the protective effects of both TFSG and Z-L-TFSG NPs on cell viability were concentration-dependent. When the concentration of Z-L-TFSG NPs was 2 and 5 mg/mL, the cell viability enhanced to 54.2 % and 62.6 % respectively, which was notably higher than that of TFSG. Our results suggested that TFSG encapsulated by the Z-L NPs could enhance the protective effects of TFSG against H 2 O 2 -induced oxidative damage. Together with the cytotoxicity study, our results confirmed that the protective effects of TFSG and Z-L-TFSG NPs on HepG2 cells were achieved by inhibiting oxidative stress rather than by stimulating the proliferation of HepG2 cells.
Inhibitory action of the H 2 O 2 -induced damage by ROS on HepG2 cells
2,7-Dichlorodi-hydrofluorescein diacetate (DCFH-DA) is a kind of fluorescent dye without any fluorescence itself, which can penetrate the cell membrane freely. The level of ROS in HepG2 cells can be evaluated by determining the fluorescence intensity of 2 ′ ,7 ′ -dichlorofluorescein (DCF) (He et al., 2019). The stronger the fluorescence intensity in fluorescence pictures, the higher the ROS level in the cells. In the present study, the DCFH-DA fluorescence probe was applied to detect the ROS level in the HepG2 cells. After treatment with H 2 O 2 , the fluorescence intensity of DCF increased significantly, which indicated that the H 2 O 2induced ROS emerged in the HepG2 cells. In the control group (H 2 O 2injured group), H 2 O 2 induced a significant increase of ROS in the HepG2 cells, which led to a 2.4-fold enhancement in the fluorescence intensity compared with the blank group. In the experimental group, TFSG and Z-L-TFSG NPs had similar effects on ROS, and the fluorescence intensity decreased with increasing TFSG or Z-L-TFSG NPs doses, exhibiting a dose-dependent manner ( Fig. 5D and E). When the dose of TFSG and Z-L-TFSG NPs was >1 mg/mL, the fluorescence intensity was obviously lower than that of the control group. Our results suggested that TFSG and Z-L-TFSG NPs could inhibit the damage caused by ROS to HepG2 cells. Z-L-TFSG NPs tend to be associated with an inhibition effect higher than that of TFSG, but the difference was not significant.
Inference of the formation mechanism of Z-L-TFSG NPs
Herein, we proposed a schematic diagram to elucidate the possible formation mechanism of Z-L-TFSG NPs in Fig. 1. It was reported that zein could self-assemble into colloidal particles when zein and curcumin in aqueous ethanol solutions were dropped into deionized water. With the increasing addition of lecithin, the interaction among zein, lecithin ; Fluorescence staining analysis of ROS production (E) and the corresponding fluorescence intensity (D) in HepG2 cells under fluorescent microscope using a fluorescent microplate reader. *p < 0.05, **p < 0.01, ***p < 0.001 vs control group (n = 5). and curcumin was increased (Dai et al., 2017). In our study, based on the results of zeta-potential, turbidity measurement, DSC, FTIR, as well as the morphological observation by TEM and AFM, we proposed the hypothesis of the formation mechanism of Z-L-TFSG NPs. As shown in Fig. 1, the alkyl chain of lecithin was interacted with zein and TFSG mainly through electrostatic interaction, hydrogen bonding, and hydrophobic effects, and formed a more compact structure, which may result in the smaller size of the nanoparticles. It also enhanced the storage stability and increased the LE and EE of TFSG.
Conclusions
In the present study, TFSG was extracted and purified. Z-L-TFSG NPs were prepared using the ASCP technique. Under the optimal mass ratio of zein:lecithin:TFSG (10:15:1), the Z-L-TFSG NPs had a particle size of approximately 131 nm, a PDI of approximately 0.18, a zeta potential of approximately -44.1 mV, and an EE of 98.0 %. The nanoparticles were characterized by DSC, FTIR, TEM, and AFM. Z-L-TFSG NPs showed superior stability and better controlled release property in simulated gastrointestinal digestion. The encapsulation of TFSG in Z-L NPs could improve its in vitro antioxidant capacity. Moreover, Z-L-TFSG NPs could enhance the protective effects of TFSG against H 2 O 2 -induced oxidative damage to HepG2 cells. Based on these results, it may be concluded that the encapsulation of TFSG in Z-L NPs could improve the stability and show a more significant effect on the antioxidant capacity than that of TFSG. As such, the Z-L self-assembled NPs could be developed as a promising delivery system for the integrated encapsulation of multiple flavonoids in food and drugs. | 9,997.4 | 2023-01-01T00:00:00.000 | [
"Chemistry",
"Agricultural and Food Sciences"
] |
Murinization of Internalin Extends Its Receptor Repertoire, Altering Listeria monocytogenes Cell Tropism and Host Responses
Listeria monocytogenes (Lm) is an invasive foodborne pathogen that leads to severe central nervous system and maternal-fetal infections. Lm ability to actively cross the intestinal barrier is one of its key pathogenic properties. Lm crosses the intestinal epithelium upon the interaction of its surface protein internalin (InlA) with its host receptor E-cadherin (Ecad). InlA-Ecad interaction is species-specific, does not occur in wild-type mice, but does in transgenic mice expressing human Ecad and knock-in mice expressing humanized mouse Ecad. To study listeriosis in wild-type mice, InlA has been “murinized” to interact with mouse Ecad. Here, we demonstrate that, unexpectedly, murinized InlA (InlAm) mediates not only Ecad-dependent internalization, but also N-cadherin-dependent internalization. Consequently, InlAm-expressing Lm targets not only goblet cells expressing luminally-accessible Ecad, as does Lm in humanized mice, but also targets villous M cells, which express luminally-accessible N-cadherin. This aberrant Lm portal of entry results in enhanced innate immune responses and intestinal barrier damage, both of which are not observed in wild-type Lm-infected humanized mice. Murinization of InlA therefore not only extends the host range of Lm, but also broadens its receptor repertoire, providing Lm with artifactual pathogenic properties. These results challenge the relevance of using InlAm-expressing Lm to study human listeriosis and in vivo host responses to this human pathogen.
Introduction
Co-evolution of microbes with their hosts can select stringently specific host-microbe interactions at the cell, tissue and species levels [1]. Species-specific host-microbe interactions, which are the rule rather than the exception, pose a challenge for the use of laboratory animal models to study human pathogens, including Listeria monocytogenes (Lm), the etiological agent of listeriosis, a deadly foodborne infection. Lm is able to actively cross the intestinal barrier, reach the systemic circulation and cross the blood-brain and placental barriers, leading to its dissemination to the central nervous system and the fetus [2].
The mouse is a genetically amenable model that is widely used to investigate human diseases [3,4]. To obtain a mouse model in which the pathogenic properties of a given pathogen are similar to what is observed in human, species specificity can be circumvented by humanizing the mouse by transgenesis [5,6,7,8], knock-in [9], knock-out [10] or xenograft techniques [11]. One can also adapt the pathogen to the mouse by multiple passages on cell lines [12,13] or in vivo [14], or specifically ''murinize'' a pathogen ligand so that it interacts with the mouse ortholog of a species-specific human receptor [15,16].
The Lm surface protein InlA interacts with E-cadherin (Ecad) and mediates Lm entry into epithelial cells, which express this adherens junction protein [17,18]. Cadherins constitute a family of calcium-dependent cell adhesion receptors. Ecad is expressed mainly in epithelia, whereas N-cadherin (Ncad) is found primarily in neuronal cells and endothelial cells together with VE-cadherin [19,20]. Ncad can also be coexpressed with Ecad in epithelial cells [21]. Importantly, Ncad has been reported to not act as a receptor for InlA, and so far Ecad is the only known classical cadherin acting as a receptor for InlA [18]. In contrast to Ecad from human, guinea pig, rabbit and gerbil, mouse Ecad (mEcad) and rat Ecad are not recognized by InlA and do not promote bacterial entry [9,22]. The interaction of InlB, another Lm invasion protein, with its host receptor is also species-specific [23]. InlB recognizes the hepatocyte growth factor receptor Met of human, mouse, rat and gerbil but not that of guinea pig and rabbit [9,23,24].
Two mouse lines have been established to study InlA-Ecad interaction in vivo: a transgenic mouse line expressing human Ecad (hEcad) in enterocytes (hEcad Tg) [6], and a humanized mEcad knock-in mouse line (E16P KI) with an E16P amino acid substitution which enables mEcad to interact with InlA without affecting Ecad homophilic interactions and allows Lm internalization [9,22]. Using these two humanized mouse models, we have demonstrated that InlA mediates Lm crossing of the intestinal epithelium upon targeting of luminally-accessible Ecad around goblet cells [6,9,25], and that InlA and InlB act interdependently to mediate the crossing of the placental barrier [9]. Epidemiological investigations have confirmed the relevance of these experimental findings, and shown that InlA is implicated in Lm crossing of human intestinal and placental barriers [9,26].
In 2007, Wollert et al. engineered a genetically modified InlA with the purpose of increasing its binding affinity to hEcad [16]. Two amino acid substitutions in InlA, S192N and Y369S, were shown to enhance InlA binding affinity to hEcad [16]. Neither S192N nor Y369S substitution has been observed in the more than 500 Lm isolates InlA sequences we have checked (our unpublished results). Wollert et al. published that this increased affinity for hEcad translates into an increased bacterial entry into human epithelial cells (Caco-2) [16]. Importantly, Wollert et al. also showed that this modified InlA binds the extracellular cadherin domain 1 (EC1) of mEcad in solution with a comparable affinity to that of the wild-type (wt) InlA for hEcad EC1 [16]. They hypothesized that this interaction would allow Lm expressing this ''murinized'' InlA (InlA m ) to cross intestinal barrier and would render wt mice orally permissive to Lm infection, a phenotype which is mediated by InlA in permissive models [6]. In support of this hypothesis, Wollert et al. found an increased intestinal, spleen and liver bacterial loads of wt mice orally inoculated with Lm expressing InlA m , yet only after 3 to 4 days post infection, which is later than in models permissive to InlA-Ecad interaction [6,9,16]. Moreover, the ability of InlA m to mediate mEcad-dependent Lm internalization into host cells has never been tested. In addition, InlA m unexpectedly promoted pronounced inflammation and intestinal epithelial cell damages in wt mice [16], whereas wt InlA mediates the crossing of the intestinal barrier without inducing significant intestinal response and tissue damage in hEcad transgenic mice [6,27].
This prompted us to investigate the detailed properties of InlA m in cultured cells, as well as the in vivo cell and tissue tropisms of bacteria expressing InlA m , as compared to that of its isogenic parental Lm strain that expresses wt InlA. Here, we demonstrate that InlA m promotes bacterial entry not only into mEcad-positive but also into mEcad-negative mouse cells. We show that InlA mmediated entry into mEcad-negative cells is mouse Ncad (mNcad)dependent. Importantly, InlA m -mNcad interaction allows bacteria to specifically target Ncad-positive villous M cells in vivo, a cell type which is not targeted by Lm in humanized mouse models permissive to InlA-Ecad interaction. This leads to enhanced intestinal inflammatory responses and disruption of the intestinal barrier integrity, both of which are not observed in Lm-infected humanized mice and human listeriosis. Together, these results demonstrate that the murinization of InlA not only extends Lm host range, but also broadens its receptor repertoire, consequently changing Lm cell tropism and enhancing host immune responses to Lm. These results challenge the relevance of using InlA mexpressing Lm to study human listeriosis and in vivo host responses to this human pathogen.
Murinization of InlA promotes bacterial entry into mEcad-expressing cells but has no impact on bacterial entry into hEcad-expressing cells
We first investigated whether the increased affinity of InlA m to hEcad translates into an enhanced invasion of hEcadexpressing cells, as proposed by Wollert et al. [16]. To this end, we assessed InlA m -dependent entry into LoVo cell, a human epithelial cell line expressing hEcad [22]. Lm wt strain and Lm expressing InlA m (Lm-inlA m ) invaded LoVo cells at similar levels ( Figure 1A). Because Lm can be internalized by InlA-independent pathways such as InlB-Met, we transferred either inlA or inlA m onto the chromosome of Listeria innocua (Li), a naturally non-invasive and non-pathogenic Listeria species, in which heterologous expression of inlA has been shown to confer invasiveness [17,18,28]. Li expressing either InlA (Li-inlA) or InlA m (Li-inlA m ) were equally invasive in LoVo cells ( Figure 1B). These results indicate that contrary to what is reported by Wollert et al. [16], the increased affinity of InlA m to hEcad does not translate into an increased level of bacterial entry. Both Li-inlA and Li-inlA m recruited hEcad when incubated with LoVo cells, suggesting that hEcad is involved in both InlA-and InlA mmediated entries ( Figure 1E, upper panel). Because purifed InlA m interacts with the purified EC1 domain of mEcad, Wollert et al. have proposed, although not tested, that InlA m would mediate bacterial entry into mEcad-expressing cells [16]. We therefore tested the ability of InlA m to promote bacterial entry into the mouse epithelial cell line Nme, which expresses mEcad [29]. InlA m promoted bacterial entry into mEcadexpressing Nme cells, although to a lower level than InlA in hEcad-expressing LoVo cells ( Figure 1C and D). Li-inlA m also recruited mEcad during cell invasion, whereas as expected, Li-inlA does not ( Figure 1E, lower panel). Together, these results show that (i) the increased affinity of InlA m to hEcad does not enhance bacterial entry into hEcad-expressing cells, and (ii) the murinization of InlA confers to Lm an enhanced ability to be internalized into mEcad-expressing cells [16].
Author Summary
Co-evolution of microbes with their hosts can select stringently specific host-microbe interactions at the cell, tissue and species levels. Listeria monocytogenes (Lm) is a foodborne pathogen that causes a deadly systemic infection in humans. Lm crosses the intestinal epithelium upon the interaction of its surface protein InlA with Ecadherin (Ecad). InlA-Ecad interaction is species-specific, does not occur in wild-type mice, but does in transgenic mice expressing human Ecad and knock-in mice expressing humanized mouse Ecad. To study listeriosis in wildtype mice, InlA has been ''murinized'' to interact with mouse Ecad. Here, we demonstrate that in addition to interacting with mouse Ecad, InlA m also uses N-cadherin as a receptor, whereas InlA does not. This artifactual InlA m -Ncadherin interaction promotes bacterial translocation across villous M cells, a cell type which is not targeted by InlA-expressing bacteria. This leads to intestinal inflammation and intestinal barrier damage, both of which are not seen in humans and humanized mouse models permissive to InlA-Ecad interaction. These results challenge the relevance of using InlA m -expressing Lm as a model to study human listeriosis and host responses to this pathogen. They also illustrate that caution must be exercised before using ''murinized'' pathogens to study human infectious diseases. InlA m promotes mEcad-independent entry into mouse cells Monk et al. have reported that Lm-inlA m invades mouse CT26 cells more efficiently than Lm [13]. Strikingly, CT26 cells do not express mEcad (Figure 2A) [30], yet we confirmed that InlA m mediates bacterial entry into these cells ( Figure 2B). Because classical cadherins exhibit a high level of conservation in their EC1 domains ( Figure S1A), we tested whether Li-inlA m would recruit another classical cadherin than mEcad in CT26 cells. We labeled CT26 cells with a pan-cadherin antibody, which recognizes the cytoplasmic domain of classical cadherins [31]. CT26 cells were strongly stained with the pan-cadherin antibody ( Figure S1B), indicating that they likely express classical cadherin proteins. Furthermore, this pan-cadherin-immunoreactive protein was recruited in CT26 cells by Li-inlA m but not Li-inlA ( Figure S1B). Immunoblotting and immunostaining revealed that CT26 cells express Ncad ( Figures 2C and D), a classical cadherin known to be expressed in endothelial cells, neurons and some transformed epithelial cells [20]. Importantly, Li-inlA m , but not Li-inlA, recruited Ncad in CT26 cells ( Figure 2D). We next tested other cell lines for Ncad expression. We found that Nme cells (which also express mEcad and are permissive to InlA m -mediated entry), human HeLa cells, and guinea pig 104C1 cells all express Ncad ( Figure 2C). As in CT26 cells, InlA m promoted bacterial entry into HeLa and 104C1 cells, although these two cell lines do not express Ecad and are therefore not permissive to InlA-dependent entry ( Figure S2) [23]. These results suggest that the murinization of InlA confers to this protein the ability to interact with Ncad from different species, and to enter into host cells expressing Ncad.
mNcad is a receptor for InlA m but not InlA
To investigate if mNcad serves as a receptor for InlA m -mediated entry into CT26 cells, CT26 cells were treated with mNcadspecific siRNAs or scrambled control siRNAs. Treatment of CT26 cells with mNcad siRNAs led to a reduced expression of mNcad which correlated with a significantly decreased InlA m -dependent entry ( Figures 3A and B). To directly assess the ability of mEcad and mNcad to act as receptors for InlA m , we used the BHK21 cell line, which is of hamster origin and does not express any known classical cadherin [32], and transfected this cell line with plasmids encoding either hEcad, mEcad or mNcad. As expected, both InlA and InlA m mediated bacterial entry into hEcad-expressing cells ( Figure 3C). Moreover, InlA m mediated entry into mEcadexpressing cells, whereas as previously shown, InlA did not ( Figure 3C) [22]. Most importantly, we also demonstrated that InlA m mediated bacterial entry into Ncad-expressing cells, whereas, as previously shown, InlA did not ( Figure 3C) [18].
To investigate whether the InlA m receptor repertoire extends to other members of classical cadherins, we tested the ability of mouse P-cadherin (mPcad) and VE-cadherin (mVEcad) to serve as receptors for InlA m ( Figure S1A). Neither mPcad nor mVEcad acted as a receptor for InlA m or InlA ( Figure 3C). Taken together, these data confirm that InlA exhibits a species-specific and narrow repertoire for Ecad and mediates entry into hEcad-but not mEcad-expressing cells, and demonstrate that by widening InlA species spectrum from human to mouse Ecad, murinization of InlA extends its receptor repertoire to Ncad.
Murinization of InlA extends the cell tropism of Lm at the intestinal level
In order to investigate if these in vitro results translate into an in vivo phenotype, and study in particular the cell tropism of InlA mexpressing bacteria, we investigated Ncad luminal accessibility at the intestinal epithelium level, which is the portal of InlA-mediated entry of Lm. In contrast to luminally-accessible Ecad which is mostly observed as rings surrounding goblet cells [25], mNcad was accessible on the apical pole of villous M cells ( Figure 4, Movie S1), but not M cells of Peyer's patches (Movie S2) in wt mice. The expression of luminally-accessible Ncad was also detected on the apical pole of villous M cells in E16P KI mice ( Figure S3, Movie S3). These results suggest that InlA m may allow bacteria to target villous M cells upon mouse oral inoculation.
To specifically investigate whether InlA m -expressing bacteria target cells that express luminally-accessible Ncad, we inoculated orally wt mice with Li-inlA or Li-inlA m , and for comparison we inoculated humanized E16P KI mice orally with Li-inlA. As expected from our recent results [25], Li-inlA were found in goblet
InlA m -mNcad interaction has an impact on Lm systemic dissemination in orally inoculated mice
To investigate the impact of InlA m -mNcad interaction on the infection process, we inoculated orally wt and E16P KI mice with Lm-inlA m or Lm. In Lm-infected E16P KI mice in which InlA-Ecad interaction is functional, InlA promoted Lm invasion of the small intestinal tissue and bacterial dissemination to spleen and liver as early as 2 days post infection (dpi) ( Figure 6). In contrast, in Lm-inlA m infected wt mice, in which both InlA m -Ecad and InlA m -Ncad interactions are functional, Lm bacterial loads in the small intestinal tissue, spleen and liver were not significantly increased at 2 dpi compared to Lm-infected wt mice, but were at 4 dpi ( Figure 6). This delayed systemic dissemination was also observed when comparing Lm-inlA m to LmDinlA in E16P KI mice ( Figure S7). These results demonstrate that, although promoting Lm crossing of the wt mouse intestinal barrier, InlA m delays bacterial systemic dissemination relative to InlA in E16P KI mice, and therefore alters the kinetics of Lm infection in vivo. and B) and in hEcad Tg mice ( Figures S8A and B). Importantly, neutrophil infiltration correlated only with InlA m -mediated invasion, and did not reflect bacterial load in the villi, which was actually the highest in Lm-infected humanized mice, in which no neutrophil infiltration was observed (Figures 7A-C, S8A-C). Moreover, a significant increase in IFN-c and IL-1b expression was observed in the intestinal tissue of wt mice infected with Lm-inlA m , whereas no significant increase was observed in Lm-infected wt and humanized mice (Figures 7 D and E). Together, these results indicate that InlA m -Ncad-mediated intestinal invasion per se leads to exacerbated host responses compared to InlA-Ecadmediated intestinal invasion, and are not a reflect of enhanced bacterial tissue invasion.
InlA m -mNcad interaction leads to enhanced intestinal response and compromised intestinal barrier function
We next assessed intestinal barrier integrity upon infection by testing the intratissular diffusion of biotin administered intraluminally (see Material and Methods) [33]. In wt and humanized mice infected by Lm for two days, biotin localized exclusively to the luminal side of the small intestine ( Figures 7F and S8D). In contrast, although the intestinal villi of Lm-inlA m infected wt and humanized mice were not heavily infected, biotin accessed the lamina propria ( Figures 7F and S8D). These findings indicate that InlA m -Ncad-mediated intestinal invasion leads to a disruption of intestinal barrier integrity. Together, these results demonstrate that the murinization of InlA profoundly modifies the pathogenic properties of Lm by altering its intestinal portal of entry, host intestinal responses and intestinal barrier integrity.
Discussion
InlA interaction with Ecad allows Lm translocation across the intestinal epithelium and is therefore a critical event in the development of systemic listeriosis, one of the deadliest foodborne infections in human. Because InlA does not interact with mEcad, the discovery and characterization of this key step were made in species permissive to InlA-Ecad interaction (guinea pig, gerbil) and humanized mouse models (hEcad Tg and E16P KI mouse lines) [6,9]. A genetically engineered Lm strain expressing a murinized InlA (InlA m ) enabling interaction with mEcad in vitro has been proposed to constitute an attractive alternative model to study human listeriosis in wt mice [16]. A practical advantage of this latter system is that it can be readily used to infect several different mouse lines. However, a systematic study comparing the properties of Lm expressing InlA m to that of its isogenic parental strain has not been performed, neither in vitro nor in vivo.
Here we show that InlA m is able to recruit mEcad and mediate mEcad-dependent entry into cultured cells. We also show that InlA m mediates entry into goblet cells of wt mice, which express luminally-accessible mEcad. These results confirm that the S192N and Y369S substitutions confer to InlA a phenotype in wt mice which is observed in humanized mice permissive to InlA-Ecad interaction [25].
Importantly, we also uncover that InlA m is able to recruit Ncad and mediate Ncad-dependent internalization. This artifactual interaction translates in vivo into InlA m -dependent targeting of villous M cells, intestinal inflammatory responses, disruption of intestinal barrier integrity and delayed bacterial systemic dissemination in wt mice, as well as in humanized mice. Such stricking phenotypes are not observed in humanized mice orally-inoculated with wt Lm, suggesting that they depend on InlA m -Ncad interaction and invasion of villous M cells, but not on InlA m -Ecad interaction and invasion of goblet cells (Figure 8). It is important to note that these phenotypes are also present in E16P KI and hEcad Tg mice infected with Lm-inlA m , indicating that intestinal inflammation is a direct consequence of InlA m -mediated intestinal invasion, and proving that the absence of inflammation in Lm-infected humanized mice is not a side effect of mouse humanization, but is a genuine property of InlA-dependent intestinal invasion. These results are in agreement with the observation by Wollert et al. that infection with Lm-inlA m leads to severe intestinal inflammation and tissue damage in wt mice [16], and with our earlier observation that InlA has little impact on Lm intestinal responses in mice permissive to InlA-Ecad interaction [6,27]. This indicates that the murinization of InlA, in addition to broadening the host range of Lm, also extends its receptor repertoire to another member of the classical cadherin family, Ncad, therefore modifying its cell tropism, host responses and the dynamics of infection.
The engineering of InlA m was based on the rational protein design of a modified InlA that would increase InlA-hEcad binding affinity [16]. Indeed, S192N and Y369S substitutions in InlA lead to a 6,700-fold increase in the binding affinity of InlA to hEcad [16]. Here we have shown that this does not translate into increased invasion of hEcad-expressing cells. Before drawing this conclusion, we ensured that the BHK21 cell line we used does not express other cadherins than the one we intended to study. A possible reason for the observed increased level of invasion of Lm-inlA m in Caco-2 cells observed by Wollert et al. is the coexpression of Ecad and Ncad in these cells [21]. These results suggest that InlA-hEcad interaction, although it is of relatively low affinity (K D = 864 mM) [16], has been naturally selected to mediate an optimal level of infection.
We have shown that InlB, another major invasion protein of Lm, does not play a significant role for the crossing of the intestinal barrier [23]. In contrast, InlB has been reported to promote Lm expressing InlA m to invade intestinal villi [34]. Our results shed light onto these apparent contradictory results and raise the possibilty that InlA m -Ncad mediated invasion of villous M cells may involve the InlB pathway.
Shigella flexneri, the etiological agent of bacillary dysentery is associated with strong polymorphonuclear infiltration, severe local inflammation, disruption of intestinal barrier integrity, yet no systemic dissemination [35,36]. In contrast, listeriosis in human and humanized mice is characterized by the paucity of intestinal symptoms, the absence of polymorphonuclear intestinal infiltration, little local inflammation, the absence of intestinal barrier disruption, but systemic dissemination [6,27,36,37]. We have demonstrated that Lm-inlA m triggers pro-inflammatory response and disrupts epithelial integrity in intestinal tissue of wt and humanized mice, and exhibits a delayed systemic dissemination, compared to Lm-infected humanized mice. These observations strongly suggest that the targeting of villous M cells by InlA mexpressing bacteria triggers pro-inflammatory host responses which contain bacterial invasion but lead to intestinal epithelium damages. This fits with the observation that antigen delivery via villous M cells stimulates immune reponses [38]. Like InlA m , Als3 is a Candida albicans invasin that binds both Ecad and Ncad to invade host cells [39]. Candida albicans has been shown to favor gut inflammation and promotes food allergy accompanied by gut epithelial barrier hyperpermeability, the underlying mechanisms of which are so far unclear [40,41]. Our study indicates that Candida albicans may use Als3 to target Ncad-positive villous M cells, and thereby trigger intestinal inflammation. The specific functions of villous M cells remain poorly understood, yet villous M cells are a particularly abundant constituent of the intestinal epithelium. Our results show that InlA m -and Als3-expressing microorganisms would be particularly instrumental to study villous M cell functions.
Repeated infection of mice in vivo or mouse cells in vitro allows the obtention of ''murinized'' pathogens adapted to the mouse. Despite the great adaptability of microbes, evolutionary constraints limit pathogen variability [42]. A mutation beneficial under certain environmental conditions may end up as disadvantageous in another, highlighting the fine-tuning of host-microbe interactions. The structure-based rational design of InlA m was proposed as a subtle and elegant way to electively ''murinize'' a microbial ligand with least impact on the pathogen. However, we provide here evidence that the rationally designed InlA m has gained the unfortunate ability to interact with another surface protein than its cognate receptor Ecad. Even though InlA m mediates Lm crossing of the intestinal barrier, a phenotype which is strictly dependent on InlA-Ecad interaction, the way by which Lm crosses the intestinal barrier in an InlA m -dependent manner differs from what observed with wt Lm in humanized mice and humans, as does the resulting infection process. This illustrates that murinization of human-specific pathogens, although an elegant and rational approach, may unfortunately mislead rather than ease the understanding of human infectious diseases' pathophysiology. Caution must therefore be exercised before engineering and using ''murinized'' pathogens to study human infectious diseases.
Bacterial and cell culture
Bacterial strains, plasmids and primers are listed in Table S1. Note that the sequences of inlA, inlA m in Lm and in Li were confirmed by sequencing, as well as the integration sites of inlA and inlA m in Li and the deletion site of inlA in Lm. Listeria and Escherichia coli strains were respectively cultivated in BHI and LB at 37uC with shaking at 180 rpm. To deliver plasmids into Li, E. coli S17-1 (colistin and nalidixic acid sensitive) cells were transformed with the plasmids followed by conjugation with Li (colistin and nalidixic acid resistant). Mammalian cell lines used in this study were routinely cultured at 37uC in 5% CO 2 . Except for the culture medium for BHK21 which was supplemented with 5% fetal bovine serum, all the cell culture media were supplemented with 10% fetal bovine serum. Human epithelium LoVo cells were cultured in F12K nutrient GlutaMax medium. Mouse epithelium Nme cells were cultured in DMEM GlutaMax medium supplemented with 10 mg/ml insulin. Mouse CT26 and guinea pig 104C1 cells were cultured in RPMI1640 GlutaMax medium supplemented with HEPES buffer and sodium pyruvate. Human HeLa cells were cultured in MEM GlutaMax medium. Hamster BHK21 cells were cultured in GMEM GlutaMax medium supplemented with tryptose phosphate buffer and HEPES buffer. All the culture medium and related chemicals were purchased from Gibco (Invitrogen). Transient transfection of mammalian cells was performed with jetPRIME transfection kit (Polyplus transfection). The scrambled (sc-37007) and mouse Ncad specific siRNAs (sc-35999) were purchased from Santa Cruz. For the transfection of siRNAs, mouse CT26 cells were seeded into the 24well plates for 1 day and then transfected with scrambled siRNAs (25 nM) or mNcad-specific siRNAs (25 nM) followed by 1 day incubation and replacement of transfection medium with growth medium another 1 day of incubation before infection. For the transfection of plasmid DNAs, BHK21 cells were transiently transfected with pcDNA3 expression vector harboring the cDNAs of each cadherin (1 mg DNA for each well in a 24-well plate) followed by 2 days incubation before infection.
Construction of plasmids
The strategy to express inlA or inlA m in Li is as described based on integrative plasmid pAD containing a constitutive promoter [43]. The primers EagI_UTRhly-F and UTRhly-R were used to amplify the hly 59 UTR of Lm EGDe. Full length of inlA and inlA m were amplified from the genomic DNA of Lm EGDe and Lm-inlA m , respectively, with the primers UTRhly_inlA-F and SalI_inlA-R2. The resulting PCR products were ligated to hly 59 UTR by splicing-by-overlap-extension (SOE) PCR. The final SOE PCR products, containing the entire hly 59 UTR sequence fused to the start codon of the inlA (hly 59 UTR-inlA) or inlA m , (hly 59 UTR-inlA m ), were then cloned in pCR-Blunt (Invitrogen) and verified by sequencing. Plasmids containing correct sequence and pAD-cGFP were digested by EagI and SalI. The backbone of pAD-cGFP was ligated with hly 59 UTR-inlA and hly 59 UTR-inlA m to form pAD-inlA and pAD-inlA m .
Invasion assay
Cell suspensions from confluent monolayers were seeded at a concentration of 5610 4 cells per well in 24-well tissue culture plates and grown for 40-48 hr in an antibiotics-free medium at 37uC. Lm and Li strains were grown to OD600 at 0.8 and 0.6 in BHI, respectively. Bacterial culture were then washed with PBS and diluted in cell culture medium without serum. Bacterial suspensions were added to the cells at a multiplicity of infection (MOI) of approximately 50 and incubated for 1 hr. Following wash with complete medium, 10 mg/ml of gentamicin was added to kill the extracellular bacteria for 1 hr. The cells were then washed by complete medium and PBS, and homogenized in PBS supplemented with 0.4% Triton X-100, followed by serial dilution and colony forming units (CFUs) counting. For cadherin recruitment assay, the procedure was the same as the invasion assay except that the cell attachment buffer (HEPES 20 mM, NaCl 150 mM, glucose 50 mM, MgCl 2 1 mM, CaCl 2 2 mM, MnCl 2 1 mM, 0.1% BSA) was used for infection and PBS (Ca 2+ / Mg 2+ ) (Gibco) was applied to wash the non-attached bacteria stringently followed by fixation.
Animals
Eight to 10-week old C57BL/6 female mice (JANVIER) and isogenic mEcad E16P KI female mice were food restricted overnight but allowed free access to water. Lm culture was prepared as described [6], and inoculated with a feeding needle intragastrically [44]. Mice were then immediately allowed free access to food and water. All the procedures were in agreement with the guidelines of the European Commission for the handling of laboratory animals, directive 86/609/EEC (http://ec.europa. eu/environment/chemicals/lab_animals/home_en.htm) and were approved by the Animal Care and Use Committee of the Institut Pasteur, as well as by the ethical committee of ''Paris Centre et Sud'' under the number 2010-0020.
Immunofluorescence labeling and immunoblotting
Preparation of tissue sections and whole mount tissues were as described [9,25]. The following antibodies and fluorescent probes were used for immunostaining and Western blot: anti-hEcad clone HECD-1 mouse monoclonal antibody (Invitrogen), anti-mEcad clone ECCD-2 rat monoclonal antibody (Invitrogen), anti-b-actin clone AC-15 mouse monoclonal antibody (Sigma), anti-Ncad clone 32/N-cadherin mouse monoclonal antibody (BD), anti-Ncad clone GC-4 mouse monoclonal antibody (Sigma), anti-pan cadherin clone CH-19 monoclonal antibody (Sigma), anti-M cell clone NKM 16-2-4 rat monoclonal antibody (Miltenyl Biotec), R6 anti-Li rabbit polyclonal antibody and R11 anti-Lm rabbit polyclonal antibody [45], Rat anti-mouse Ly-6G (BD), wheat germ agglutinin (WGA) conjugated with Alexa Fluor 647 (Jackson was extracted from the ileum loops of infected or PBS-treated mice 48 hr post infection (n = 4). Following reverse transcription reaction, gene expression was quantified by qPCR with normalization to the GAPDH transcript. Values are expressed as a mean + SD of the fold change relative to that in PBS-treated mice. No significant difference on IFN-c (D) and IL-1b (E) expression was observed among PBS-treated, Lm and LmDinlA-infected E16P KI mice. In contrast, Lm-inlA m oral infection induced 5 to 15 fold increase of IFN-c and IL-1b gene expression in intestinal tissue compared to Lminfected and PBS-treated wt mice. Statistical analysis was performed with the unpaired Student's t test. (F) Biotin (red) penetration into intestinal lamina propria was done to address intestinal barrier integrity during infection. Mice were sacrificed 2 days post infection. Biotin was injected into ileum loop followed by PBS wash and fixation. Tissues were stained for Lm (green, highlighted by the arrows) and counterstained with WGA (grey) for goblet cells, respectively. Biotin is located within lamina propria of the villi from Lm-inlA m infected mice but not Lm infected wt and E16P KI mice. Scale bar, 20 mm. See also Figure S8. doi:10.1371/journal.ppat.1003381.g007
Biotin penetration experiment
Biotin was used as a molecule to address the integrity of intestinal epithelium as described previously [33]. Briefly, 2 mg/ ml of EZ-link Sulfo-NHS-Biotin (Pierce) in PBS was slowly injected into the lumen of ileum loop via the open end adjacent to cecum immediatedly after removal of the entire ileum. After 3 min, the loop was opened followed by PBS wash and 4% paraformaldehye fixation.
Intestinal tissue genes expression quantification
Four mice for each condition were sacrificed 2 days post infection. 1 cm-long of ileal loop of each animal was applied for RNA extraction. The RNA isolation, reverse transcription and quantitative real time PCR (qRT-PCR) were performed as described [46]. Primers used for qRT-PCR were pre-designed, validated RT 2 qPCR primer pairs (SABioSciences, Qiagen) as follows: IFNG (IFN-c, PPM03121A), IL1B (IL-1b, PPM03109F) and GAPDH (PPM02946E).
Statistical analysis
Values are expressed as mean + SD. Statistical comparisons were made using the unpaired Student's t test, Mann-Whitney u test or the x 2 test as indicated. p values,0.05 were considered significant. Significant differences are marked with an asterisk for p,0.05, two asterisks for p,0.01, three asterisks for p,0.001 and four asterisks for p,0.0001. Figures S4A and B, respectively) and in E16P KI mice infected by Lm (C, related to Figure S4C) presented in Figure S5 were shown. These images demonstrate that the bacteria highlighted in the Figure S4 The number of bacteria in each infected villus was also quantified. Bacteria load of Lm in the intestinal villi was higher than that of Lm-inlA m in both E16P KI and hEcad Tg mice upon oral infection 24 hpi. In order to compare the result of Lm-inlA m with Lm in E16P KI mice, the data of Lm-infected E16P KI mice shown here in B and C were from figure 7B and C, respectively. Statistical analysis was done with Mann-Whitney u test (n = 20 villi from 2 mice). (D) Biotin was injected into ileum loop followed by PBS wash and fixation. Tissues were stained for Lm (green, highlighted by the arrows) and counterstained with WGA (grey) for goblet cells and epithelia. Biotin is located within lamina propria of the villi from Lm-inlA m infected mice but not Lm infected mice. Scale bar, 20 mm.
(PDF)
Movie S1 Luminally accessible Ncad is expressed on the apical poles of villous M cells in wt mice, related to Figure 4. Whole mount intestinal tissue of a wt mouse was stained before permeabilization for accessible mNcad (green) and NKM 16-2-4 for M cells (red), and after permeabilization for nuclei (blue) and WGA for goblet cells (grey). Intestinal villus is oriented with the villus tip facing the viewer. The luminally accessible apical surface of villous M cells is labeled with the anti-Ncad antibody. Images were acquired as a z stack by confocal microscopy and assembled as a three-dimensional reconstruction with Imaris software.
(MOV)
Movie S2 Peyer's patch M cells do not express luminally accessible Ncad in wt mice, related to Figure 4. Whole mount intestinal tissue of a wt mouse was stained before permeabilization for accessible mNcad (green) and NKM 16-2-4 for M cells (red), and after permeabilization for nuclei (blue) and WGA for goblet cells (grey). The luminally accessible apical surface of Peyer's patch M cells is not labeled with the anti-Ncad antibody. Intestinal Peyer's patch is oriented with the tip facing the viewer. Images were acquired as a z stack by confocal microscopy and assembled as a three-dimensional reconstruction with Imaris software. (MOV) Movie S3 Luminally accessible Ncad is expressed on the apical poles of villous M cells in E16P KI mice, related to Figure 4. Whole mount intestinal tissue of an E16P KI mouse was stained before permeabilization for accessible mNcad (green) and NKM 16-2-4 for M cells (red), and after permeabilization for nuclei (blue) and WGA for goblet cells (grey). Intestinal villus is oriented with the villus tip facing the viewer. The luminally accessible apical surface of villous M cells is labeled with the anti-Ncad antibody. Images were acquired as a z stack by confocal microscopy and assembled as a three-dimensional reconstruction with Imaris software. (MOV) Movie S4 Li-inlA m targets both villous M cells and goblet cells in the intestinal villi upon oral inoculation of wt mice, related to Figure 5. Ileal loop of a wt mouse orally infected by Li-inlA m was taken 5 hr post infection, followed by fixation and staining for Li (green), M cells (red), goblet cells (grey) and nuclei (blue) after permeabilization. Images were acquired and assembled as described for Movie S1.
(MOV)
Movie S5 Lm-inlA m targets goblet cells in the intestinal villi upon oral inoculation of wt mice, related to Figure 5. Ileal loop of a wt mouse orally infected by Lm-inlA m was taken 5 hr post infection, followed by fixation. Vibratome section was stained for Lm-inlA m (green), M cells (red), goblet cells (grey) and nuclei (blue) after permeabilization. Images were acquired and assembled as described for Movie S1. (MOV) Movie S6 Lm-inlA m targets villous M cells in the intestinal villi upon oral inoculation of wt mice, related to Figure 5. Ileal loop of a wt mouse orally infected by Lm-inlA m was taken 5 hr post infection, followed by fixation. Vibratome section was stained for Lm-inlA m (green), M cells (red), goblet cells (grey) and nuclei (blue) after permeabilization. Images were acquired and assembled as described for Movie S1. (MOV) Movie S7 Lm targets goblet cells in the intestinal villi upon oral inoculation of E16P KI mice, related to Figure 5. Ileal loop of a wt mouse orally infected by Lm was taken 5 hr post infection, followed by fixation. Vibratome section was stained for Lm (green), M cells (red), goblet cells (grey) and nuclei (blue) after permeabilization. Images were acquired and assembled as described for Movie S1. (MOV) | 8,590.4 | 2013-05-01T00:00:00.000 | [
"Biology"
] |
Presence of Toxocara Eggs on the Hairs of Dogs from Southwest Nigeria
The close contact between dogs and humans poses a high risk of exposure to Toxocara canis eggs which can lead to Visceral Larva Migrans (VLM) syndrome. The aim of the study was to assess whether the hair of domestic dog in Nigeria was contaminated with eggs of T. canis, a zoonotic parasite. Samples of hair from 267 dogs of different ages comprising local and exotic breeds were collected from the neck, back and anal regions between April 2015 and February 2016 at Ile –Ife and Ibadan, Southwest Nigeria. Eggs were recovered from the hair using a previously standardised detection method. Eggs were found on the hair of 48 (18.0%) dogs. A total of 188 T. canis eggs were recovered from the hair of infected dogs. None of the eggs found were embryonated. 62.5% of the infected were under one year of age. As no domestic dogs which were positive from hair samples had negative faecal samples, this indicates that the presence of T. canis eggs in hair is probably due to self-contamination. As T. canis eggs were found on the hair of domestic dogs, direct contact with dogs may be a potential risk factor for transmission of T. canis eggs to humans.
Introduction
Toxocara canis is one of the most common gastrointestinal parasites of domestic dogs and other canids. Infected dogs can shed large number of eggs into the environment causing infection in other dogs and in paratenic hosts including small mammals and humans [1]. The presence of potentially infective eggs of T. canis in the environment is one of the key routes of transmission to humans [2].
The most widely recognised source of human infection is the ingestion of contaminated food, water and soil. Children in their first decade of life are the most vulnerable due to their geophagic behaviour and mouthing of contaminated objects which is further linked to high risk of exposure at playground contaminated with dog faeces [3]. In addition infection can also occur following ingestion of partial or whole paratenic host such as raw livers of domestic animals such as chicken, ducks, rabbits, sheep and cattle [4,5] and ingestion of raw vegetables or fruits [6,7].
Direct contact with dogs that harbour a patent Toxocara infection is usually not considered a risk, because the eggs need to mature 3-6 weeks before they are infective [8][9][10]. Another proposed mode of transmission in recent studies is contact with embryonated eggs on a dog's hair [11][12][13][14][15][16].
Various surveys conducted worldwide indicate that prevalence of T. canis in canid definitive host ranged from 86%-100% in puppies and 1%-45% in adult dogs [17][18][19][20]. In Nigeria, studies have revealed high prevalence of T. canis with values of 80% in puppies [21] and 33.8%-41.7% in adult domestic dogs [22][23][24]. The seroprevalence of toxocariasis in human population has been reported to fluctuate between 2.2% and 92.8% depending on countries, study groups age and socio-cultural level [25]. There are only two studies reported so far in Nigeria, and the prevalence of the recent study was reported to be 86.1% [26].
Wolfe and Wright [11] stated that if eggs could embryonate on the hair of a dog, direct contact with dogs could be seen as an additional route of transmission. Recent studies from different parts of the world have reported presence of both unembryonated and potentially infective eggs on the hair of dogs [8,12,13,27]. Currently, there is dearth of information regarding the presence of T. canis eggs in the coat of Nigerian dogs.
The aim of this present study was to investigate whether Toxocara eggs were present on the hair of domestic dogs sampled from different locations within Southwest Nigeria as well as to determine their developmental stage and related with the animals' characteristic such as age, sex, breed type.
Sample collection and egg detection
Reconnaissance visits to identify 500 dog-owning households in Ilesa and Ibadan, Southwest Nigeria were carried out between January and March 2015 for exploratory discussion on the purpose of the study. Based on proximity and geographic location, a total of 267 dogs were sampled from different locations within Ilesa and Ibadan between April 2015 and February 2016. The age and sex of each dog was recorded. The dogs were classified into puppies (age ≤ 6months), young dogs (age 7-12 months) and adults (>12 months). Dog hair samples from local breeds (African Shepherd) and exotic breeds (Alsatian, Mongrel, and Rottweiler) were used in the present study.
The hair samples were taken from three different locations on the dog's body; the neck, back and anal region. Each hair was taken using a scissors which was washed thoroughly with hypochlorite solution between each hair sample taken. Each of the three hair samples taken from the dog was placed in individual re-sealable labelled with dog's ID number. The name, age, sex, breed type and site location of each dog was recorded. Each dog was also assigned with an identification number (ID). Hair samples were weighed and the weight ranged from 0.06-0.96 g. Besides collecting hairs, faeces from 79 dogs were also collected into clean, sterile 5 ml specimen bottles and processed using modified Kato-Katz technique [28] and then examined for T. canis eggs.
Eggs were recovered from the hair using the technique described by Wolfe and Wright [11] with some modification. Each hair sample was added 300 ml distilled water with one or two drops of Tween 40, which separates any eggs from the hair. The samples were mixed in a homogeniser (Mod: TH-220 Omni, International Marietta, GA, USA) for 3 mins, they were then poured onto a 250 μm sieve. The samples were then washed under dripping water while on the sieves, and the sediment collected was transferred to centrifuge tubes. They were then centrifuged at 1500 rev for 10 mins, the supernatant was decanted and the remaining sediment with a drop of distilled water was collected using Pasteur Pipette and transferred onto clean microscope slides. The sediments were examined under a light compound microscope at X100 magnification to identify the T. canis eggs. The eggs were classified into two groups; unembryonated and embryonated.
Statistical analysis
Data were expressed as the prevalence of Toxocara eggs on the hair, the mean eggs per gram of hair (epg) ± SEM (Standard Error of Mean). The Chi-square (ᵡ²) test was used to test the existence of associations between the categorical variables (age of the dogs, sex, breed type and mode of life) and the prevalence of T. canis on the hair. The effect of factors affecting the presence of the T. canis eggs was performed using the univariate logistic regression model. All data were analysed using SPSS version 17.
Results
A total of 801 hair samples were examined from 267 dogs over a period of 10 months. Toxocara eggs were found on the hair of 48 (18.0%) dogs of which 62.5% were under one year of age ( The statistical analysis showed that there were significant differences between free-roaming and kennelled dogs (p<0.05) and between local breeds (African shepherd) and exotic breeds (p<0.05) regarding the presence of T. canis on dogs/hair samples. However, there were no significant differences among the age group and between genders (p>0.05) (
Discussion
In this study, T. canis eggs were found in 18.0% of dogs' hair samples examined. This result is comparable to the 21.56% and 19.16% prevalences reported by two authors [13,29] respectively. The prevalence obtained in this study is lower than those reported in previous studies [8,11,15]. In the Turkish study, a higher prevalence of 49% in a mixture of stray and owned dog was reported [15]. Wolfe and Wright [11] and Roddie et al. [12] reported higher prevalences of 25% and 67% of dogs harbouring eggs in their coats, respectively. These higher prevalences may be as a result of the focus upon stray dogs in the two studies with a mixture of stray and owned dogs being sampled by Wolfe and Wright [11] and only stray dogs by Roddie et al. [12]. The higher prevalence in stray dogs is most likely attributable to the lack of anthelminthic treatment, contact with soil and grooming given to these animals.
It has been reported that age is not related with the contamination of Toxocara eggs of the hair [11,12,30] or in other words the eggs on the hair can be seen in all age groups but that it is more common in less than one year old [9,13,14,29]. The results of the present study suggested that young and adult dogs are more likely to harbour T. canis eggs on their hair than puppies. A similar finding was reported by Oge et al. [16]. It has been reported that young and adult dogs are susceptible to Toxocara infection, even if they have been previously infected as puppies [31]; and young and adult dogs may still pose a risk to human health. In this study, both sexes seemed to have similar resistance to Toxocara infections. Previous studies by different authors [12,13,16,27,29,30] reported no significant difference in the prevalence between genders.
Studies on the dog breed effect on parasite prevalences are limited. In this study, the prevalence of T. canis in dog's hair samples was significantly higher in local breeds than in exotic breeds. A similar finding was reported by Anene et al. [32] where prevalences and intensities of different parasite infections were significantly higher in local breeds and their crosses than in exotic breeds. Another author [33] reported that the prevalence of most parasites was similar for dogs of mixed-breed and for dogs of a defined-breed, except for Cystoisospora spp. and T. canis, which showed a significantly higher prevalence in mixed-breed dogs.
In the present study, the highest number of eggs was recovered from the neck, followed by the back, while the lowest number was recorded in the anal region. This trend could be explained by the dog's playing behaviour resulting in increased soil contact [8]. Other behaviour such as scent rolling could also be responsible for increase contact with soil.
In this study, we found no embryonated eggs on the hair of domestic dogs. This finding was similar to that of some previous studies that also reported no embryonated eggs [8,9]. A study by Devoy Keegan and Holland [34] found that if unembryonated T. canis eggs can develop fully on the hair under controlled conditions, then these developed eggs would pose a risk. However, some studies have reported embryonated eggs on the hair of dogs which suggested that direct contact with dogs may be important risk factor [13,15,16,29].
Conclusion
Although soil contamination with Toxocara eggs is significantly responsible for human toxocariasis, ingestion through direct contact with dog has been suggested as an alternative route of transmission for this zoonosis. This study has confirmed the presence of T. canis eggs in the hair of domestic dogs from Ibadan and Ilesa, Southwest Nigeria. Hence, direct contact with these dogs such as petting may pose risk especially to the children and more dangerous than soil contamination for the transmission of toxocariasis in humans. Education of the public about zoonotic potential of T. canis, the prevention of environmental contamination with dogs' faeces, reduction of the stray dog and the use of anthelminthic and animal hygiene can help to prevent cases of visceral larva migrans (VLM) in humans. | 2,633.4 | 2016-08-12T00:00:00.000 | [
"Biology"
] |