text stringlengths 1.23k 293k | tokens float64 290 66.5k | created stringdate 1-01-01 00:00:00 2024-12-01 00:00:00 | fields listlengths 1 6 |
|---|---|---|---|
Planar Antenna Design for Internet of Things Applications
Planar antenna plays an important role in Internet of Things (IoT) applications because of its small size, low profile and low cost. In IoT wireless module, antenna is typically occupied one-third size of overall circuit; therefore, planar antenna, i.e., integrated on printed circuit board (PCB) is one of attractive design. In this chapter, the fundamental of antenna is firstly discussed. Printed Inverted-F Antenna (PIFA) is taken as an example to explain the design process of simple planar antenna and a size-reduced 2.4 GHz ISM band PIFA is used for experimental explanation for the short-range IoT applications. Finally, a wideband antenna is shown as wideband planar antenna for short-range and long-range IoT applications.
Introduction
Internet of Things (IoT) aims to connect existing sensors and devices to the internet in real time.Constantly collecting the information from surrounding help users making wiser decision, leading to higher quality in daily life and higher efficiency in industries [1][2][3][4][5].Short-range and long-range wireless techniques are suitable used in different IoT applications.One of the wireless techniques for short-range IoT applications is bluetooth low energy (BLE) under 2.4 GHz industrial, scientific, and medical (ISM) band because of its low power consumption [6,7].On the other hand, cellular communications provide larger coverage in long-range IoT applications but they have high power consumption [8].In RF/microwave modules, the size reduction and performance enhancement of the antenna are key design parameters, therefore, planar antenna, i.e., integrated on printed circuit board (PCB) is a suitable antenna for IoT applications.
In this chapter, the fundamental of antenna is firstly discussed.Printed Inverted-F Antenna (PIFA) is taken as an example to explain the design process of simple planar antenna and a size-reduced 2.4 GHz ISM band PIFA is used for experimental explanation for the short-range applications.Finally, a wideband antenna is shown as another approach on wideband planar antenna for short-range and long-range IoT applications.
Fundamental of antenna 2.1 Introduction of dipole antenna
Dipole antenna is one of the simple antenna that demonstrates the fundamental concept of antenna and it is a foundation of many practical antennas [9].In Figure 1, dipole antenna is configured with two symmetric conductive arms carrying radio frequency current.Its length is required to be half wavelength (0.5λ) for maximum response and a half wavelength corresponds to approximately 6 cm (in air) in the 2.4 GHz ISM band.The current across the dipole generates the electromagnetic wave radiation propagating from the dipole arms.
Ground plane
A good conductive and reflective ground plane reduces the half wavelength dipole antenna to be quarter wavelength antenna [11].This ground plane plays the same role of one of the arms and becomes a part of the antenna.This ground plane is considered as a mirror.In an optical mirror, if an object is placed in front of a mirror, a virtual image is generated with the same size and the same distance behind the mirror.In this case, if a signal source is placed above the ground plane, a virtual image of the source is generated with same current flowing direction and same phase shown in Figure 2, therefore, a quarter wavelength antenna and a ground plane form a half wavelength antenna.A well-designed ground plane should be very much larger in its dimensions than the half wavelength itself [11].The active antenna measurement such as the over-the-air (OTA) measurement is a good method to indicate the overall antenna performance in the complete products compared to passive antenna measurement [12].If the ground plane is significantly small, poor radiation performance is predicted in active antenna measurement.
Effect on length
An electric field with wavelength λ1 propagates toward to a dipole antenna with length L equals to 0.5λ1.This induces a sinusoidal current distribution shown in Configuration of a half wavelength dipole antenna [10].
Figure 3(a), given that the current distribution on the dipole is uniform.If the incident wave has wavelength λ2 which is much longer than the dipole length, for example L = 0.1λ2, current distribution is induced in triangular shape.The maximum current occurs at the center feed point and decreased linearly toward two ends to zero, as shown in Figure 3(b).
For a dipole with uniform current distribution, the radiation resistance R rad in free space is given by the following well-known equation [9,10]: The radiation resistance of Figure 3(b) is much smaller than that of Figure 3(a) based on Eq. (1).Low radiation resistance is an indication of inefficient radiation.Most of the power is not radiated by the antenna when the length of antenna is not designed based on the wavelength of incident signal.This poor radiating condition occurs when the antenna operates not equal to its resonant frequency.Matching network is usually used to maximize the power transfer from the radio transceiver to the antenna [10].Matching network sometimes is used to tune the operating frequency back to the desired value if the resonant frequency is little shifted to desired frequency.
Printed inverted-F antenna (PIFA)
The printed inverted-F antenna (PIFA) is one of common planar antennas used in the commercial and medical devices because of its small size, low profile and low cost [13][14][15][16][17][18][19][20].The typical PIFA structure is shown in Figure 4. Its working principle is same as monopole antenna with quarter-wave long along the main resonant line in Figure 4, therefore, the size of the ground plane is also an important part of antenna.It has a shorting feed point at the end of the main resonant line.This folded part introduces capacitance to the input impedance of the PIFA which is canceled by the shorting feed point.This foldable part, therefore, reduces the antenna size.The matching network in Figure 4 is used for maximum power transfer and, hence, efficient radiation [10].Lump elements are normally used in matching network to minimize the size.In this section, PIFA is used as example of planar antenna since PIFA fulfills the requirements of IoT applications.
Meandered PIFA design
Figure 4 shows a typical PIFA structure on a printed circuit board (PCB) which is indicated with the dotted area at the PCB upper layer.Meandering line is commonly used to increase the total length in antenna design.The meandering line in Figure 5 is used to replace the main resonant line in PIFA shown by combination of horizontal and vertical lines to form multiple turns.
The requirements of the PIFA are the operation frequency, power transmission efficiency, size and even cost.Simulation, fabrication and measurement are conducted until the antenna fulfills the defined application requirements.In general, a well-designed PIFA has the feature of having resonance at the operation frequency and good return loss, i.e., effective power transmission to antenna and compact in size.High directivity sometimes is considered in certain situation.However, it will not be discussed here since most of planar antennas are omni-directional transmission and reception instead of unidirectional antennas.The basic design rules and antenna performance characterization methods are addressed by case study in 2.4 GHz ISM band.The operation frequency of the antenna is governed by the basic dispersion relation c = fλ.The letter c represents the speed of electromagnetic wave in the air, which is a constant if only consider the wave traveling in single medium.In previous section, it shows that a dipole antenna resonates when the physical length of antenna equals to the quarter wavelength of incident signal and a sufficiently large ground plane form the mirror image under the plane.The length of the resonant line occupies a considerable area on the PCB, which is around one third of a wireless module.Proper selection of traces' length and width reduces the occupied area and impedance matching network for maximum power transfer [13].
Simulation and experimental results
Powerful computer simulation tools are used to drastically reduce the design time.Advanced Design System (from Keysight) is one of the electromagnetic (EM) simulators used to estimate the performance of certain designs in this chapter.Normally, there is small variation between the simulation and measurement results because of the fabrication variation, material variation and connectors mismatch, etc.There is limited effect on frequency below 6 GHz, however, this becomes significant when the frequencies are in millimeter wave.Two antennas are designed based on structure in Figure 5.The dimensions of these two examples (called Antenna PCB A and Antenna PCB B) of PIFA shown in Table 1 and in Figure 6.
Antennas in Figure 6 were simulated on FR4 substrate with a dielectric constant of 4.6 and 0.8 mm thickness.The PCB design mainly contains PIFA, ground plane, transmission line and 3.5 mm SMA connector.The size of PCB, 18.8 mm× 43.2 mm was chosen, which is the normal size of a 2.4 GHz ISM band wireless module.At end of meandering line is 4.2 mm for Antenna PCB A and 2.9 mm for Antenna PCB B. The resonance frequency of Antenna PCB A is expected to be lower due to the longer trace.Antenna PCB C was fabricated to calibrate the transmission line by the port extension measurement.The simulated results of Antenna PCB A and Antenna PCB B are shown in Figure 7 and the return loss indicates the resonant frequency of the antennas [9].The input feed point in the simulation is at Port A in Figure 6(a) which is without the transmission line and connector but others are the same as in Figure 6(a).
In Figure 7
Wideband planar antenna: foldable and non-foldable 4.1 Wideband planar antenna for IoT applications
Various IoT applications use different solutions to connect devices and sensors.Low power technologies such as Bluetooth and Zigbee are preferred for short-range applications because of their low power usage.Cellular communication technologies sometimes are required used for large coverage and high data rate applications Planar Antenna Design for Internet of Things Applications DOI: http://dx.doi.org/10.5772/intechopen.92456despite its large power consumption.Multiple narrowband antennas are needed when different technologies are used in IoT applications.One wideband antenna, therefore, is an attractive approach to replace multiple narrowband antennas.Different wideband structures were proposed to combine different frequency bands into one individual wideband antenna to serve different technologies in order to reduce the size and simplicity [21][22][23].The wideband antenna is still large to be used in portable devices, therefore, foldable design [21] provides flexibility of IoT products as well as further size reduction.
Wideband planar antenna design
Dipole antenna in Figure 1 could be extended to be a wideband antenna.Two conductive arms are replaced by thicker wire or even a plane to extend the bandwidth.One of example for wideband planar foldable and non-foldable antennas is shown in Figure 9. Table 2.
Gain between dielectric and PCB antennas [10] (includes the area of matching network, but not the ground plane).
Parameters used in the wideband planar foldable and non-foldable antennas.This planar antenna consists of two rectangular metal planes.The important parameters could be tuned in this design are width W, length L and gap G.The length is used for tuning in this example as it has significant impact on the performance of antenna.All parameters are fixed in Table 3 except the length L which is the parameter chosen to be tuned for foldable and non-foldable antennas.The foldable design was fabricated in the metal sheet and the non-foldable design was fabricated on the FR4 substrate with a dielectric constant of 4.6 and thickness of 0.8 mm.The simulated results of return loss are shown in Figure 10 with different lengths of sheet L. In Figure 10, the frequency range is shifted to the lower side with a longer length L because the length L is closer to the quarter-wavelength of a lower frequency.Figure shows the comparison between simulated and measured results of wideband planar foldable and non-foldable antennas.Figure 11(a) shows the simulated and measured results of the fabricated foldable antenna which shows that the simulated and measured results are close to each other with the bandwidth of 76% from 1.3 to 2.9 GHz.This range covers the applications in GPS, the 2.4 GHz ISM band, and the general 3GPP WCDMA bands and LTE bands.Figure 11(b) shows the simulated and measured results with L equal to 36 and 41 mm (same width of W = 25 mm).Simulated and measured results show that they are close to each other with the bandwidth of 76% from 1.35 to 2.75 GHz, which is little worse than the foldable design in Figure 11(a).The maximum gain of the non-foldable is between 2.5 and 3.5 dBi.
Conclusion
The architecture of the PIFA on PCB with meandering line was shown.The measurement results of return loss and gain performances shown that it has better performances compared to the dielectric antennas as well as without any extra matching components.When only single communication technology is used in IoT product, PIFA is recommended.Using meandering line can reduce the antenna size as well as keeping the performance.PIFA design, therefore, is suitable for ISM band and other IoT applications.In the product utilizing numbers of communication technologies at same times, one wideband antenna integrated in the product is more suitable.Both foldable and non-foldable wideband structures, therefore, were proposed and fabricated for their different uses in IoT applications.Both measurement results of two structures show more than 65% in bandwidth.Their operating frequency covers IoT applications in GPS, the 2.4 GHz ISM band, and the common 3GPP WCDMA and LTE bands.And the foldable structure has advantage of wearable applications.
During the design process, the type of antenna is firstly confirmed and then the key parameters such as frequency and size need to be determined.Simulation software and measurement equipment are important tools to verify its performance and further design iterations may be required for fine-tuning the performance.
Figure 2 .
Figure 2. Signal source and ground plane effect [11].(a) Actual condition above a ground plane.(b) Equivalent condition with a virtual image under the ground plane.
Planar
Antenna Design for Internet of Things Applications DOI: http://dx.doi.org/10.5772/intechopen.92456 the transmission line and connector in Figure 6(a).The width of the transmission line is 1.5 mm so that the characteristic impedance of the line is equal to 50 Ohm.The return losses of antenna were measured by vector network analyzer (VNA).If the return losses are not significant high enough, matching network is needed for maximum power transfer.Antenna PCB C is used for port extension by VNA so that the measurement reference plane is moved to Port A since the VNA can predict the open circuit at end of the transmission line from the connector in Antenna PCB C by the electrical length L of the transmission line.The simulated and measured results of Antenna PCB B are plotted in Figure 8.It is also shown that the overall performance of antenna at Port B is close to the same at Port A. The radiation patterns and gain measurement are carried out by passive antenna measurement system.The active antenna measurement sometimes is used to indicate the overall transmission and reception of the complete products.The maximum gain of the PIFA is normally around 3 dBi.Antenna PCB B, therefore, is suitable for 2.4 GHz ISM band applications.
Figure 7 .
Figure 7. Simulated S-parameter, S11 of Antenna PCB A and Antenna PCB B at Port A.
Figure 8 .
Figure 8. Antenna PCB B: Simulated (at Port A) and measured (at Port A and B) S-parameter, S11.
Table 1 .
, the resonance frequency of Antenna PCB A is lower than that of Antenna PCB B. Antenna PCB A and Antenna PCB B were fabricated with Parameters used in the PIFA.
Table 2
shows the comparison table of different antennas, which shows that the dielectric antennas have a little size smaller than the PIFA.However, PIFAs were only fabricated on PCB, which is approximately zero in thickness as well as zero cost of antenna and matching components. | 3,598 | 2020-09-23T00:00:00.000 | [
"Business",
"Computer Science"
] |
Analysis of the strong vertices of ΣcΔD∗\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$\Sigma _{c}\Delta D^{*}$$\end{document} and ΣbΔB∗\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$\Sigma _{b}\Delta B^{*}$$\end{document} in QCD sum rules
In this work, we analyze the strong vertices ΣcΔD∗\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$\Sigma _{c}\Delta D^{*}$$\end{document} and ΣbΔB∗\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$\Sigma _{b}\Delta B^{*}$$\end{document} using the three-point QCD sum rules under the tensor structures iϵρταβpαpβ\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$i\epsilon ^{\rho \tau \alpha \beta }p_{\alpha }p_{\beta }$$\end{document}, pρp′τ\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$p^{\rho }p'^{\tau }$$\end{document} and pρpτ\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$p^{\rho }p^{\tau }$$\end{document}. We firstly calculate the momentum dependent strong coupling constants g(Q2)\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$g(Q^{2})$$\end{document} by considering contributions of the perturbative part and the condensate terms ⟨q¯q⟩\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$\langle {\overline{q}}q\rangle $$\end{document}, ⟨gs2GG⟩\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$\langle g_{s}^{2}GG \rangle $$\end{document}, ⟨q¯gsσGq⟩\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$\langle {\overline{q}}g_{s}\sigma Gq\rangle $$\end{document} and ⟨q¯q⟩2\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$\langle {\overline{q}}q\rangle ^{2}$$\end{document}. By fitting these coupling constants into analytical functions and extrapolating them into time-like regions, we then obtain the on-shell values of strong coupling constants for these vertices. The results are g1ΣcΔD∗=5.13-0.49+0.39GeV-1\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$g_{1\Sigma _{c}\Delta D^{*}}=5.13^{+0.39}_{-0.49}\,\hbox {GeV}^{-1}$$\end{document}, g2ΣcΔD∗=-3.03-0.35+0.27GeV-2\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$g_{2\Sigma _{c}\Delta D^{*}}=-3.03^{+0.27}_{-0.35}\,\hbox {GeV}^{-2}$$\end{document}, g3ΣcΔD∗=17.64-1.95+1.51GeV-2\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$g_{3\Sigma _{c}\Delta D^{*}}=17.64^{+1.51}_{-1.95}\,\hbox {GeV}^{-2}$$\end{document}, g1ΣbΔB∗=20.97-2.39+2.15GeV-1\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$g_{1\Sigma _{b}\Delta B^{*}}=20.97^{+2.15}_{-2.39}\,\hbox {GeV}^{-1}$$\end{document}, g2ΣbΔB∗=-11.42-1.28+1.17GeV-2\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$g_{2\Sigma _{b}\Delta B^{*}}=-11.42^{+1.17}_{-1.28}\,\hbox {GeV}^{-2}$$\end{document} and g3ΣbΔB∗=24.87-2.82+2.57GeV-2\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$g_{3\Sigma _{b}\Delta B^{*}}=24.87^{+2.57}_{-2.82}\,\hbox {GeV}^{-2}$$\end{document}. These strong coupling constants are important parameters which can help us to understand the strong decay behaviors of hadrons.
I. INTRODUCTION
The physics of charmed hadrons became an interesting subjects since the observations of J/ψ meson [1, 2]and charmed baryons (Λ c , Σ c ) [3].Up to now, lots of charmed baryons have been discovered by different experimental collaborations [4].Moreover, many bottom baryons such as Λ b , Ξ b , Σ b , Σ * b and Ω b have also been confirmed in experiments by CFD and LHCb collaborations [5][6][7][8][9][10].Although scientists have devoted much of their energy to this field, but the details of some charmed and bottom baryons are still less known.Thus, many experimental plans for the research of charmed and bottom baryons have been proposed by PANDA [11], J-PARC [12] and many other facilities.Under this circumstance, theoretical research on production of the baryons is very interesting and important.The strong coupling constants of baryons is an important input parameter which can help us to understand their production and decay processes [13].This is the first motivation for us to carry out the present work.
Since the observation of X(3872) by Belle collaboration in 2003 [14], exotic hadrons which are beyond the usual quarkmodel emerged like bamboo shoots after a spring rain [15][16][17][18][19][20][21][22][23][24][25][26][27][28].Some exotic states were interpreted as hadronic molecular states because their masses are close to the known twohadrons thresholds [29].However, the study of mass spectra is insufficient to understand the inner structure of these exotic states.We need to further study their strong decay behaviours, where the strong coupling constants are particularly important.For examples, in Ref [30], the authors predicted two pentaquark molecular states D * Σ c and D * Σ * c with the QCD sum rules.These two states were named as P c (4470) and P c (4620) which have the isospin I = 3 2 .If we studied their two-body strong decay P c (4470/4620) → J/ψ∆, this process can be described by the triangle diagram in The strong interaction between the hadrons is nonperturbative in the low energy region, which can not be studied from the QCD first principle.But, as an important parameter, the strong coupling constant is urgently needed in studying the production and strong decay process of hadrons.Thus, some phenomenological methods are employed to analyze the strong vertices [31][32][33][34][35][36][37][38][39][40][41][42][43][44][45].The QCD sum rules (QCDSR) [46] and the light-cone sum rules (LCSR) are powerful phenomenological methods to study the strong interaction.In recent years, some coupling constants have been analyzed with LCSR by considering the higher-order QCD corrections and subleading power contributions [47,48].These studies show that considering the higher-order QCD corrections and subleading power contributions is very important for the accuracy of the results.In our previous work, we have analyzed the strong vertices Σ c ND, Σ b NB, Σ * c ND, Σ * b NB, Σ c ND * and Σ b NB * in the frame work of QCDSR basing on threepoint correlation function [38,40,41], where the higher-order perturbative corrections were neglected.As a continuation of these works, we analyze the strong vertices Σ c ∆D * and Σ b ∆B * using the three-point QCDSR under the tensor structure iǫ ρταβ p α p ′ β , p ρ p ′τ and p ρ p τ .According to our previous work, it showed that the subleading power contributions are really important for the final results.Considering higherorder corrections should make the final results more accurate, however it will also make the calculations of the three-point QCDSR very complicated.Thus, we neglect contributions from these corrections in the present work.
The layout of this paper is as follows.After the introduc-tion in Sec.I, the strong coupling constants of the vertices Σ c ∆D * and Σ b ∆B * are analyzed by QCD sum rules in Sec.II.In these analyses, the off-shell cases of the vector mesons are considered.In the QCD side, the perturbative contribution and vacuum condensate terms qq , g 2 s GG , qg s σGq and qq 2 are also considered.In Sec.III, we present the numerical results and discussions.Sec.IV is reserved for our conclusions.Some calculation details and important formulas are shown in Appendix A and B. The first step to analyze strong coupling constants with QCD sum rules is to write the following three-point correlation, where T is the time ordered product, and respectively.These interpolating currents can be ex-pressed as [49], where i, j and k represent the color indices and C denotes the charge conjugation operator.
The correlation function can be handled at both hadron and quark level in the framework of QCD sum rules, where the former is called the phenomenological side and the later is called the QCD side.Matching the calculation of these two sides by quark hadron duality, the sum rules for the strong coupling constants can be obtained.
A. The phenomenological side
In the phenomenological side, a complete sets of hadron states with the same quantum numbers as the hadronic interpolating currents are inserted into the correlation function.After isolating the contributions of ground and excited states, the expression of the correlation function can be written as [50], where h.c.denotes the contributions of higher resonances and continuum states.From this above equation, we can see that the current J µ ∆ (0) couples not only with the baryon J P = 3 2 + but also with the state of 1 2 + .Similarly, the meson current J ν D * [B * ] (0) couples with both the vector meson with J P = 1 − and the pseudoscalar meson with J P = 0 − .Therefore, there will be some redundant terms, that is the second, third and fourth term in Eq. ( 3)).They will disturb the items that we are interested in(the first term in Eq. ( 3)).These redundant matrix elements can be parameterized by the following equations, where N represents baryon with spin parity 1 4)), the projection operators (g µρ − p ′µ p ′ρ p ′2 ) and (g ντ − q ν q τ q 2 ) are employed in Eq. ( 3).The matrix elements about the vertex Σ c [Σ b ]∆D * [B * ] can be written as follows, where P = p + p ′ .The matrix elements appearing in Eq. ( 3) are substituted with Eqs. ( 4) and ( 5).Then, the correlation function in the phenomenological side can be written as the following form, From Eq. ( 6), we can see that the correlation function will have so complex tensor structure, e.g./ p / p ′ γ ρ γ τ γ 5 , / pγ ρ γ τ γ 5 , / p / p ′ g ρτ γ 5 , p ρ p τ γ 5 , • • • that the calculation become tedious and lengthy.Theoretically, if all the criteria of QCD sum rules are satisfied, each tensor structure can lead to the same results.For simplicity, we choose the tensor structure in the following ways, Πphy 1 , Πphy 2 and Πphy 3 are named as scalar invariant amplitudes which can be obtained by using Eq. ( 6) and ( 7), where, B. The QCD side In the QCD side, we firstly contract all of the quark fields in the correlation function with Wick's theorem, Here, S mn u[d] (x) and S mn c[b] (x) are light and heavy quark full propagators which can be written as [51,52], where 2 , λ a (a = 1, .., 8) are the Gell-Mann matrixes, i and j are color indices, σ αβ = i 2 [γ α , γ β ] and f λαβ , f αβµν have the following forms, Taking the same way as the phenomenological side, the correlation function in QCD side can also be written as, After conducting operator product expansion(OPE) and taking their imaginary part, we can obtain the spectral density of correlation function.Finally, the correlation function can be written as following form by using the dispersion relation, where s = p 2 , u = p ′2 , q = p − p ′ , s 1 and u 1 are the kinematic limits which are taken as (2m q + m Q ) 2 and 9m 2 q respectively.The QCD spectral density ρi (s, u, q 2 ) can be obtained by Cutkosky's rules [53][54][55][56][57][58], and their calculation details are briefly discussed in Appendix A. Full expressions of the QCD spectral density for different tensor structures are shown in Appendix B. The contributions of perturbative part and the vacuum condensation terms including qq , g 2 s GG , qg s σGq and qq 2 are all considered, where their Feynman diagrams are shown in Fig. 2.
C. The strong coupling constants
We take the change of variables p 2 → −P 2 , p ′2 → −P ′2 and q 2 → −Q 2 and perform double Borel transformation [59,60] to both the phenomenological and QCD sides.The variables P 2 and P ′2 are replaced by T 2 1 and T 2 2 which are called the Borel parameters.Then we take . Finally, we can obtain the following equations about the strong coupling constants g i (i = 1, 2, 3) using the quark-hadron duality condition, FIG. 2: Feynman diagrams for the perturbative part and vacuum condensate terms.
The momentum dependent coupling constants can be expressed as, where s 0 and u 0 are the threshold parameters which are introduced to eliminate the h.c.terms in Eq. ( 6).They satisfy the relations, , where m and m ′ are the masses of the ground and first excited states of the baryons.
In the framework of QCD sum rules, two conditions should also be satisfied, which are the pole dominance and convergence of OPE.To analyze the pole contribution, we write down, Then, the pole contribution can be defined as [50], The convergence of OPE is quantified via the contributions of the vacuum condensates of dimension n, which is defined as, where ρQCD i (s, u, Q 2 ) and ρQCD,n i (s, u, Q 2 ) represent the spectral densities of total and the nth dimension vacuum condensates, respectively.Fixing Q 2 = 3 GeV 2 in Eqs. ( 18) and ( 19), we plot the pole contributions with variation of the Borel parameter for different tensor structures in Fig. 3. To satisfy the convergence of OPE, we should also find a good plateau which is generally called 'Borel window'.Then, an appropriate Borel parameter in the Borel window is selected to make pole contributions larger then 40%.Considering these above requirements, the Borel windows are selected as 5(24) GeV 2 ≤ T 2 ≤ 7( 26 As for the gluon condensate g 2 s GG , it plays a less important role since |D(4)| < 1%.Therefore, the convergence of OPE is well satisfied.
By taking different values of Q 2 , we finally obtain the momentum dependent coupling constants g(Q 2 ) whose values are shown in Fig. 6.In order to obtain the on-shell values of these coupling constants, it is necessary to extrapolate these results into the time-like regions (Q 2 < 0).This process is realized by fitting g(Q 2 ) with appropriate analytical functions and setting the vector meson ).To our knowledge, there are no specific expressions for the momentum dependent strong coupling constants which describe the interactions between hadrons.We only know that the value of running coupling constant α s (Q) decreases with the increment of square of momentum.Commonly, when we choose appropriate fitting functions, two conditions should be considered.The first is that the coupling constants should be well fitted by the fitting functions in the space-like regions (Q 2 > 0).Secondly, the on-shell values of the strong coupling constants, which are obtained by extrapolating the fitting functions into deep time-like regions, should converge.Based on our previous work, the combination of exponential and polynomial functions usually satisfies these conditions.In this work, the coupling constants of vertex Σ c ∆D * are well fitted by the combination of exponential and polynomial functions.For vertex of bottom baryon, the exponential function is not well convergent in Q 2 = −m 2 B * because the square mass of the vector bottom meson is much larger than that of charmed meson.Thus, the polynomial function is employed to fit the coupling constants of the vertex Σ b ∆B * .Finally, the momentum dependent strong coupling constants can be fitted into the following analytical functions, where a, b, c, d, e and f are the fitted parameters whose values are show in Tables I and II.The fitting curves for vertices Σ c ∆D * and Σ b ∆B * are also shown in Fig. 6.Finally, the on-shell values of strong coupling constants are obtained by setting 21),
Fig. 1 .
From this figure, we can see that analysis of strong vertices P c Σ c D * , P c Σ * c D * , DD * J/ψ, D * D * J/ψ, Σ c ∆D, Σ * c ∆D, Σ c ∆D * and Σ * c ∆D * is essential for us to study the strong decay behaviors of these two exotic states.This constituents the second motivation of our present work.
II.THE QCD SUM RULES FOR VERTICES Σ c ∆D * AND Σ b ∆B *
2 + 2 + and 3 2 +
, D[B] is the pseudoscalar charmed(bottom) meson, U(p, s) and U α (p, s) are the spinor wave functions of the baryon with spin parity 1 , respectively, ε β is the polarization vector of the vector meson D * [B * ], λ N is the pole residues, f D[B] is the decay constant.To extract the contributions of Σ c [Σ b ], D * [B * ] and ∆, and eliminate the contaminations of the redundant terms(see Eq. ( . Their values are m Σ c = 2.45 GeV, m Σ b = 5.81 GeV, m ∆ = 1.23 GeV, m D * = 2.01 GeV and m B * = 5.33 GeV, m u(d) = 0.006 ± 0.001 GeV, m c = 1.275 ± 0.025 GeV and m b = 4.18 ± 0.03
TABLE I :
The parameters of the analytical function for the coupling constants of vertex Σ c ∆D * .
TABLE II :
The parameters of the analytical function for the coupling constants of vertex Σ b ∆B * . | 3,792.6 | 2023-08-13T00:00:00.000 | [
"Mathematics"
] |
Double diffusive magnetohydrodynamic squeezing flow of nanofluid between two parallel disks with slip and temperature jump boundary conditions
In this paper, double diffusive squeezing unsteady flow of electrically conducting nanofluid between two parallel disks under slip and temperature jump condition is analyzed using the homotopy perturbation method. The obtained solutions from the analysis are used to investigate the effects of the of Brownian motion parameter, thermophoresis parameter, Hartmann number, Lewis number and pressure gradient parameters, slip and temperature jump boundary conditions on the behavior of the nanofluid. Also, the results of the homotopy perturbation method are compared to the results in the literature and good agreements are established. This study is significant to the advancements of nanofluidics such as energy conservation, friction reduction and micro mixing biological samples. c © 2017 University of West Bohemia. All rights reserved.
Introduction
The study of fluid flow between parallel surfaces has generated wide research interests over the years due to its increasing application in the field of science and engineering.Fluid flow in parallel medium such as disks has useful applications in manufacturing industries, power transmission equipment amongst others due to parallel surfaces in relative motion.In efforts to study fluid flow between parallel surfaces, Mustafa et al. [25] investigated the heat and mass transfer between parallel plates undergoing unsteady squeezing fluid flow while Hayat et al. [11] presented the squeezing flow of second grade fluid between parallel disk in the presence of magnetic field.In another work, Domairry and Aziz [3] applied the homotopy perturbation method to study the effect of suction and injection on MHD squeeze flow between parallel disks.The squeezing flow viscous fluid flow between plates under unsteady condition was analyzed by Siddiqui et al. [47].Rashidi et al. [28] explored analytical solutions to study unsteady squeezing flow between parallel plates.Khan and Aziz [20] presented a study on the flow of nanofluid between parallel plates due to natural convection.Shortly after, Khan and Aziz [19] analyzed double diffusive natural convective boundary layer fluid flow through porous media saturated with nanofluid while Kuznestov and Nield [21] also studied nanofluid flow between parallel plates but with natural convective boundary layer.Hashimi et al. [10] developed analytical solutions in studying squeezing fluid flow of nanofluid.The effect of stretching sheet wall problem adopting natural convective boundary conditions was investigated by Yao et al., Kandasamy et al., Makinde and Aziz [18,22,50].However, in recent past, the effects of slip effect on fluid flow have been considered by many researchers [1,[3][4][5][6]8,17,23,27,49] due to its significance to most practical fluid flow situations.When flow system characteristics size is small or low flow pressure, the assumption of no slip boundary condition becomes insufficient and inadequate in predicting the flow behavior under such scenario.Therefore, additional boundary conditions are required to adequately predict the low flow pressure or low system characteristics size.Such slip boundary condition was first initiated by Navier [26] upon which other researchers have built [2,24].Most of the above reviews have been limited to the analysis of squeezing flow under no slip and no temperature jump boundary conditions.Moreover, most of the nonlinear fluid flow and heat transfer problems have solved with different numerical and semi-analyticalnumerical or approximate analytical methods [29][30][31][32][33][34][35][36][37][38][39][40][41][42][43][44][45][46].However, the development of analytical solutions by some of the approximate analytical methods such as Adomian decomposition method, homotopy analysis method, variation iteration methods for the flow process often involved complex mathematical analysis leading to analytic expression involving a large number terms.In practice, analytical solutions with large number of terms and conditional statements for the solutions are not convenient for use by designers and engineers [9,48].Also, in such methods, the search for a particular value that will satisfy the other end boundary condition(s) necessitated the use of software and such always results in additional computational cost in the generation of solution to the problem.Consequently, in many research works, recourse has been made to numerical methods in solving the problems.However, in the class of the newly developed approximate analytical methods, homotopy perturbation method is considered to relatively simple with fewer requirements for mathematical rigour or skill.Homotopy perturbation method (HPM) gives solutions to nonlinear integral and differential equations without linearization, discretization, closure, restrictive assumptions, perturbation, approximations, round-off error and discretization that could result in massive numerical computations.It provides excellent approximations to the solution of non-linear equation with high accuracy.Moreover, the need for small perturbation parameter as required in traditional PMs, the difficulty in determining the Adomian polynomials, the rigour of the derivations of differential transformations or recursive relation as carried out in DTM, the lack of rigorous theories or proper guidance for choosing initial approximation, auxiliary linear operators, auxiliary functions, auxiliary parameters, and the requirements of conformity of the solution to the rule of coefficient ergodicity as done in HAM, the search Lagrange multiplier as carried in VIM, and the challenges associated with proper construction of the approximating functions for arbitrary domains or geometry of interest as in Galerkin weighted residual method (GWRM), least square method (LSM) and collocation method (CM) are some of the difficulties that HPM overcomes.The results of HPM are completely reliable and physically realistic and unlike the other approximate analytical methods, it does not involve the search for a particular value that will satisfy the other end boundary condition(s) [12][13][14][15][16]. Therefore, in this work, homotopy perturbation method is used to study the slip effects of squeezing nanofluid flow through two parallel disks under magnetic field with pressure gradient.The analytical solution of the homotopy perturbation method is used to investigate the effects of various flow parameters with the effects discussed in detail.
Model development and analytical solution
Consider a nanofluid that flows axisymmetrically through parallel disks as shown in Fig. 1.The upper disk is moving towards the stationary lower disks under uniform magnetic field strength applied perpendicular to disk.The fluid conducts electrical energy as it flows unsteadily under magnetic force field.The fluid structure is everywhere in thermodynamic equilibrium and the plate is maintained at constant temperature.The details of the governing equation and non-dimensional parameters have been described in [6] which can be introduced under stated assumptions as: The relevant boundary conditions are given as Using the following dimensionless quantities The dimensionless equations are given as and the dimensionless boundary conditions are given as
Method of solution by homotopy perturbation method
The comparative advantages and the provision of acceptable analytical results with convenient convergence and stability [12][13][14][15][16] coupled with total analytic procedures of homotopy perturbation method compel us to consider the method for solving the system of nonlinear differential equations in ( 9)- (11).
The basic idea of homotopy perturbation method
In order to establish the basic idea behind homotopy perturbation method, consider a system of nonlinear differential equations given as with the boundary conditions where A is a general differential operator, B is a boundary operator, f (r) a known analytical function and Γ is the boundary of the domain Ω.
The operator A can be divided into two parts, which are L and N, where L is a linear operator, N is a non-linear operator.Eq. ( 15) can be therefore rewritten as follows By the homotopy technique, a homotopy U(r, p) : Ω × [0, 1] → R can be constructed, which satisfies or In the above Eqs.( 18) and ( 19), p ∈ [0, 1] is an embedding parameter, u 0 is an initial approximation of (15), which satisfies the boundary conditions.Also, from ( 18) and ( 19), we will have or The changing process of p from zero to unity is just that of U(r, p) from u 0 (r) to u(r).This is referred to homotopy in topology.Using the embedding parameter p as a small parameter, the solution of Eqs. ( 18) and ( 19) can be assumed to be written as a power series in p as given in Eq. ( 22) It should be pointed out that of all the values of p between 0 and 1, p = 1 produces the best result.Therefore, setting p = 1, results in the approximation solution of ( 14) The basic idea expressed above is a combination of homotopy and perturbation method.Hence, the method is called homotopy perturbation method (HPM), which has eliminated the limitations of the traditional perturbation methods.On the other hand, this technique can have full advantages of the traditional perturbation techniques.The series ( 23) is convergent for most cases.
Application of the homotopy perturbation method to the present problem
According to homotopy perturbation method (HPM), one can construct a homotopy for Eqs. ( 9)-( 11) as Taking power series of velocity, temperature and concentration fields, gives The substitution of Eq. ( 27) into Eq.( 24) yields The boundary conditions for Eqs. ( 30)-( 32) are The substitution of Eq. ( 28) into Eq.( 25) yields The substitution of Eq. ( 29) into Eq.( 26) yields The boundary conditions for Eqs. ( 38)-( 40) are Solving Eq. ( 30) and applying the boundary condition (33) leads to Also, on solving Eq. ( 34) and applying the boundary condition (37) yields And the solution of Eq. ( 38) with the application of the boundary condition ( 41) is On solving Eq. ( 31) and applying the boundary condition (33), one arrives at Also, on solving Eq. ( 35) and applying the boundary condition (37) gives It can easily be shown that the solution of Eq. ( 39) after the application of the boundary condition (41) yields In the same way, f 2 (η), θ 2 (η) and ϕ 2 (η) in Eqs. ( 32), ( 36) and ( 40) are solved using the boundary conditions ( 33), ( 37) and ( 41), respectively.Although, the resulting solutions and the other subsequent solutions are too long to be shown in this paper, they are included in the simulated results shown graphically in the results and discussion section.
Results and discussion
Table 1 shows the comparison of the results of the numerical method (NM) and the homotopy perturbation method used in this work.From the results, it could be inferred that the results of the presents work agrees with the results of the numerical method using shooting method with the sixth-order Runge-Kutta method.In order to further establish the accuracy of the solution of HPM, the results of the present study (in the absence of slip parameter) are further compared with the results of the numerical method using shooting method with six-order Runge-Kutta method.The values of local Nusselt number, Nu L and local Sherwood number, Sh L have been calculated for various values of Nb and Nt.An excellent agreement is found between the two set of results as shown in Table 2. Therefore, the use of the homotopy perturbation method for the analysis of the double diffusive models is justified.
The obtained analytical solutions are reported graphically to show the influence of various fluid parameters.The effect of slip parameter is illustrated in Fig. 2 where it is shown that radial velocity component increases with an increase in slip parameter near the lower disk i.e. η > 0 and η < 0.5 ( not accurately determined) and reverse is the case as it approaches the upper disk, i.e., η > 0.5 and η < 1 (not accurately determined) which can be physically explained due to increase in slip components leads to a corresponding decrease in shear stress.A reverse trend is observed clearly from Fig. 3 which shows the effect of increasing Hartmannparameter (M), it is shown that at increasing values of M the velocity decreases slightly near the lower disk and as the upper disk is approached the velocity increases slightly due to the increase in boundary layer thickness caused by the Lorentz or magnetic force field.As squeeze parameter (S) increases which is demonstrated in Fig. 4 the radial velocity component increases, though effect is maximum at the lower disk and minimum at the upper disk.Fig. 5 depicts the effect of increasing pressure term (C) on the fluid flow; it is shown here that with increasing C a very slight increase in velocity component is observed.
Temperature jump effect (γ) on temperature profile is shown in Fig. 6, where it is depeicted that as γ increases temperature distribution increases towards the lower disk where it decreases towards the upper disk.In the absence of slip i.e. γ = 0 it is observed that temperature distribution equals one at the lower plate and zero at the upper plate.Influence of thermophoresis parameter (Nt) is demonstrated in Fig. 7 which depicts that increasing values of Nt, the temperature distribution is maximum at the lower disk, only to fall rapidly towards the upper disk.The effect of squeeze parameter (S) on temperature distribution is seen in Fig. 8 which illustrates at increasing values S temperature at the lower disk reduces while temperature at the upper disk increases which can be physically explained as increase in S leads to a corresponding decrease in kinematic viscosity and vice versa.It is obvious from Fig. 9 that increasing pressure term (C) as no significant effect on temperature distribution.
Influence of Lewis number (Le ) is observed in Fig. 10 where it is depicted that increasing Le evokes a corresponding increase in concentration distribution at the region closer to the lower disk, though decreases rapidly as it moves towards the upper plate.Thermophoresis parameter (Nt) effect is observed in Fig. 11, at increasing values of Nt the concentration profile increases significantly but towards the upper plate there is a steady reduction.Fig. 12 depicts the effect of Brownian motion parameter (Nb) as observed at increasing value of Nb concentration profile decreases significantly, which falls rapidly as it approaches the upper disk at suction.Also when squeeze parameter (S) is increased, the effect is illustrated in Fig. 13 where the concentration distribution increases significantly near the wall close to the lower disk but falls rapidly as the upper disk is approached during suction.
Conclusion
In this study, the flow, heat transfer and concentration characteristics of an electrically conducting nanofluid under magnetohydrodynamics with pressure gradient under the influences of slip and temperature-jump conditions have been analyzed using the homotopy perturbation method.Analytical solutions were reported graphically to demonstrate the influence of various flow parameters on the flow phenomena.Effects of parameters such as thermophoresis, Brownian motion, Lewis number and pressure gradient on flow and heat transfer was investigated.This study is significantly important in the in energy conservation, friction reduction and micro mixing biological samples.
NomenclatureA
suction/injection parameter β dimensionless slip velocity parameter γ dimensionless temperature jump parameter N t thermophoresis parameter N b Brownian motion parameter Le Lewis number P r Prandtl's number M Hartmann parameter C pressure gradient term S squeeze number ϕ dimensionless nano conc.parameter θ dimensionless temperature w axial velocity component u radial velocity component η similarity variable τ ratio of nanoparticle heat capacity to base fluid σ electrical conductivity υ kinematic viscosity α thermal diffusivity D B thermophoretic diffusion coefficient
Table 2 .
Comparison of the values of the local Nusselt and local Sheerwood number for various values of Nb and Nt with β = γ = 0 | 3,479 | 2017-12-30T00:00:00.000 | [
"Physics"
] |
Software and hardware platform for testing of Automatic Generation Control algorithms
Development and implementation of new Automatic Generation Control (AGC) algorithms requires testing them on a model that adequately simulates primary energetic, information and control processes. In this article an implementation of a test platform based on HRTSim (Hybrid Real Time Simulator) and SCADA CK-2007 (which is widely used by the System Operator of Russia) is proposed. Testing of AGC algorithms on the test platform based on the same SCADA system that is used in operation allows to exclude errors associated with the transfer of AGC algorithms and settings from the test platform to a real power system. A power system including relay protection, automatic control systems and emergency control automatics can be accurately simulated on HRTSim. Besides the information commonly used by conventional AGC systems HRTSim is able to provide a resemblance of Phasor Measurement Unit (PMU) measurements (information about rotor angles, magnitudes and phase angles of currents and voltages etc.). The additional information significantly expands the number of possible AGC algorithms so the test platform is useful in modern AGC system developing. The obtained test results confirm that the proposed system is applicable for the tasks mentioned above.
Introduction
AGC systems must be constantly developed due to changes in market conditions, increasing degree of automation in the energy sector, development of computing systems, increasing number of Phasor Measurement Units (PMU), necessity of improving the reliability of power systems and changes in network configuration.
At the moment the AGC system of Russia operates on base of CK-2007 which is the main SCADA system used by the System Operator of the United Power System of Russia.Making changes to the settings and adding new algorithms requires testing the AGC system on adequate power system model.One of the complexes that allow real-time closed-loop testing of AGC system in composition with SCADA system is HRTSim, which was developed at the Tomsk Polytechnic University.
HRTSim description
The concept of HRTSim (Hybrid Real Time Simulator) simulation is based on the use of three modeling approaches: analog, digital and physical, each of which achieves maximum efficiency in solving individual subtasks.A detailed description of the concepts and its hardware application is presented in [1][2][3].Modelling of primary processes in power equipment is done by analog computational units that solve differential equations describing this power equipment.Power system automatics are simulated on microcontrollers in hard real-time.Switching of equipment and interconnection of units into a general model of a power system is carried out on the physical level.Such a solution ensures continuous simulation of primary processes.
Adjusting of power system and power equipment parameters and automation settings as well as the continuous collection and processing of simulation results is carried out by specialized software -VmkServer.
VmkClient software allows to control operating conditions by changing the automatic setpoints values, switching devices positions and parameters of the generation equipment and the load.Also, it is possible to control the operating conditions according to the predetermined algorithms -the mode scenarios.
Interaction with third-party software or SCADA systems is done through the OffsiteClient component.
All the processes are simulated in real time, which allows to solve such problems as training of dispatch personnel, closed-loop testing of relay protection devices and local emergency control automatics, data exchange with SCADA systems for scientific, practical and educational purposes and much more.
SCADA СК-2007 features
Operational and informational software complex (OIC) CK-2007 is a high-performance real-time SCADA platform which is designed for receiving, processing, storing, and transmitting telemetric, reporting and scheduled information on the operation of energy facilities, networks and systems and provide flexible access to different users and external automated systems.The complex is widely used in Russian Federation as the basis for the construction of operational technological control and management systems in situational centers, data centers and control centers in the power industry for generating, network, marketing companies and system operators.
The complex is delivered with EMS, DMS and MMS applications in various configurations and also has Application Programming Interface (API) and Data Access Component (DAC) library which allows to implement additional modules to solve specific technological tasks [4][5].
Implementation of AGC systems test platform
The AGC test platform is composed of two complexes HRTSim and OIC CK-2007.The first plays a role of a controlled object -a power system, and provides an interface for reading/writing of operating parameters in real time via a specialized software interface and a user application.The second complex is the basis for building operational and technological management systems in the electric power industry, it collects state information from control objects, and provides all the necessary tools for implementing third-party modules for further processing.Support for the IEC 870-5-104 protocol was implemented using the lib60870.NET library written in C#.The library is free to use for testing, evaluation, educational purposes and open-source projects under the terms of the GPLv3 license, for commercial projects commercial license is needed [6].The library allows to implement both a client and a server which can exchange telemetry data, interrogation, time synchronization and control commands.Also a C version of the library is available.
The telemetry data is transmitted once a second or by interrogation command from a controlling station (OIC).The data is being filtered so the repeating values are not retransmitted.It is possible to configure dead band in order to reduce excessive traffic between HRTSim and OIC and to test the impact of accuracy of telemetry data coming to SCADA system on AGC system operation.Datasets for transmission and reception are configured using the graphical user interface shown in Figure 2. Preparation of the AGC system (transferring an existing AGC system to the test platform and adding of new parameters and algorithms); Configuration of two-way communication protocol; Setting up of OIC tools to record state variables and AGC system control actions.
At the second stage, all test scenarios are run sequentially, the results are analyzed, and a conclusion about the readiness and applicability of the AGC system are made.
Results
As was already mentioned above, in addition to practical tasks related to the interaction with the unified AGC system of Russia, research tasks can be solved.As an example of a research task, an external program for OIC CK-2007 has been developed that partially reproduces the functionality of the AGC system.The program implements the PI-regulator algorithm that calculates the change of active power generation that is needed to maintain the active balance at the frequency of 50 Hz.The obtained value is distributed between the generators under AGC according to the given proportional coefficients and recorded in the CK-2007.When a variable in CK-2007 corresponding to the output of any of the generators updates, its value automatically sends via the IEC 60870-5-104 protocol and records in the HRTSim through OffsiteClient.
The AGC program is written in C# using the .NET Framework 4.6.1.The reading and writing of information from the OIC is carried out using the Data Access Component library (DAC), which is included in CK-2007 complex.All necessary regulatory parameters involved in the process of operation of the AGC system are put on the form of the program, which allows you to customize the operation of the system for the needs of a particular task, and also use the developed program for scientific and educational purposes.For example, in Figure 3 graphs of frequency and active power response on load switching of generators participating in the primary and secondary control are shown.The experiment was done twice -with AGC system turned on and off.As can be seen from the graphs, the developed program successfully restores the frequency of the power system that is modelled in the HRTSim to a value set by the setpoint, the processes occurring during the primary and secondary frequency control are fairly accurately reproduced.The value of the frequency exceeded the permissible limit of 50 ± 0.2 Hz in the Russia, but recovered to an allowable value of 50 ± 0.05 Hz in about 40 seconds.The results are explained by the small time constant of the AGC and the low capacity of the modelling power system (even a small change in the load power creates a significant relative imbalance), which does not allow to fully apply the requirements described in State Standard R 55890-2013 [7] to the power system being modeled in HRTSim.
Conclusion
The use of software and hardware tools mentioned above for modeling real-time processes in power systems allows testing control algorithms of AGC systems in a closed loop (including the AGC system that is used in operation in Russia).Testing of AGC systems together with the SCADA that is used in operation allows to simplify the transfer of AGC parameters from the test server to the real one and vice versa by automating the process and avoiding transfer errors.
Available in HRTSim extended set of primary state variables, in comparison to the real power system, allows to perform research tasks.CK-2007 Application Program Interface allows to implement third-party software applications not only of AGC systems, but also of other centralized automation systems, for example, Centralized Automatic Emergency Control or Centralized Voltage Control systems.Interaction via standard protocols and the availability to remotely launch VmkClient allows to territorially distribute the system and remove the function of technical support of HRTSim from the end user of the system.
Fig. 1 .
Fig. 1.Automatic Generation Control systems test platform On the base of OffsiteClient two-way information exchange between HRTSim and OIC has been set up according to the IEC 60870-5-104 protocol, which makes it possible to transmit state information to OIC and to implement remote control of the generators that are modelled in the HRTSim.Support for the IEC 870-5-104 protocol was implemented using the lib60870.NET library written in C#.The library is free to use for testing, evaluation, educational purposes and open-source projects under the terms of the GPLv3 license, for commercial projects commercial license is needed[6].The library allows to implement both a client and a server which can exchange telemetry data, interrogation, time synchronization and control commands.Also a C version of the library is available.The telemetry data is transmitted once a second or by interrogation command from a controlling station (OIC).The data is being filtered so the repeating values are not retransmitted.It is possible to configure dead band in order to reduce excessive traffic between HRTSim and OIC and to test the impact of accuracy of telemetry data coming to SCADA system on AGC system operation.Datasets for transmission and reception are configured using the graphical user interface shown in Figure2.
Fig 2 .
Fig 2. IEC 60870-5-104 protocol configurator for HRTSim An AGC system testing process is carried out in two stages: preparatory and basic.At the first stage, the test object and testing facilities are being prepared: The power system model configuration; Development and verification of testing scenarios;
Fig. 3 .
Fig. 3. Frequency and active power response on load switching of generators participating in the primary and secondary control | 2,525.6 | 2017-01-01T00:00:00.000 | [
"Engineering",
"Computer Science"
] |
Beam coupling in 2 × 2 waveguide arrays in fused silica fabricated by femtosecond laser pulses
We demonstrate the coupling of a 2 2 × waveguide array produced by a femtosecond laser in fused silica. The coupling constants of the waveguide array are obtained by measuring the ratio of output power of each waveguide by the coupled-mode theory. The variation of the coupled power between four waveguides as a function of the propagation distance is investigated experimentally and theoretically. ©2007 Optical Society of America OCIS codes: (140.3390) Laser material processing; (140.7090) Ultrafast lasers; (230.7370) Waveguides. References and links 1. K. M. Davis, K. Miura, N. Sugimoto, and K. Hirao, “Writing waveguides in glass with a femtosecond laser,” Opt. Lett. 21, 1729-1731 (1996). 2. K. Miura, J. Qiu, H. Inouye, T. Mitsuyu, and K. Hirao, “Photowritten optical waveguides in various glasses with ultrashort pulse laser,” Appl. Phys. Lett. 71, 3329-3331 (1997). 3. C. B. Schaffer, A. Brodeur, J. F. Garcia, and E. Mazur, “Micromachining bulk glass by use of femtosecond laser pulses with nanojoule energy,” Opt. Lett. 26, 93-95 (2001). 4. D. Liu, Y. Li, R. An, Y. Dou, H. Yang, and Q. Gong, “Influence of focusing depth on the microfabrication of waveguides inside silica glass by femtosecond laser direct writing,” Appl. Phys. A. 84, 257–260 (2006). 5. T. Pertsch, U. Peschel, F. Lederer, J. Burghoff, M. Will, S. Nolte, and A. Tünnermann, Opt. Lett. 29,468 (2004). 6. D. Homoelle, S. Wielandy, A. L. Gaeta, N. F. Borrelli, and C. Smith, “Infrared photosensitivity in silica glasses exposes to femtosecond laser pulses,” Opt. Lett. 24, 1311-1313 (1999). 7. A. M. Streltsov and N. F. Borrelli, “Fabrication and analysis of a directional coupler written in glass by nanojoule femtosecond laser pulses,” Opt. Lett. 26, 42-43 (2001). 8. K. Minoshima, A. M. Kowalevicz, I. Hartl, E. P. Ippen, and J. G. Fujimoto, “Photonic device fabrication in glass by use of nonlinear materials processing with a femtosecond laser oscillator,” Opt. Lett. 26, 15161518 (2001). 9. W. Watanabe, T. Asano, K. Yamada, K. Itoh, and J. Nishii, “Wavelength division with three-dimensional couplers fabricated by filamentation of femtosecond laser pulses,” Opt.Lett. 28, 2491-2493 (2003). 10. Y. Kondo, K. Nouchi, T. Mitsuyu, M. Watanabe, P. G. Kazansky, and K. Hirao, “Fabrication of longperiod fiber gratings by focused irradiation of infrared femtosecond laser pulses,” Opt. Lett. 24, 646-648 (1999). 11. E. N. Glezer, M. Milosavljevic, L. Huang, R. J. Finlay, T.-H. Her, J. P. Callan and E. Mazur, “Threedimensional optical storage inside transparent materials,” Opt. Lett. 21, 2023 (1996). 12. E. N. Glezer and E. Mazur, “Ultrafast-laser driven micro-explosions in transparent materials,” Appl. Phys. Lett. 71, 882 (1997). 13. W. Watanabe, T. Toma, K. Yamada, J. Nishii, K. Hayashi and K. Itoh, “Optical seizing and merging of voids in silica glass with infrared femtosecond laser pulses,” Opt. Lett. 25, 1669 (2000). 14. W. Watanabe and K. Itoh, “Motion of bubble in solid by femtosecond laser pulses,” Opt. Express. 10, 603 (2002). 15. A. Szameit, D. Blomer, J. Burghoff, T. Pertsch, S. Nolte, and A. Tunnermann, “Hexagonal waveguide arrays written with fs-laser pulses”, Appl. Phys. B: Lasers Opt. 82, 507 (2006). 16. A. Szameit, J. Burghoff, T. Pertsch, S. Nolte, A. Tünnermann, and F. Lederer, “Two-dimensional solitons in cubic fs laser written waveguide arrays in fused silica,” Opt. Express. 14, 6055 (2006). 17. B. C. Stuart, M. D. Feit, S. Herman, A. M. Rubenchik, B. W. Shore, and M. D. Perry, “Optical ablation by high-power short-pulse lasers,” J. Opt. Soc. Am. B 13, 459 (1996). #79486 $15.00 USD Received 29 Jan 2007; revised 22 Mar 2007; accepted 5 Apr 2007; published 20 Apr 2007 (C) 2007 OSA 30 Apr 2007 / Vol. 15, No. 9 / OPTICS EXPRESS 5445 18. M. Lenzner, J. Krueger, S. Sartania, Z. Cheng, Ch. Spielmann, G. Mourou, W. Kautek, and F. Krausz, “Femtosecond Optical Breakdown in Dielectrics,” Phys. Rev. Lett. 80, 4076 (1998).
Introduction
Many researchers have investigated the interaction of intense femtosecond laser pulses with a wide variety of materials.Structural modification both of the surface and inside the bulk of transparent materials has been demonstrated.By focusing ultrashort laser pulses inside optical transparent materials a localized and permanent increase of the refractive index can be achieved.When the sample is moved with respect to the laser beam a refractive index profile like in a buried waveguide can be produced.A variety of devices in glasses such as waveguides [1][2][3][4][5], couplers [6][7][8][9], gratings [10], and binary storages [11][12][13][14] have been demonstrated.The fabrication of photonic devices is based on nonlinear absorption around the focal volume of femtosecond laser pulses.Combination of multiphoton absorption and avalanche ionization allows one to deposit energy in a small volume of the material surrounding the focus, creating hot electron plasma; by a mechanism that is still under investigation, transfer of the plasma energy to the lattice generates a local increase of the refractive index.In contrast to the conventional techniques this method can be applied to practically all transparent materials and offers the opportunity to produce two-dimensional waveguide arrays, and these arrays can be served as two-dimensional integrated photonic devices in communication systems.In order to utilize the two-dimensional waveguide arrays fabricated by femtosecond laser pulses for future applications, many researchers not only have studied their linear characters [5,15] but also the nonlinear characters [16].
In this paper we have presented and characterized a 2 2 × waveguide array produced by a femtosecond laser in fused silica, including cross-sectional image of the waveguide area by optical microscope and the correlative analysis.We provide a detailed characterization and description of the waveguide arrays and study the variation of the coupled power between four waveguides as a function of the propagation distance by the coupled-mode theory, these results can provide the basis for future applications for two-dimensional integrated optical devices.
Experiment on waveguide formation
For the fabrication of the waveguide arrays we used an amplified Ti:sapphire laser system with a central wavelength of 800 nm, a repetition rate of 1 kHz, an on-target pulse energy of 350 nJ and a pulse duration of about 120 fs.The sample was mounted upon a computercontrolled three-axis positioning system.The laser pulses were focused into a polished fusedsilica sample by a long working distance 50× microscope objective with a numerical aperture of 0.5.The focal plane inside the sample was about 1500 m μ ~1480 m μ deep.We wrote each waveguide along the x direction at the speed of 50 / m s μ for 4 times.A schematic of the setup is given in Fig. 1 and the microscopic image of the end facet of the sample is shown as the inset.The 2 2 × waveguide array was fabricated with waveguide to waveguide spacing (measured from center to center) of 20 m μ .The length of the end facet of waveguide was measured about 14 m μ in the z direction, while the width was about 4 m μ in the y direction.
The minimum spacing between waveguides using this technique is determined by the longer length of the end facet of waveguide, which can be reduced to ~ 10 m μ by compensation for the focusing aberration.In order to characterize the refractive index change and the propagation loss of our waveguides, a 5 mm long waveguide was written.The refractive index change n Δ of the waveguide was estimated to be 3 2 47 10 20 % by a nondestructive method [4].And the propagation loss of the waveguide was measured to be ~0.56dB/cm at 632.8 nm.
Results and discussion
From the microscopic image of the end facet of waveguide array given as the inset in Fig. 1, it can be seen that the cross-sectional profile shows an elliptical shape.The formation of this shape is possibly due to two causes.The first is that longitudinal intensity distribution of a Gaussian beam focused by an objective lens with NA of 0.5 has a Rayleigh distance much larger than its beam waist.The second reason is that formation of spherical aberration resulting from both the objective lens itself and the refraction at the interface between the air and the fused silica substrate makes the practical longitudinal size of the focal point much longer.In order to understand the refractive index profile better, we compared the cross section of the waveguide of the experimental results with that calculated by numerical simulation.For low-repetition-rate (1-100-kHz) systems the local index gradient is produced by a single pulse, and cumulative effects can be neglected.Therefore, since the material modification is due to energy transfer from the free electrons to the lattice, the size and shape of the material volume modified by the femtosecond pulse can be, to a first approximation, assumed to be equal to those of the region in which free electrons are generated.The evolution of the free-electron density, ( ) t ρ , in a medium exposed to an intense laser pulse can be described by the following rate equation [17,18] / ( ) ( ) ( ) where α is the avalanche coefficient, which is 2 4 / cm J [18] for fused silica, + is the number of absorbed photons, is the Planck's constant, ω is the laser frequency, and the function ( ) Int x yields the integer part of x.For an elliptical Gaussian beam, the intensity distribution near the focal spot can be expressed as Where 0 ω is the beam waist, In order to analyze the guiding properties of the waveguide array, a He-Ne laser at 632.8nm was used.The input power was small so that the nonlinear effects can be neglected.The light was coupled into only one waveguide with a 10× microscope objective (NA=0.25),coupled out by a 20× objective and projected onto a CCD-camera.A schematic diagram of the setup is shown in Fig. 3.The intensity distributions at the output facet of the 2 2 × waveguide array are displayed for different positions of the input beam in Fig. 4. To model the optical responses of the arrays we have used a coupled-mode approach and by considering only the nearest waveguides coupling.If the waveguides are marked in Arabic numerals as inset in Fig. 1, the amplitudes ( ) m a x (m=1, 2, 3, 4) of the electric fields propagating in the waveguides obey the following equations The coupling between adjacent guides induces the transverse dynamics.Energy exchange is caused by the overlap of the evanescent tails of the guided modes, which enter into Eq.( 3) through coupling constants h c , v c for the horizontal, vertical directions, respectively.The last term describes the nonlinear Kerr effect, with a coefficient γ .At low powers, the nonlinear term of Eq. ( 3) can be ignored, the ordinary differential equation is then analytically integrable.If the first waveguide is excited with unit power, the solution of the Eq. ( 3) is Thus, the output powers of the four waveguide, * ( ) ( ) ( ) P x a x a x = , after propagation through the waveguide arrays are: 1 ( ) cos ( )cos ( ) 2 ( ) sin ( )cos ( ) 3 ( ) cos ( )sin ( ) 4 ( ) sin ( )sin ( ) The ratio of the output power of waveguide is measured as follows: ( ) / ( ) sin ( ) / cos ( ) 1.309 The sample is 10-mm long, thus the coupling coefficients 1 0.853 cm − = can be obtained.The coupling constants are achieved by the ratio of output power of each waveguide.This method is different from that in the Ref. [5].Moreover, the coupling in the horizontal and vertical directions is almost equal, even though the image profile of waveguide shows high asymmetry as shown in the inset of Fig. 1.The reason for this unexpected behavior is that the different rates of decay of the evanescent field of the guided mode in the two orthogonal directions.The mode is broader and decays faster along the vertical direction, while along the horizontal direction the mode is narrow and decays slowly.Consequently, the overlap integral between the modes in the horizontal and vertical directions can be matched.In order to visualize how the intensity of the four waveguides varies, the relation of power and the propagation distance is plotted by numerical method.Figure 5 shows the output power of each waveguide as a function of propagation distance x when the waveguide marked Arabic numeral 1 was excited.It can be clearly seen that the output power of the initial excited waveguide becomes minimal when the beams propagating 10mm, while the output power of the diagonal waveguide marked Arabic numeral 4 becomes maximal.The splitting ratio is about 18:23:26:33 for the four waveguides, which is an important detail in the design of twodimensional integrated optical devices.Furthermore, it can be seen that the output powers of waveguides numbered 2 and 3 should be equal if h v c c = in Eq. ( 5), the difference between the h c and v c may be due to the asymmetry of the waveguide which shape is nearly elliptical.
Conclusion
In conclusion we have demonstrated and characterized a 2 2 × waveguide array produced by a femtosecond laser in fused silica.Using the coupled-mode theory, we calculated the coupling constants of the waveguide array through the ratio of output power of each waveguide and demonstrated the variation of coupled power in each waveguide as a function of the waveguide length.These results may pave the way for the realization of new applications using femtosecond nonlinear materials processing.
Fig. 1 .
Fig. 1.Scheme of the writing process in transparent bulk material using fs laser pulses.Inset: Microscopic image of the end facet of the sample of the 2 2 × waveguide array marked in Arabic numerals.The waveguide to waveguide spacing is 20 m μ .
Fig. 5 .
Fig. 5.The coupled powers of waveguide as a function of propagation distance x. | 3,211.6 | 2007-04-30T00:00:00.000 | [
"Physics",
"Engineering"
] |
Waste bone char-derived adsorbents: characteristics, adsorption mechanism and model approach
ABSTRACT The increase in meat consumption will result in a significant amount of bone being generated as solid waste and causing pollution to the environment. By pyrolysis or gasification, waste bones can be converted into bone char (BC), which can be used as an adsorbent for removing pollutants from wastewater and effluent gas. The purpose of this study is to critically appraise results from pertinent research and to collect and analyse data from studies on BC adsorbent applications from experimental, semi-empirical, theoretical and contextual viewpoints. Detailed descriptions of the theoretical adsorption mechanism, as well as possible interactions between pollutants and BC surface, were provided for the removal of pollutants. The study provides insights into the effect of synthesis conditions on BC's physicochemical properties and strategies for improving its adsorption capacity as well as future outlooks to guide research and support the development of green and cost-effective adsorbent alternatives to tackle water pollution. Additionally, this review discusses the application of BC to remove contaminants from water and soil, outlines strategies for regenerating pollutant-saturated BC, interprets the adsorption kinetics and isotherm models used in BC sorption studies, and highlights large-scale applications using packed-bed columns. Consequently, we proposed that when selecting the optimum isotherm model, experimental data should be used to substantiate the theory behind the predicted isotherm. Therefore, error functions combined with non-linear regression are the most effective method for obtaining and selecting optimal parameter values for adsorption kinetics and isotherm models. GRAPHICAL ABSTRACT
Introduction
Developing low-cost adsorbents for treating industrial effluents and wastewater has become a rising concern for environmental researchers.The removal of organic and inorganic contaminants from aqueous solutions is efficiently accomplished through adsorption.Physicochemically, adsorption occurs when contaminants accumulate on surfaces of porous solids due to their large internal surface area and surface chemistry.To date, a lot of research progress and efforts have been put into exploring waste bones, which opens up a new opportunity for wide-ranging applications, such as hydroxyapatite for tissue engineering, hierarchical porous carbon for energy storage, phosphate source for soil remediation, heterogeneous catalyst and adsorbent for the treatment of contaminated gas, water and soil [1,2].On one hand, adsorbents made of carbon are important in a variety of environmental technologies, including the purification and separation of gases, the purification of drinking water, the degradation of pollutants, the treatment of wastewater, and soil remediation [2,3].In contrast, consumption of beef and meat is accumulating bones as solid waste, requiring proper solid waste management in order to eschew public health issues during decomposition of the organic content [1].On the other hand, from an agro-environmental and economic perspective, the conversion of waste bone-to-materials, such as hydroxyapatite and bone char (BC) adsorbents is of great interest.The outstanding physicochemical properties of BC make it an ideal adsorbent for environmental pollution control [4].Therefore, BC derived from waste bones for water treatment promotes a circular economy and is an effective method of reducing environmental impact [2,5,6].BC can be produced through calcination, carbonisation via pyrolysis or gasification process.The produced BC mostly contains about 90% calcium phosphate in form of hydroxyapatite and 10% carbon [7].Hence, as an adsorbent, BC contains a porous structure of hydroxyapatite with carbon distributed throughout it.Thus, the surface of BC is heterogeneity or homogeneous depending on the distribution of carbon on hydroxyapatite [8].
Unlike organic pollutants which can be degraded into harmless end products, metallic ions and anionic contaminants cannot be degraded.It has been proven that adsorption processes are costefficient and effective methods of removing metallic ions [9].There have been promising results with BC as an effective and low-cost adsorbent for metal ions and anionic pollutants like fluoride and phosphorus removal [9][10][11][12].There is a significant correlation between the physicochemical properties of the surface of the BC and the pH of the solution when it comes to adsorption capacity [13].Aside from that, it is widely accepted that the thermal conditions used to produce BC, such as its temperature, residence time and atmosphere, can significantly influence its properties physicochemically [4,10].An article published in 2019 reviewed the use of BC as a green sorbent for removing fluoride from drinking water [14].However, the mechanism of metal ions and anionic element pollutants uptake during adsorption on BC has not been completely elucidated, prompting this review work.To predict the mechanisms of BC adsorbent for different contaminants, experimental data from the adsorption process must be modelled.Conversely, it is possible to improve the adsorption capacity of BC using suitable activation methods by modifying surface functional groups in a way that increases selectivity towards specific contaminants, or by acid/alkali treatment method that increases surface area, pore volume and provides a range of pore size distribution [14][15][16].In spite of its lower surface area than activated (C), BC still contributes to high metal ions adsorption.
As an inorganic material, BC has high adsorption capacity and can efficiently remove metallic pollutants and other organic/inorganic elements or compounds from water [17,18].The performance of a BC adsorbent, especially its porous network structure and surface chemistry, greatly influences the efficiency of the adsorption process.These physicochemical properties of BC are strongly influenced by synthesis conditions [1,10].By optimising the pyrolysis conditions of bovine bone, a 143% increase in the metal uptake of the BC, ranging from 68.3 to 119.4 mg/g was achieved [10].Based on the results, the pore size distribution rather than surface area determines the adsorption capacity.There is a need for porous materials capable of adsorbing mesoporous molecules.Finding an adsorbent that selectively adsorbs pollutants in this size range is a challenge.Mesopores control the adsorption process kinetics rather than micropores, which govern the amount of adsorbed contaminants.Additionally, adsorption is enhanced by the presence of surface functional groups.In a nutshell, adsorption is generally influenced by both the speciation of the adsorbate in solution (i.e. its different chemical forms) and the characteristics of the adsorbent (i.e.microstructural properties and surface chemistry).It has been found that, BC (82%) is the most effective adsorbent for the removal of fluoride ion compared to adsorbent from coal, wood, activated carbons obtained from coal, petroleum coke and wood, charcoals produced from Q. mongolica, Q. pubescens, Q. phillyraeoides and C. obtuse [19].Waste bones derived BC and hydroxyapatite materials have been applied for removing the following contaminants via adsorption, endotoxins [20], fluoride from drinking water [4,13,21,22], arsenic (V) [23][24][25], metallic ions such as Mn 2+ , Fe 2+ , Ni 2+ , Cd 2+ , Cu 2+ , Zn 2+ , Hg 2+ , etc. from wastewater [10,[26][27][28], phosphorous from wastewater [29,30], volatile organic compounds (VOCs) [31], separation via adsorption of ethanol, propanol, and butanol from aqueous solutions [32], removal of methylene blue from wastewater [33[34], and organic pollutant degradation [35].BC is a low-cost adsorbent with large surface area and also a plausible support material for photocatalysts [33].It is generally believed that adsorption is a result of electrostatic and non-electrostatic interactions between the adsorbate species and the surface of the bone char.In addition to BC adsorbent surface chemistry, specific pore size distributions of bone char contribute to selective adsorption by means of size exclusion, especially for cations contaminants.There are various mechanisms by which BC adsorbent surfaces can interact with pollutants molecules, including Van der Waals forces, hydrogen bonding, ion exchange, and electrostatic attraction [1,36].
From an environmental perspective, waste bone is converted into BC and used as an adsorbent to remove contaminants from wastewater or soil, thereby protecting the environment and offering waste management strategy.There are a number of parameters that affect BC's longevity and efficiency as an adsorbent, such as the initial contaminant concentration, the type of bone, the thermal method and conditions used for its production, and the removal capacity.This comprehensive review on the application of BC for purification and separation through adsorption captures the physicochemical properties of BC that makes its applicable as an adsorbent, the mechanisms of different contaminants adsorption onto the surface, the kinetic models of the removal rate, and the isotherm models for the equilibrium stage.Also, presented and discussed is different isotherm models used to describe adsorption data for bone derived adsorbents, which is a critical aspect in the design and control of adsorption process.Previously, a review on fluorine removal from drinking by BC has been reported [14], waste bone-tomaterials and their application has been reported elsewhere [1], analysing and interpreting adsorption isotherms [37,38], and adsorption equilibria of metal ions on BC has also been reported in the literature [39].Consequently, explanation of the physical meaning of the kinetic models and the methods for solving them, which is rarely discussed in the literature is covered in this study.This review, therefore, synthesises knowledge in the application BC for contaminant removal from water and soil, highlight new topic area for further research, presents the physical interpretations of kinetic and isotherm models used in BC adsorption studies, and discusses contemporary issues in adsorption.
Preparation and physicochemical characteristics of bone char
For the production of bone char, many types of bones can be used, such as those from camels, swine, ostriches, cows, chickens, goats, porcine, turkeys, and bovines [40,41].These bones can be classified into two categories: hard bones (i.e.bovine, camel, etc.) and soft bones (i.e.fishbone, chicken, etc.).Gasification and pyrolysis are the two main methods of producing bone char (BC).A pyrolysis process is the thermal degradation of a waste bone under oxygenlimited atmosphere producing BC residue and biooil (250°C-850°C), whereas gasification involves the partial oxidation of bone biomass at high temperatures to produce a gaseous energy carrier (i.e.syngas) and BC.Unlike pyrolysis, gasification occurs at much higher temperatures, resulting in the production of gases.Pyrolysis produces BC with varying physiochemical characteristics that are influenced by various factors, including heat rate, gas atmosphere, and residence time [1].Also, pyrolysis yields more BC than gasification.In contrast, calcination can be used to produce BC adsorbent, particularly hydroxyapatite material.Adsorption capacity and catalytic properties of BC are influenced by the surface properties and production conditions, including temperature and residence time.During BC synthesis, bone apatite minerals become dehydroxylated from its hydroxyapatite when high temperatures are applied.Residence time, heating rate, and purging gas are critical factors that affect the quality of BC produced at different thermochemical conversion temperatures [14].
The effect of pyrolysis temperature on produced BC: (a) yield and surface area, (b) pore volume and size, and (c) acid and basic sites are shown in Figure 1.The data used for this plots were obtained for devilfish BC produced through pyrolysis at temperatures range from 400°C to 800°C, and consequently used as an adsorbent for removal of fluoride from drinking water [42].The result shows that the yield of BC decreases as the pyrolysis temperature increase, while the surface area increases to maximum at 600°C (163 m 2 /g) before decreasing with further increase of pyrolysis temperature from 600°C to 800°C.It is worthy to note that the BC porous structure developed considerably as a result of pyrolysis.The total pore volume of the produced BC followed a similar trend with the surface area, whereas, the average pore diameter increases as the pyrolysis temperature increased.Hence, at temperatures of 600°C-800°C, BC's specific area and pore volume decreased due to dehydroxylation of its hydroxyapatite [14,42].The increase in pore size from meso-to macro-pores resulting in the decrease in surface area as temperature increased beyond 600°C, suggests that about 5% of the total surface area is composed of macropores, while 95% is composed of mesopores and micropores [43].It is believed that the pore structure and surface properties changes as a function of carbon content during charring [8].Thus, the carbon content is a major determinant of BC adsorbent textural properties.It is well known that the functional groups such as hydroxyl on the BC surface develop electrical charges when exposed to an aqueous environment (MOH ↔ M + + OH¯or MOH ↔ MŌ + H + , M denotes BC surface).When in suspension, the pH at which the net charge on the surface of BC is zero is called the point of zero charge (PZC).In the raw bone powder, the point of zero charge (pH PZC ) is 7.01, but as the pyrolysis temperature increases from 400°C to 800°C , it increases from 6.74 to 8.46 [42].In BC adsorbents, PZC controls how easily pollutant ions are adsorbed.In spite of the fact that BC is an amphoteric material, the basic sites of BC were stronger than the acid sites (Figure 1c).The acid sites strength decreased as the pyrolysis temperature increased from 400°C to 800°C , while maximum basic sites strength was recorded at 500°C.BC adsorbent material with Ca/P less than or equal to 1.5 shows a higher acidity and low basicity, whereas Ca/P ratio greater than 1.67 shows a high basicity and lower acidity [2].In addition to the Ca/P ratio, the acid and basic sites can be ascribed to evolution of hydroxyapatite formation and the decomposition of carbonate and other organic matter during pyrolysis.However, through protonation and deprotonation of the existing functional groups in BC, the adsorbent surface charge can be controlled.These results demonstrate that the surface area, pore size distribution as well as acid and basic sites can be controlled through pyrolysis temperature.As a result of this analysis of the influence of synthesis conditions for preparing BC, the temperature at which bone wastes were pyrolysed would significantly affect the ability of the BC to adsorb pollutants.An analysis of the surface topology and morphology of BC revealed a solid phase with an irregular, compact and rough structure, with very few cavities [32].This proves the heterogeneous nature of BC particles in terms of particle porosity, shape, and visible macropores, with the surface mainly composed of oxygen (50%), calcium (18%), carbon (17%), and phosphorus (12%), corresponding to carbon, calcium phosphate, and hydroxyapatite [22].
In spite of the fact that activated alumina showed superiority over BC breakthrough curves by approximately 200 bed volumes or 1.5 days, the BC adsorbent surface area had higher fluoride concentrations per square metre than activated alumina, which suggests that maximising BC surface area could improve adsorption [22].The surface area of BC can be increased through pre-treatment with dilute acid/ alkali and consequently optimised pyrolysis temperature.The main components of BC include hydroxyapatite [Ca 10 (PO 4 ) 6 (OH) 2 ], calcium carbonate [CaCO 3 ], calcium phosphate [Ca 3 (PO 4 ) 2 ], and carbon [2].BC exhibits a combination of micro-, meso-, and macroporosity, which makes it a good candidate for separation by adsorption and as a catalyst support.The properties of bone char are similar to those of activated C (non-polar adsorbent) and apatite (polar adsorbent) [17].In other words, the mineral constituents within BC may influence both physicochemical properties and adsorption performance.But as a result of thermal sintering at high temperatures (>850°C), porosity and surface area could decrease.
In Figure 2, the nitrogen sorption isotherm of crushed cow bone and its BC as well as the pore size distribution (PSD) of fish BC are displayed.The adsorption-desorption hysteresis pattern indicates that both are type-IV isotherm, suggesting mesoporous structures.In bone char, the total gas uptake increased at a relative pressure below 0.5 (P/P 0 ), indicating that microporous pore structures were formed due to carbonisation (Figure 2a).It is obvious H3 hysteresis loop appeared in the range of P/P 0 = 0.3-0.9,confirming mesoporous material.Furthermore, the BC adsorbent shows a greater total gas uptake than raw crushed bone, which translates into more pores, increase pore volume and surface area.Hence, compared to crushed cow bone, which has a specific surface area of 0.96 m 2 /g, a total porosity volume of 0.001 cm 3 /g, and an average pore size of 6.49 nm, its char has 95.9 m 2 /g, 0.236 cm 3 /g, and 9.863 nm [44].This suggests that charring of bone improves surface area and pore structure.Figure 2b shows that the PSD is dominated majorly mesopores followed by macropores and few micropores.BC contains a wide range of pore sizes (1.7-75 nm), with a predominant mesopore size distribution of 2 nm to 50 nm, as well as few micropores (<2 nm) or macropores larger than 50 nm [33].Based on the classical Barrett-Joyner-Halenda (BJH) model adsorption cumulative distribution of pore size in BC, micropores accounted for 3.59%, mesopores accounted for 88.54%, and macropores accounted for 1.31%, suggesting a mesopore-dominated pore structure [45].Despite the fact that micropores can offer the most adsorptive sites and dominate the adsorption capacity of adsorbents, macropores and mesopores also play an important role in adsorption.However, depending on the size of the pollutant, the diffusion resistance could increase if the micropore is too narrow, resulting in low diffusion rates.Since mesoporous structure is dominant in the BC, it makes intraparticle diffusion the most common rate-limiting step during the adsorption process.In comparison to untreated pork BC, the specific surface area increased by about 80% and porosity enhanced by acid treatment such as H 2 SO 4 and H 3 PO 4 [15].Hence, acid or alkali can improve the porous structure and textural properties of BC, which is strongly to its applicability and adsorption capacity.It has been reported that the larger the specific surface area of BC, the more fluoride ions adsorbed [19].Table 1 shows typical specific surface area, pore volume and pore size of BC derived from different bone sources.Studies have demonstrated BC to be low-cost adsorbents that have excellent ion-exchange characteristics, high adsorption capacities, mesoporous structure, and a large surface area in the range of 20-120 m 2 /g depending on bone source.
By means of Fourier-transform infrared (FTIR) spectroscopy it was found that in addition to the main cation Ca 2+ , BC consists of trace elements like Na + , Mg 2+ , and K + , and major functional groups, such as hydroxyl (OH − ), carbonate (CO 3 ) in its structure [50,51].Figure 3 shows the FTIR spectroscopy showing the functional groups in BC, as X-ray diffraction (XRD) of fishbone BC produced at pyrolysis temperatures of 300°C to 900°C as well as the Ther-moGravimetric (TG) and Differential ThermoGravimetric (DTG) thermographs for raw cattle bone powder under a carbon dioxide atmosphere.The peaks associated with PO 3− 4 functional group appeared at 1032, 962 and 563 cm −1 (Figure 3a), respectively [49].The peak of the hydroxyl (OH¯) group occurred at 3443 cm −1 , whereas, the peaks at 1408 and 880 cm −1 represent the carbonate functional group (CO 2− 3 ).Therefore, in designing an efficient contaminants removal adsorbent from waste bone, surface properties and the conditions of BC production, such as temperature and residence time, seem to be crucial.This is because it has been reported that as pyrolysis temperatures rise above 700°C, the hydroxyapatite of BC is dehydroxylated, thereby decreasing its anionic pollutant such as fluoride adsorption capacity [4].It has been reported that BC surface area, negative charge, and the number of oxygen-containing functional groups increased after hydrogen peroxide pre-treatment, despite a significant reduction in organic matter content [52].
The crystallinity of the BC increases as the pyrolysis temperature increased from 300°C to 900°C, evident in the intensity and sharpness diffraction peaks (Figure 3b).It can be observed in Figure 3b, that the diffraction peaks at 2θ values of 25.93°, 31.82°,39.56°, 46.64°, 49.53°, and 53.23°, respectively, match the (002), ( 211), ( 130), ( 222), (213), and (004) diffraction peaks of hydroxyapatite powder (HAP).In addition, as pyrolysis temperature increases the conversion of bone powder into hydroxyapatite material increase.A higher surface oxygen-containing functional group was observed in BC samples with lower crystallinity, leading to a better adsorption capacity [52].However, the TG and DTG shown in Figure 3c, describe the thermal response of raw bone powder to temperature during thermochemical process (e.g.pyrolysis) for BC production.The weight loss in the temperature range 20°C−200°C which is about 6.2 wt% can be credited to loss of adsorbed volatile molecules and moisture, thermal degradation of organic materials such as collagen, proteins, and fat tissue occurred in the temperature between 200 and 576°C with about 22.4 wt% weight loss and for temperatures ranging from 576°C to 880°C decomposition of carbonates and partial dehydroxylation process of Table 1.Textural characteristics of bone chars (BCs) from different bone sources.
Parameter
Raw cattle bone [47] Cattle BC [47] Pig BC [48] Fish BC [46] Sheep BC [49] Cow BC [44] Ca hydroxyapatite happens, resulting in around 6.6 wt % weight loss [1,53].In accordance with the total mass loss percentage with the TGA of about 35.2 wt%, the pyrolysis yield should be around 40%-62% depending on temperature.This explains the changes in the yields and textural properties of BC observed and reported in Figure 1.Hence, the thermal behaviour of waste bone powder will prove helpful in arriving at the optimum thermochemical temperature and atmosphere for BC production.
BC is a valuable adsorbent because its ability to exchange cations in aqueous solutions with a wide range of metal ions such as alkalis, alkaline earth metals, and transition metals [5].Generally, the acidic sites are primarily attributed to PO-H from HPO 4 2− species (Brønsted sites) and Ca 2+ cations (Lewis sites), whereas basic sites are accredited to surface functional groups PO 4 3− and OH − (and perhaps other phases such as CaO, Ca(OH) 2 , and CaCO 3 ) [1,5,6].The strength of the acidic and basic sites reported in the literature was 0.29 and 0.62 meq/g for BC Fija Fluor [13], and 2.83 mmol/g (acidic site) and 0.51 mmol/g (basic site) for commercial BC [19], respectively.By protonating and deprotonating the hydroxyl groups on the surface of hydroxyapatite of the BC, basic and acid sites are formed [13].The acid and basic sites distribution on the surface of a BC adsorbent is influenced by the Ca/P ratio.The BC adsorbent surface chemistry and specific pore size distributions play a role in selective adsorption.For instance, carbonate and hydroxyl in bone char is exchanged with fluoride ions during fluoride removal by BC.In other words, the surface functional groups of BC play a crucial role in regulating the adsorption capacity as well as its porous structure.In summary, the molecular interactions between a pollutant and BC depend on both the properties of the wastewater/solution (temperature, pH, and ionic strength/concentration) and the properties of the bone char itself (surface chemistry, surface charge, and textural properties).Therefore, BC adsorbent's main characteristic is its selectivity, which is (hydrogen peroxide treated bone powder), BCH-300 (hydrogen peroxide treated bone powder and pyrolysed at 300°C), BC-300 to BC-900 (BC produced at pyrolysis temperature 300°C-900°C) and hydroxyapatite powder, HAP, [52] and (c) ThermoGravimetric (TG) and Differential TG (DTG) plots for raw cattle bone powder under a carbon dioxide atmosphere [53].
attributable to the variation in surface affinities for different contaminants, which enhances separation via adsorption.
BC adsorption mechanisms
Adsorption involves the mass transport of contaminants from a liquid phase to a solid phase adsorbent, which in this case is BC.The adsorption process consists primarily of four stages: (1) mass transfer of pollutants from the bulk of the solution to the exterior film surrounding of the adsorbent, (2) transport of pollutants across the external liquid film boundary layer to external surface sites, (3) through intra-particle diffusion, pollutants diffuse within the pores in the adsorbents, and (4) adsorption of pollutants on the adsorbent's internal surface [25,54].The pollutant removal rate can be controlled by any of these processes.The final step is the interaction between contaminant and adsorbent, which an exchange process between the contaminant and the active sites of the adsorbent.It is, however, important to acknowledge that in a fully mixed batch system, mass transport from the bulk solution to the exterior surface is typically very fast, neglecting the transfer of contaminants from the bulk solution to the exterior film surrounding the adsorbent.In summary, the mass transport and adsorption on BC involves external mass transfer (film diffusion), internal diffusion (intra-particle diffusion) and adsorption on active sites, which is illustrated in Figure 4.In summary, adsorption process can be divided into three stages: (1) fast uptake of contaminants by the adsorbent, (2) gradual adsorption due to active adsorption sites on BC's surface, macropores, and mesopores, and (3) contaminant diffusion into micropores and achieving equilibrium adsorption.Therefore, the BC adsorbent abstracts one or more solutes in a solution or gas mix to its surface and holds them by intermolecular forces or bonds.There are two types of adsorption: physisorption and chemisorption, depending on the nature of the interaction between the pollutant and BC surface.As a result of the physical and chemical interactions between pollutants and the BC adsorbent, adsorption of pollutants is a complex process [56].Understanding the pollutant removal mechanisms is important for research and improving BC adsorbent's performance.The surface charge of BC is caused by the interactions between the surface functional groups and the ions present in the solution [13].Therefore, an important factor in explaining the adsorption of ions on BC is the surface charge that depends on the type of ions present in solution, the surface properties, and the pH of the solution [1].The Van der Waals force, hydrogen bonding, ion exchange, and electrostatic attraction are all mechanisms that BC adsorbent surfaces can use to interact with contaminants molecules in wastewater [36].In fact, surfaces of hydroxyapatite (HA) in BC contain P-OH functional groups that act as sorption sites [57].However, the mechanism of sorption postulated for metal ions is ion-exchange with Ca 2+ ions in the hydroxyapatite component of BC [58].Deydier et al. [59] reported that metal ions binding to HA derived from WABs comprises three successive steps, which are surface complexation and calcium hydroxyapatite of metal ions, Ca 10 (PO 4 ) 6 (OH) 2 dissolution and then precipitation of slow metal diffusion/substitution of Ca.Hence, dissolution/precipitation produces first a solid solution (Ca/MHA) until all the Ca in the BC has been substituted by the metal ions M. It was found that ion exchange of lattice Ca with Cd in aqueous solution, surface complexation between oxygen-containing groups, and electrostatic interactions between the positively charged Cd 2+ and the negatively charged BC surface and oxygen-containing functional groups (e.g.CaOH + 2 or POH + 2 ) formed as a result of oxidation of organic matter were responsible for the adsorption and removal of Cd 2+ by fishbone BC [52].Also, it has been proven that metal ions adsorption onto BC adsorbent is dependent on the size of the ions and the BC pore size distribution [26].Metal ions are adsorbed more closely and strongly with a smaller ionic radius and a greater valence.The ion exchange mechanism with Ca 2+ depends on the functional groups of BC and also the metal ion sizes.In contrast, the higher the hydration of an ion, the farther it is from BC adsorption surface, and the weaker its affinity [39].
The removal of F − ion by BC adsorption was due to ion-exchange with OH − functional group in the structure to form the fluorapatite (Ca₁₀(PO₄)₆F₂), which is enhanced by the large surface area and pore volume [14].As part of the ion exchange process, anions on the surface of the BC adsorbent are replaced with fluoride ions in liquid solutions.The anions on the BC adsorbent surface can be replaced with fluoride ions include OH¯and CO 3 ion exchange between OH − and F − is most reported mechanism of removal.This affirms the observation reported in the article by Abe and co-workers [19], that fluoride ions adsorbed onto BC are chemical in nature, since the amount of fluoride ions adsorbed increases with increasing temperature and decreasing pH.Recently, a similar observation was reported by Cruz-Briano and colleagues [42].As a result, the adsorption of fluoride ion onto BC is endothermic.Similarly, the electrostatic interactions between the BC surface and the F -ions play a crucial role in the removal process depending on the pH induced surface charges (positively charged surface will increase affinity for F -) [13].Electrostatic interaction, however, is strictly associated to the BC's surface charge, which is influenced by BC adsorbent's point of zero charge (pH PZC ) and solution pH value.The BC adsorbent becomes protonated and positively charged when the pH value falls below its PZC, and the deprotonated and negatively charged when pH is above PZC.In a nutshell, electrostatic charges transmitted by ionised pollutant molecules are influenced by the pH of the medium.Considering that BC has both basic and acidic sites, it would be charged when placed in an aqueous solution, allowing the ions to interact with the surface functional groups.On the other hand, methylene blue removal by BC decreased with increasing temperature, indicating that the adsorption process is exothermic [33].This process involves cationic/anionic contaminants attracted to BC surface charges (-/+ve charges), leading to their adsorption.It has been reported that the presence of chloride ion increased the rate of fluoride ion uptake by BC, which was investigated by adding sodium chloride into solution to analyse the effects of another anion on the adsorption of the fluoride ion onto BC [19].Thus, BC with acidic pH would support the adsorption of anions.However, dehydroxylation of the hydroxyapatite component influences ligand exchange in the BC adsorption mechanism [60].This suggests that the adsorption of fluoride on BC occurs mostly due to electrostatic interactions between positively charged sites and fluoride ions, but chemisorption can also occur.As determined from X-ray photoemission spectra (XPS), fluoride bonded to calcium to form fluorite (CaF 2 ), while hydroxyapatite resulted in fluorapatite (Ca 10 (PO 4 ) 6 F 2 ) [13,21].Following equations can be used to describe fluoride's chemisorption on hydroxyapatite of BC [21].Hence, Ca phosphate has been shown in the bones to perform two important functions: adsorption and/or ion exchange, as shown in the reaction Schemes in equations 1-3.
; Ca 10 (PO 4 ) 6 (OH) 2 + 2F ; Ca 10 (PO 4 ) 6 F 2 + 2OH (2) Based on this mechanism, metal ions (where; M 2+ = Cd 2+ , Mn 2+ , Fe 2+ , Co 2+ , Ni 2+ , Cu 2+ , etc.) are adsorbed on the hydroxyapatite of BC surface via an ion exchange mechanism with Ca 2+ ions leading to the formation of a new structure [M x Ca (10−x) (PO 4 ) 6 (OH) 2 ], where x is the adsorbed heavy metal [10].Adsorption of metal ions on BC surfaces may be enhanced by adjusting pH to induce electrostatic attraction between metal ions in wastewater and the charge on BC surfaces.Therefore, ion exchange and electrostatic attraction are key mechanisms responsible for metal ion adsorption on BC.The adsorption mechanism of various pollutants removal from wastewater or soil remediation using BC can be summarised as a combination two or more of these surface physiosorption and chemisorption, ionic exchange, adsorption and precipitation, electrostatic interaction, chemical complex, cation-π bonds, co-precipitation, and the formation of organo-metallic complexes and precipitates [7].In Lewis acid-base interaction mechanism, bases donate pairs of electrons while acids accept, noting that metal ions are Lewis acids (e.g.Ca 2+ ) and anionic pollutants such as OH -, F -are Lewis bases.Based on this mechanism, the acids will react with bases to share electrons (removing the pollutant), resulting in no change in oxidation number.By protonation and deprotonation reactions occurring on the surface of BC, the adsorption sites of hydroxyapatite are formed that adsorb cations and anions pollutants [61,62].Protonation of surface functional groups such as carbonate, phosphate, hydroxyl, etc. will take place at solution pH levels below the point of zero charge (pH PZC ) of BC [62].This results in a positive surface charge on the BC.Thus, the BC having basic pH would favour the adsorption of positively charged pollutants and vice versa.Consequently, hydrogen bonds and π-π electron donor-acceptor interactions have been proposed as the mechanism between toluene and BC materials during adsorption for its removal [31].As a result of the functional groups and specific ligands on BC surfaces, a diversity of metals can interact with them to form their corresponding complex solid mineral phases.So, designing BC adsorbents and adsorption systems will require a thorough understanding of their adsorption mechanisms.Therefore, modelling the adsorption equilibrium data and characterisation of BC adsorbent before and after adsorption would be an excellent means of obtaining the adsorption mechanisms.The common BC adsorption mechanisms are summarised in Figure 5.Among the adsorption mechanisms, chemical adsorption involves the formation of chemical bonds and monolayer of adsorbed molecules (i.e.there is no further adsorption at a site once an adsorbate molecule occupies it), physical adsorption involves van der Waals forces, multi-layers and interaction of adsorbed molecules, electrostatic interaction based on the electrical force between two (dis) similarly charged ions, and ion-exchange involves exchange of ionisable cations between the BC and the contaminant (Figure 5).Under the influence of van der Waals forces, comtaminant molecules attach to the BC adsorbent surface in a process called physisorption.Dipoledipole interactions are the main component of van der Waals forces.In electrostatic removal mechanisms, depending on the BC surface charge positively and negatively and the pH levels, the anionic functional groups in solution interact with the surfaces.Also, a significant role in the removal process is played by electrostatic interaction between the metal ions and BC surface depending on the medium pH.As the pH of the solution changes, the isoelectric point of the BC surface changes, which affects its electrical attraction to contaminant species [18].Conversely, the exchange of cations presents on BC surfaces, in ion exchange mechanisms depend on the functional groups of BC as well as the metal ion sizes.Several transition metals and metallic ions pollutant possess a strong binding ability to these exchange sites.By modelling the equilibrium adsorption data with appropriate isotherm equation, analysing the BC adsorbent before and after adsorption, applying molecular dynamics, and calculating density functional theory (DFT), the adsorption mechanisms can be determined [18].The most convenient and widely used method for modelling adsorption data is fitting of isotherm models.
Kinetic models
The kinetics of adsorption depicts the rate of contaminant uptake on the BC adsorbent.An adsorbent design and control depend heavily on understanding the dynamic behaviour of the adsorption system.In order to apply adsorption by BCs to industrial scales, it is imperative to study the rate at which pollutants are removed from aqueous solutions.Through the use of kinetic models, chemical reactions, diffusion control, and mass transfer mechanisms can all been explored during BC adsorption process [29].To accurately evaluate contaminants' adsorption rates, kinetic models are essential.Adsorption processes are commonly analysed using experimental kinetic data to determine the effect of the external film boundary layer, internal diffusion resistance and adsorbent surface sorption.
The percentage removal (%R) of the contaminants by the BC can be calculated using equation (4).Whereas the amount of contaminant adsorbed per unit mass of the adsorbent, q e (mg/g), can be estimated using equation 5.
Where; C 0 denotes initial concentration of contaminant (ppm), C e is the equilibrium concentration of contaminant (ppm), V is the volume of contaminant solution (L) and m is the mass of adsorbent (g).
The Ritchie rate equation ( 6) is defined when a number of surface sites of the adsorbent, n, are occupied by each contaminant.Form this generic equation, the Lagergren pseudo-first-order kinetic model and the second-order kinetic model can be predicted.
Where; θ = q t /q e .q e is the amount of the contaminants adsorbed at equilibrium per unit weight of the adsorbent (mg•g -1 ); q t is the amount of contaminants adsorbed at any time, t (mg•g -1 ).
Prior to adsorption, the surface coverage of the adsorbent is assumed to be zero (θ 0 = 0).However, adsorption that is preceded by diffusion through a boundary, when θ 0 = 0 and t 0 = 0, the kinetics is then described by equation ( 8) for the pseudo-firstorder equation first proposed by Lagergren in 1898.
The Lagergren rate constant, K 1 , is the slope of the plot of ln (q eq t ) against t (time) as shown in equation (8).Even if this plot is highly correlated with the experimentally obtained adsorption data, if the intercept does not equal the natural logarithm of the equilibrium adsorption of metal ions, the adsorption is unlikely to be first-order.In such occasion, the pseudo-second-order model mechanism given in equation ( 9) should be evaluated with the experimentally obtained adsorption data.
Where; h 0 = K 2 q 2 e h 0 is the initial adsorption rate.If the second-order kinetics is applicable, then the plot of t/q t against t in equation ( 10) should give a linear relationship from which the constants q e and h 0 can be determined.It also suggests several mechanisms are involved in the adsorption process.However, in these kinetic models, the adsorbed amount q e changes with temperature (i.e. a thermodynamic equilibrium quantity), so the temperature dependence of the rate constant needs to be accounted for in the models.Hence, the activation energy can also be estimated with the Arrhenius equation and rate constants for various temperatures.Using the Arrhenius equation (11), the activation energy (E a ) and pre-exponential factor (A) of the adsorption process can be determined numerically.Other adsorption kinetics models used to study BC adsorption kinetic include Ritch-second-order, Elovich equation, and intra-particle diffusion model [63], as shown in Table 2.
Where; K ads denotes adsorption rate constant, T is temperature (K), and R constant.The plot of ln K ads Table 2. Adsorption kinetic models.
Adsorption kinetic model Equation Remark
Intra-particle diffusion model q t = k p t 0.5 + C q t is plotted against t 0.5 to obtain a straight line, which does not necessarily pass through the origin.k p denotes intra-particle diffusion rate constant (g/mg min) and the intercept of the plot, C, reflects the boundary layer effect or surface adsorption.The larger the intercept, the greater the contribution of the surface adsorption in the rate-limiting step.The resistance to film diffusion increases as the intercept increases.However, if intra-particle diffusion is rate-limited, then plots of pollutant uptake q t vs. t 0.5 would result in a linear relationship.
1.The first step involves instantaneous adsorption or external surface adsorption.2. In the second step, film/intra-particle diffusion controlled adsorption.3. A slow adsorption rate occurs at the final equilibrium step, when the solute ions move slowly from large pores to micropores. 4. If the plot of pollutant uptake q t vs. t 0.5 passes through the origin, then intra-particle diffusion is the sole ratelimiting step [63].Elovich model dq t dt = a exp(−bq t ) or where a, b denotes constants of experimental data.The constant a can be regarded as the initial rate since dq t /dt ≈ a as q ≈ 0. Whereas, b is related to the extent of surface coverage and activation energy for chemisorption (g/mg).The assumption of t >> t 0 and validity of the model is checked by the linear plot of q t vs. ln (t).
1.It is often used in adsorption kinetics to describe how chemicals adsorb (chemisorption) on adsorbent.2. As the surface coverage increases, the chemisorption of pollutants on BC surfaces may decrease without any desorption of products.3. System-compatible with heterogeneous surfaces for adsorption.
Ritchie model q e q e − q t = at + 1 Note in most cases the intercept is slightly greater than 1.This is due to: (1) adsorption start time may not coincide with the time the pollution was adsorbed, or (2) the chemisorption occur rapidly initially on favour sites on the surface of the adsorbent.
1.This is an alternative to the Elovich equation for chemisorption of pollutant on n surface sites.2. The rate of adsorption is dependent on the fraction of sites unoccupied at any time.
versus 1/T is linear from which E a and A would be determined.
There are, however, some issues with the application of pseudo-first-order as well as pseudosecond-order kinetic models.Firstly, both models lack specific physical meanings and are empirical models [64].It is therefore challenging to establish the mass transport mechanisms by these empirical kinetic models.Several adsorption kinetic models, their physical meanings, applications, and methods of solving them were presented by Wang and Guo [64].
Isotherm models
An adsorption isotherm model describes the relationship between the equilibrium concentrations of contaminants in the liquid-phase and the equilibrium amount of adsorption in the solid-phase at a given temperature [18].When adsorption and desorption processes reach equilibrium, this is called adsorption equilibrium.In a nutshell, the amount of adsorbed contaminants from the solution equals the amount of desorbed contaminants from the adsorbent.Adsorption isotherms can be used to establish the appropriate correlation for equilibrium curves to optimise sorption system design [25].In general, equilibrium data are correlated using suitable theoretical or empirical isotherm models showing the solid-phase concentration plotted against the solution-phase concentration [65].There are many mathematical forms of adsorption isotherm models, some based on simplified physical descriptions of adsorption and desorption, while others are empirical correlations.
The isotherm models of Langmuir and Freundlich were used to fit the experimental adsorption equilibrium data of pollutants on BC.According to the Langmuir model, adsorption takes place in homogeneous monolayers in which the adsorbate is attracted to all sites equally.In contrast, BC primarily contains carbon and hydroxyapatite distributed randomly on its surface, which violates Langmuir's assumption of homogeneity [39].Whereas, Freundlich isotherm, adsorption occurs on heterogeneous surfaces of adsorbent as multilayer adsorption process.Based on Freundlich's isotherm model, the active sites and energy of a heterogeneous surface are distributed exponentially.
Figure 6 shows the interactions between adsorbent surface and contaminants.Generally, adsorption mechanisms include chemical adsorption which involves the formation of chemical bonds between the contaminant and adsorbent, physical adsorption due to van der Waals attraction, and ion exchange (Figure 6b).The adsorption data can be described by isotherm models, which are critical for designing an adsorption system.Furthermore, models of adsorption isotherms help predict the mechanisms of adsorption and estimate the maximum adsorption capacity, which is important in evaluating adsorbent performance.The adsorption equilibrium model commonly used is the Langmuir-Freundlich (Sips isotherm) equation 12, which assumes that the maximum adsorption occurs when a single layer of contaminant covers the surface [66].Based on the assumption that the contaminant species occupy n sites of the adsorbent (Figure 6a), adsorption rate is proportional to naked surface (1−θ), while desorption rate is proportional to covered surface (θ).The rate equation can be derived by merging the forward adsorption rate step and the reverse desorption step as follows.
The Redlich-Peterson and Sips adsorption isotherm models combine Langmuir and Freundlich models with three parameters.The parameters of the Sips isotherm model (equation 13) are controlled by temperature, pH, and change in concentration.Sips isotherm model follows Freundlich isotherms at low contaminant concentrations.At high concentrations, it exhibits the monolayer adsorption behaviour of the Langmuir model.Adsorption isotherm models must be fitted with experimental adsorption data to understand and predict adsorption behaviour.However, the difference between the Sips and the Langmuir isotherm equations is the heterogeneity factor n, generally less than 1 and when n is equal 1, the surface is homogeneous (Langmuir isotherm).This parameter, therefore, indicates how heterogeneous an adsorbent surface is based on its value, the smaller values are more heterogeneous.Based on the most widely used adsorption isotherm models, Table 3 summarises some isotherm models applicable to BC adsorbents for the removal of contaminants from wastewater.It is clear that Langmuir isotherm model described the equilibrium data for cations contaminants, while Freundlich isotherm model for anions such as fluoride and phosphate.This evidence demonstrated that the adsorption process of fluoride, phosphate and toluene was not just simple mono-layer adsorption, unlike metal ions which are mostly single-layer adsorption.Also, the pseudo-second-order rate equation describe the kinetics of metal ions adsorption on BC.To develop the specific model that describes experimental data regarding adsorption, Lagergren and Langmuir kinetic models are used as the basis.The Langmuir equation applies to adsorption with less than monolayer coverage, making it more suitable for chemisorption studies since it only involves monolayer coverage (i.e.only one molecule can be adsorbed at each site (Figure 5b)) and constant adsorption Table 3. Reported isotherm models relevant to BC adsorbents.
Adsorption isotherm models Remarks
Freundlich Isotherm (linear form) logq e = logK F + 1 n logC e K F denotes adsorption capacity (L/mg), C e the concentration of pollutant at equilibrium (mg/g), and 1/n is adsorption intensity, which provides information about the energy distribution and the heterogeneity of the adsorbent sites (1/n can range from 0 to 1).
1. Adsorbents' surfaces are heterogeneous, and the distribution of active sites and their energies is exponential.2. The Freundlich equation does not predict an adsorption maximum, which is one of its major limitations.Second, the equation has no theoretical basis, purely empirical one.3. The Freundlich isotherm model is used to describe the adsorption of molecules arranged in multilayers with interaction between them.Temkin Isotherm (linear form) b denotes Temkin constant relating to the heat of adsorption (J/mol), R gas constant, T temperature, and K T is Temkin isotherm constant (L/g) 1.According to the Temkin isotherm model, adsorption heat decreases linearly with increasing adsorbent coverage.2. Upon adsorption, binding energies are uniformly distributed up to a maximum.3. Multilayer adsorption ignores extremely small and extremely large concentration values, but considers interactions between the adsorbent and the contaminant.Langmuir isotherm (linear form) q m denotes the maximum monolayer adsorption capacity (mol/g), K L the Langmuir constant (L/mol), and q e and C e are the adsorption capacity (mol/g) and equilibrium concentration (mol/L).This model converts to Henry's model at very low concentrations (K L C e ≪ 1).
1.The Langmuir isotherm model assumes that adsorption occurs at homogeneous sites within an adsorbent (i.e.all sites are equal, resulting in equal adsorption energies).2. Adsorption proportional to the fraction of adsorbent surface which is expose, while desorption occurs as a function of the fraction of adsorbent surface which is covered by pollutants.3. Pollutants adhere to a specific site on the surface of the adsorbent, forming a single-layer/monolayer. 4. Adsorption occurs at constant energy, and pollutant molecules do not migrate or interact on the surface.Redlich-Peterson (R-P) isotherm (linear form) K R denotes R-P constant (L/g), a R the R-P constant (L/mg) and β the exponent which is the slope of the plot, value between 0 and 1.
1.It includes both the features of Langmuir and Freundlich isotherms to form an empirical adsorption isotherm of three parameters.2. Its versatility allows it to be used in either homogeneous or heterogeneous systems.3. Unlike ideal monolayer adsorption, the adsorption mechanism is mixed.
energy.Also, a molecule/contaminant that is adsorbed does not interact with another.However, physical adsorption does not only occur on monolayers, but can also occur on subsequent layers (Figure 5b).Thus, molecular interactions are possible due to the multilayered structure of adsorbed molecules/contaminants.As a result, several other adsorption isotherm models have been developed, detailed reviews of adsorption isotherm models have been reported in the literature review by Ayawei and co-workers [37] and also in recent times by Wang and Guo [18].The Langmuir isotherm model has been used to describe the removal of metal ions from solution via BC [10,26], while the Freundlich isotherm model for fluoride, toluene and phosphate removal [13,29,31].find the adsorption mechanism and the best fit model, all adsorption models should be applied to the contaminant adsorption on BC data set.Some of these isotherm models, such as Freundlich, Sips, Temkin, are empirical models without substantial theoretical support.Optimum isotherms are those that best fit the experimental data, with high coefficients of determination (R 2 ) or low values of other statistical parameters, such as nonlinear chi-squares (χ 2 ) and residual sums of squares errors (SSE).
Recently, Yang and co-workers [46], found that by modifying fish BC by chitosan-Fe 3 O 4 , the saturated adsorption capacity achieved for the chitosan-Fe 3 O 4 -BC to Cd(II) was 64.31 mg/g, which was 1.7 times than unmodified BC.This suggests that modifying BC with an appropriate dopant, the adsorption capacity and affinity can be enhanced significantly.This is consistent with previous study reported by Nigri and colleagues [67], they found that acid treatment and Al-doped cow BC exhibited highest sorption capacities for fluoride than their unmodified counterparts.The summary of variant contaminants, kinetic and isotherm models used to describe adsorption data by different BC source is shown in Table 4. Consequently, Langmuir and Elovich models accurately described experimental data of Remazol brilliant blue R (RBBR) adsorption onto BC, showing monolayer adsorption on homogeneous surfaces Pseudo-second-order rate equation and Langmuir fit.
1. Due to size and pH conditions, Cu 2+ adsorb the most on cow BC.
1.The adsorption capacity is a function of flow rate, bed height and adsorption cycle times.2. The transport behaviour of As(V) in BC can be modelled by convection dispersion equation.[24] Cattle bones F¯Freundlich isotherm model fit.
1.A pH decrease from 12 to 3 dramatically increased the adsorption capacity of BC. 2. BC has a fluoride adsorption capacity that is 1.3 times lower compared to IRA-410 polymeric resin and 2.8 times greater compared to Alcoa F-1 activated alumina.[13,21] Camel bone Hg(II) Langmuir isotherm model fit.
Based on batch kinetics, a rapid uptake was observed during the first five minutes, followed by a slow diffusion process within the particles. [69] Fish bone Cd 2+ Pseudo-second order kinetics model and the Langmuir model fit.
Based on the adsorption results, it appears that chemical adsorption controls the adsorption rate, and that monolayer adsorption is the dominant one. [46] Cow F¯Freundlich isotherm slightly better than Langmuir model fit.
Fluoride sorption capacity was unaffected by the presence of chloride, sulfate, and nitrate ions in solution, the presence of carbonate or bicarbonate ions negatively affected it. [67] Sheep bone P Pseudo second-order kinetic and Freundlich isotherm models fit.
1. BC was effective in removing phosphate from aqueous solutions, especially at concentrations between 2 and 100 mg/L.2. The optimum pH is 4, and higher pH decreased the adsorption rate significantly.As temperature increases from 20°C to 40°C sorption capacity increase, illustrating endothermic adsorption. [29] Bovine bone Zn 2+ , Cd 2+ , and Ni 2+ Pseudo-second order kinetic equation and Sips isotherm model fit.
1.The removal performance followed the trend Cd 2+ > Zn 2+ > Ni 2+ .2. The ion-exchange between the calcium, from the hydroxyapatite structure of BC, and the metal ions in solution played an important role in the adsorption process. [10] Bovine bone Volatile organic compounds (VOCs) Pseudo second-order kinetic and Freundlich isotherm models fit.
As a result of the modification of H 3 PO 4 , there was an effective acceleration of the adsorption process after activation of K 2 CO 3 . [31] (20.6 mg/g) with chemisorption [68].In a study of cobalt sorption onto BC, the Freundlich isotherm model proved to be an accurate description of the equilibrium adsorption data [69].It was found that ion exchange with Ca 2+ in the hydroxyapatite of the BC is the main mechanism of Co 2+ removal and multilayer coverage on the heterogeneous surface.
In another study, an ox BC was activated and functionalised with magnetite nanoparticles to remove reactive 5G blue dye via adsorption [70].It was found that the functionalised BC adsorbent followed pseudo-first-order kinetics, with equilibrium experimental data fitting better to the Langmuir model, with q max of 91.58 mg/g controlled by both intraparticle diffusion and surface diffusion.Furthermore, a bone char (surface area 114.15 m 2 /g) produced via pyrolysis of cattle bones which was surface treated with acetone was investigated for the adsorption of 17β-estradiol from aqueous solutions [71].The result showed that the maximum adsorption capacity of 17β-estradiol on BC was 10.12 mg/g and uptake can be predicted by the pseudo second order kinetic equation, while intraparticle diffusion was the rate limiting step.Both Langmuir and Freundlich isotherm models fitted the experimental adsorption equilibrium data.With the help of an ethanol-water solution, the saturated BC was regenerated and adsorption capacity was restored to 85% of its initial capacity in the third cycle.Using a batch agitation system, Cu(II) and Zn (II) ions sorption were studied on BC (surface area 100 m 2 /g and pore volume 0.225 cm 3 /g) around pH 5 [72].The uptake mechanisms are by means of adsorption and ion exchange (the chemical formula after ion exchange is Ca 10−x M x (PO 4 ) 6 (OH) 2 , where M denotes the divalent metal ions Cu 2+ or Zn 2+ ) from the solutions.It was shown that the metal ions sorption rates are primarily controlled by pore diffusion, and a numerical film-pore diffusion model was developed to describe this process.In the study, Langmuir's model and a film-pore diffusion mass transport model were used to correlate experimental data from adsorption isotherms and batch kinetics.There was an excellent Freundlich isotherm model for the adsorption of methylene blue [33].
correlation between the theoretical model and the experimental data.
The BC adsorbent material is an environmentally friendly, non-toxic, affordable, and straightforward alternative to chemically synthesised or modified adsorbent materials.BC uses its distinctive and abundant ionic polarity sites such as Ca 2+ , PO 4 3− , CO 3 2− , OH − , etc. to adsorb pollutants.Using cationic Malachite Green (MG) dye and anionic Sunset Yellow (SY) dye, Li et al. [73] investigated the adsorption and desorption behaviours of bovine rib BC.According to the results, MG adsorption increases as pH increases, but SY adsorption decreases with pH, with optimum pH values of 7 and 3, respectively.It was established that the adsorption mechanism of ionic dyes on BC is mainly determined by the interactions between pore filling, electrostatic interaction, chemical bond formation, and ion exchange.Recently, the application of calf BC to remove humic acid from water was reported [74].According to the results, prepared calf BC with a surface area of 112 m 2 /g shows an adsorption capacity of 38.08 mg/g (HA = 20 mg/L, pH = 4.0).Data from the adsorption experiment fit well with the pseudo-second-order model and Langmuir isotherm showing that monolayers are formed during adsorption.To recycle the humic acid saturated calf BC, a significant regeneration was achieved with the aid of NaOH treatment.Since BC can be regarded as an adsorbent with a complex chemical structure constituted of 9%-11% of amorphous carbon-phase, 7%-9% of calcite and 70%-76% of hydroxyapatite (i.e.Ca 10 (PO 4 ) 6 (OH) 2 ), it could be anticipated that the functional groups of both organic and inorganic phases may well interact with pollutants [60].BC prepared by pyrolysis was evaluated for the adsorption of toxic dental clinic pollutants such as fluoride, mercury and arsenic as the target pollutants [60].It was found that the degree of hydroxyapatite dehydroxylation affected the adsorption mechanism through ligand exchanges involved in the removal of these pollutants by BC.Adsorption of fluoride and mercury on the surface of the BC was observed to be an endothermic multimolecular process, while that of arsenic on was monolayer interaction with the adsorption sites.Results showed that dehydroxylation of hydroxyapatite and decomposition of BC carbonates resulted in a significant reduction of the metal ions uptake of this adsorbent at carbonisation temperatures greater than 700°C.In metal ions adsorption with BC, metal-oxygen interactions from hydroxyapatite play a relevant role in an ion-exchange process.For understanding the adsorption mechanism in a BC system and predicting its behaviour, the experimental adsorption data must be fitted with the appropriate adsorption isotherm model.In recent time, nitrogen-functionalised BC was synthesised using pristine bone char and ammonia hydroxide for surface modification [46] The nitrogen-functionalised BC was applied to study the adsorption of multiple aquatic pollutants such as representative heavy metal Cr(VI), nuclide U(VI), and methylene blue.The results showed that nitrogen-functionalised BC exhibited excellent maximum adsorption capacities for heavy metal Cr(VI) (339.8 mg/g), nuclide U(VI) (466.5 mg/ g), and methylene blue (338.3mg/g) at pH 5.0 and temperature 293 K, demonstrating a significant potential in the adsorption of multiple types of pollutants.Analysing multiple spectroscopic technologies, it was discovered that electrostatic attraction, surface complexation, precipitation, cation exchange, and cation-π and π-π interactions are the main adsorption mechanisms.As a result, the functionalization of BCs will enhance their performance and multifunctionality in removing pollutants.
Figure 7 shows typical adsorption data and isotherm models for metal ions and methylene blue.The uptake of metal ions increases with time, reaches a maximum after 20 min and equilibrated in 60 min.However, the order of affinity for BC and adsorption can be summarised as thus: Cu 2+ > Ni 2+ > Fe 2+ > Mn 2+ .The reported amounts adsorbed at equilibrium (q e ) by the BC are as follows: 29.56 mg/g (Mn 2+ ), 31.43 mg/g (Fe 2+ ), 32.54 mg/g (Ni 2+ ) and 35.44 mg/g (Cu 2+ ) [26].This can be attributed to the size of the cations and the pore size distribution within the BC in addition to the ion exchange mechanism taking place.The maximum removal in terms of percentage was between 75% and 98% from Mn to Cu.As the hydrated ionic size decreases, metal ions are easily adsorbed (i.e.Cu 2+ < Ni 2+ < Fe 2+ < Mn 2+ ).This is because as the ionic size becomes smaller, the metal ion diffuses easily into the micropores of the BC.The pseudo-second-order kinetic equation and Langmuir isotherm model (R 2 range from 0.9987 to 0.9999) describe better the BC experimental adsorption data for these metal ions.In other words, the plateau values seen in Figure 7b represent monolayer coverage on the surface of the BC.Thus, carbon is well distributed throughout the porous hydroxyapatite structure as a uniform thin layer on the BC.Consequently, during the first few minutes (10 min), methylene blue rapidly adsorbs on BC, but from macropores to micropores, the diffusion path decreases within the narrow pore size during the second stage of slow adsorption.
It is evident from the plot of q vs. t 0.5 of the intraparticle diffusion model for intraparticle region, showed that methylene blue adsorption does not pass through the origin, implying the rate-limiting step is a combination of intraparticle diffusion and the boundary layer effect due to external diffusion from bulk solution to the BC surface [33].Since BC has a predominantly mesoporous structure, intraparticle diffusion was be the rate-limiting step during adsorption methylene blue.The results show that the pseudo-second-order kinetic model and Freundlich.This is expected as the BC surface is highly heterogeneous, the adsorption isotherm demonstrates a multi-layer adsorption methylene blue.Changing adsorption temperatures from 273 to 313 K results in a decrease in methylene blue, which suggests an exothermic process.However, sodium hydroxide can be used to regenerate BC [22].
Error analysis between experimental data and isotherm model
The optimal analysis of adsorption data requires error functions.By minimising the difference between experimental data points and the predictions of the model equations, the parameters of the kinetic and isotherm models can be determined.According to studies, linearising adsorption isotherms usually changes the error structure of adsorption data [75].When nonlinear equations are transformed into linear forms, their error structure is altered, and the standard least-squares assumptions of error variance and normality are violated.In the case of an isotherm equations with three or more parameters, a simple linear analysis cannot be performed.Therefore, nonlinear optimisation has been found to be the most effective method for arriving at the best isotherm equation [76,77].Hence, the purpose of nonlinear regression is to minimise (or maximise) the error between adsorption data and predicted isotherms using convergence criteria as an alternative to linear regression.The error analysis involves the application of statistical methods to evaluate the difference between experimental data and the isotherm predicted values.By comparing the squared errors, the best-fit isotherm model can be identified, which is the model that produces the minimal error.One of the objective functions used in the minimisation scheme has been the sum of the errors squared (SSE).The square of the errors increases as the concentration in the liquid phase increases, which is one of the bottlenecks of SSE function.So, a better fit is observed at high concentration.Others include Root Mean Square Error (RMSE), Hybrid fractional error function (HYBRID), Average Relative error (ARE), Marquardt's percent standard deviation (MPSD), Nonlinear chi-square test (χ) and Coefficient of determination (R 2 ).
where q Exp denotes the experimental adsorption capacity data, q Cal the theoretical adsorption kinetic/ isotherm model and N number of data points.
Hybrid fractional error function (HYBRID) was developed in order to improve the sum of the squares of the errors at lower liquid-phase concentrations.It therefore improves the fit at low concentration values compare to SSE method.The method includes number of degrees of freedom, which is N -P as a divisor.
This function includes the number of data points (N ), minus the number of parameters (P) in isotherm equation.
In the average relative error (ARE) formula, the fractional error distribution across all independent variables is minimised: Nonlinear chi-square test (χ): The coefficient of determination (R 2 ), the best fit is the isotherm model which R 2 is close to or approximately 1.
q Exp denotes measured experimental ion concentration; q cal the calculated ion concentration with isother models; N the number of experimental data points; and P the number of parameters in each isotherm model.
The isotherm parameters are likely to be different for each error function.Hence, the choice of error function can affect the derived isotherm parameters.The parameters of the isotherm models can be obtained by minimising the error functions.The minimisation of the difference between experimental measurements and the model calculated values can be performed using the solver add-in within Microsoft Excel, Python software, MATLAB or any other suitable data analysis software.In order to obtain the parameters involved in the isotherms, as well as the optimum isotherm, non-linear regression was found to be a more effective method than linear regression [77].MPSD was found to be the best error function in minimising the error distribution between experimental equilibrium data and predicted isotherms for two parameter isotherms, whereas R 2 was found to be the best error function for three parameter isotherms for the sorption of basic red 9 by activated carbon.As well as the size of the error function, experimental data should also be used to authenticate the theory behind the predicted isotherm.
Performance of BC in a packed-bed column
BC adsorption studies tend to be limited to batch studies or equilibrium experiments, which may make it difficult to extend these works to large-scale applications.In contrast to batch system, there are several advantages to using a packed-bed column adsorption system, including the ability to contact the BC adsorbent effectively with the fluid to be treated, the ability to treat large volumes in a short period of time (i.e.large-scale operation), the ability to regenerate the BC adsorbent in the same column, often more economical and effective, and the ability to achieve a high adsorption rate due to constant contact between the BC adsorbent and fresh fluid [78,79].Unlike batch adsorption system in which mixing prevalent, for packed bed system mass transport from the bulk fluid to the external surface of the BC adsorbent is generally slow.A packed-bed adsorption column's performance is determined by its breakthrough curve, which is the effluent concentration profile versus time (for a constant flow rate).The breakthrough curve can be obtained either by direct experimentation or by mathematical modelling for any given adsorption system.Generally, adsorbate concentrations leaving the packed-bed column are deemed to have reached the breakthrough point when the concentration reaches 5% of the initial concentration [79].Mathematical models such as empty bed contact time (EBCT), Bed Depth Service Time (BDST), Adams-Bohart (AB), Thomas (Th), Dose-Response (DR), and Yoon-Nelson (YN) can be applied to study the breakthrough curve behaviour of a packed-bed adsorption column.Due to the simplifications involved in these models and their semiempirical or empirical nature, the models do not provide full information regarding the design parameters.Therefore, mass transfer models derived from fundamental principles would allow for the simulation of breakthrough curves.Under varied experimental conditions such as influent concentration, inlet flow rate, pH, and bed height, the mass transport (MT) model was evaluated and compared to mathematical models, such as BDST, AB, Th, DR, and YN, to simulate the breakthrough curves of Hg (II) adsorption in a packed-bed adsorption column with ostrich BC [40].The results showed that the MT model produced the highest accuracy (R 2 = 99.31%,mean residual error (MRE) = 0.745% and normalised root means square error (NRMSE) = 6.15%), and the mathematical models performance can be summarised in the following order: Th > BDST > YN > DR > AB.It was found that maximum adsorption capacity was more sensitive to simulated breakthrough curves than apparent equilibrium constant and axial dispersion coefficient.
Pilot-and bench-scale fixed-bed adsorption systems were used to compare BC with commercial activated alumina for treating groundwater with 8.5 mg/L naturally occurring fluoride concentration [22].The results showed that both BC and activated alumina removed fluoride to below 0.1 mg/L.But at the pilot scale for BC and activated alumina, it was found that the fluoride breakthrough was reached within 450 bed volumes (3.1 days) and 650 bed volumes (4.5 days), respectively, when an empty bed contact time (EBCT) of 10 min was employed.This suggests that for industrial scale application using packed beds, BC can perform comparatively as activated alumina if proper strategy for improving its adsorption capacity is applied.On the other hand, it was discovered that fluoride concentrations were higher on BC than activated alumina per square metre of adsorbent surface area, suggesting that maximising BC surface area may enhance adsorption capacity.Methods of improving the surface area of the BC have been reported under the section titled strategies for enhancing adsorption capacity (Section 7).Mesquita et al. [79] reported a fixed-bed adsorption column with BC adsorbent to selectively and partly remove refractory organics, a complex mixture of long-chain hydrocarbons, aromatic compounds, carboxylic acids, amines and amides from electrodialysis concentrate effluent.The result showed that the maximum adsorption capacity increased with the increase in bed depth and reduction in flow rate.The bed depth service time (BDST) model predicted the breakthrough time satisfactorily (deviation near 10%) for C/C 0 of 0.55, 0.60, and 0.65, which can be considered very acceptable.It was also found that the C/C 0 ratios of 0.55, 0.60, and 0.65 could be scaled up to provide a removal efficiency of 45% in 16 days.Recently, Backward Bayesian multiple linear regression (BBMLR) was applied to study the adsorption efficiency of Cd(II) ions by ostrich BC in a packed-bed adsorption column based on the following operational variables consisting of pH, inlet Cd(II) concentration, bed height and feed flow rate [80].The performance of the BBMLR was evaluated using the coefficient of determination (R 2 ), NRMSE and MRE.Although the BBMLR model was more sensitive to pH, bed height, and flow rate, it showed excellent performance of NRMSE 6.69% for predicting Cd(II) removal in fixed-bed adsorption systems.BC's adsorption capacity increased as column height increased and decreased as flow rate and initial concentration increased.In another study, the Thomas model (equation 14) and Dose-Response model (equation 15) were used to evaluated the experimental test data, predict the breakthrough curve, and the mechanism of Pb(II) adsorption on BC in a fixed bed column [81].It was found that the breakthrough curve predicted by the Dose-Response model fit the experimental results well and was significantly better than the predicted results of the Thomas model.At an initial Pb(II) concentration of 200 mg/L, an inlet flow rate of 4 mL/min, and a column height of 30 cm, the maximum adsorption capacity was 38.466 mg/g, and a saturation rate of 95.8% was achieved.As a potential adsorbent, BC can be used to address the problem of water pollution with a packed-bed column on a large scale.
where k Th denotes Thomas rate constant (mL/mg min); q 0 the maximum concentration of the solute in the solid phase (i.e. the adsorption capacity of the adsorbent (mg/g)); m the mass of the adsorbent (g); F the flow rate (mL/min); C 0 the initial concentration (mg/L); and t the time.In the Thomas model, reaction kinetics and Langmuir isotherms are hypothesised, but sorption is controlled by interface mass transfer.
When b = q 0 m/C 0 , equation 15 becomes equation 16: Where; a and b are the model parameters and V is the solution volume.
Bed Depth Service Time (BDST) model (equation 17) Based on surface chemical reaction theory, the BDST model assumes negligible intraparticle diffusion and external mass transfer resistance.
where N 0 denotes the maximum adsorption capacity (mg/L), U 0 the linear flow velocity (cm/min), and K BD the adsorption rate constant (L/(mg min)).
Strategies for enhancing adsorption capacity of BC
Bone char (BC) can be viewed as a composite of two types of adsorbents (i.e.hydroxyapatite and carbon).
It is therefore likely that BC's performance and adsorption capacity will depend on the proportion of total surface provided by each component.There is no doubt that carbon content which is determined by the charring temperature and condition has an impact on BC's performance.Analysis of data from these studies show that BC adsorbents' capacity to remove pollutants is closely related to their physicochemical properties.BC adsorbent morphology and textural attributes such as particle size, pore size distribution, point of zero charge (pH PZC ), and specific surface area play a significant role in its performance.Therefore, improving the microstructure, controlling crystallinity, active site (acid/basic sites) and modification of composition will enhance adsorption capacity and performance.Charring temperature affects BC microstructural features, such as pore size distribution and specific surface area.Figure 8 shows the four categories of strategies that can be used to improve the adsorption capacity of BC.Porosity development is expected to be affected by temperature of pyrolysis/gasification of bone constituents.
A study of the effects of carbon distribution on hydroxyapatite on BC adsorption is required.It is also possible to increase specific surface area and enhance porous structures by using chemical or physical activation.The physical activation process is carried out at high temperatures (about 600°C) by treating the produced BC in the presence of oxidising gases such as steam, CO 2 , and air [43], in order to increase the surface area and pore size distribution.But in chemical activation the bones are treated with acid or alkali prior to or during carbonisation using acid, alkaline or organic solvent.The sorption capacity of bone char can also be enhanced by chemically modifying its surface.Consequently, it is possible to modify a BC's surface properties to achieve specific objectives using surface functionalization.Literature reports have found that hydroxyl and carboxyl groups on the surface of acetic acid-modified BC obtained from cattle and sheep bones effectively adsorb formaldehyde from air polluted with formaldehyde [16].Based on a study, a BC coated TiO 2 can be applied for photocatalytic activity using salicylic acid as a model water pollutant, giving results that are comparable to suspended TiO 2 nanoparticles [ 82].The results of this study suggest that BC can be used as a green, effective, cheap, and regenerative adsorbent to support photocatalysts for industrial wastewater pollutants degradation.Composition modification can be achieved through impregnation of active metal or incorporation of suitable dopant.BCs for fluoride adsorption from drinking water reportedly synthetised through metallic doping using aluminium and iron salts [83].It was reported that when aluminium sulfate was used to modify the composition and surface of BC, fluoride adsorption was enhanced by 600% and adsorption capacity of 31 mg/g.There is a possibility that these surface interactions could involve an ion exchange between the fluoride ion (F − ) and the OH − from Al-OH and Ca-OH bonds during water defluoridation.Furthermore, cerium species have been used to modify the surface chemistry of BC and its application as an adsorbent for fluoride investigated [84].The result showed that the incorporation of cerium (Ce 4+ ) enhanced fluoride adsorption properties of BC up to 13.6 mg/g.Adsorption of the Direct Brown 166 dye (DB-166) from aqueous solutions was investigated using a natural chitosan/BC composite [85].The maximum adsorption efficiency and capacity were found as thus 99.8% and 21.18 mg/g, respectively.The results indicate that impregnation and incorporation of dopants can modify BC's composition and surface chemistry.According to one study, nano-sized manganese modified BC surfaces provide 78 times higher adsorption capacity (maximum adsorption capacity 9.46 mg/g) for As(V) removal compared to uncoated BC, and the effectiveness of the removal increases linearly as manganese concentration increases from 0.025 to 14.5 mg/g [86].Furthermore, microcrystalline cellulose-modified bone char (MCC-BC) and BC were investigated as adsorbents for the removal of Pb (II) from solution, and it was found that BC had a maximum adsorption capacity of 89.9 mg/g, while MCC-BC had 115.7 mg/g [87].According to the results, the MCC-BC has a higher adsorption capacity, a shorter adsorption time, and a better adsorption and reusability performance than BC (without surface modification).The mechanisms of removal include chemical precipitation of Ca x- Pb (10−x) (PO 4 ) 6 (OH) 2 by ion exchange of Ca (II) and Pb (II), accompanied by the coordination of Pb (II) with hydroxyl groups on the BC surface, which fitted Langmuir model and single-layer chemical adsorption.Also, composites containing BC from cattle, Fe 3 O 4 nanoparticles (<50 nm) and chitosan biopolymer were reported to be effective in adsorbing As (V) in aqueous solutions at pH 2-11 [88].According to this study, the mechanism of adsorption involves electrostatic interaction between negatively charged As(V) species and positively charged Fe 3 O 4 − nanocomposite BC surface.On the other hand, BC produced by paralysing from 350°C to 700°C demonstrated a lower temperature causes a higher fluoride adsorption capacity [89].The dehydroxylation of the hydroxyapatite in BC is thought to be the main reason for the inverse correlation between fluoride adsorption and pyrolysis temperature.However, bones with charring temperatures below 300°C retain some organic matter, and also the specific surface area and pore structure of the produced BC is not well developed, as shown in Figure 1 [90].The physicochemical properties and mesoporous structure of BC can be improved through chemical activation using either H 3 PO 4 , H 2 SO 4 , KOH, NaOH or K 2 CO 3 and gasification reactions [15,91,92].Research has shown that acid or alkali treatment can increase porosity by about 30% and the surface area up to 234 m 2 /g (i.e.20%-30% increase) compared to non-alkali-treated in BC derived material [91].This confirms that acid or alkali treatment of BC adsorbent enhances pore structure network and surface area.The same literature also reported that treatment with KOH lowers the PZC of BC from 7.7 to -5.6, which favours the generation of acid functional groups on surface.By treating BC with H 2 SO 4 , a highly microporous material was obtained, suitable for adsorption of gaseous pollutants, but NaOH and K 2 CO 3 , on the other hand, resulted in a hierarchical porous structure, with a greater equilibration increase in microporosity and mesoporosity [92].Under the operating conditions used, it was observed that H 3 PO 4 was extremely aggressive, removing almost all the BC's pores.In another study, it was discovered that acid treatment at 0.2 mmol(H 2 SO 4 )/g(BC) increased surface area by about 80% due to enhanced microporosity by up to 263% compared to untreated BC [15].These results prove that with appropriate optimisation of synthesis method and a suitable treatment solvent, the porous structure and textural properties of the BC can be configured from microporosity to macroporosity depending on the molecular size of the pollutants and adsorption application.
Regeneration of bone char (BC)
For evaluating the application and lifespan of BC, regeneration methods are necessary, which reduce the cost of the process and reduce the amount of unwanted waste.The regeneration of an adsorbent plays an important role in its evaluation.This aspect has only been reported in a few studies, which may limit industrial applications of BC.Studies have been conducted on the thermal and chemical regeneration of fluoride saturated BC [93,94].As both methods can be performed in situ, unloading, transporting, and reconditioning of the adsorbent are eliminated.A sustainable adsorbent material is demonstrated by regeneration.It is critical to measure BC's regeneration capacity in order to assess its reusability and effectiveness in removing contaminants from water or gas purification.The thermal method involves heating the pollutant saturated BC under a purge gas flow, which removes the desorbed adsorbate and facilitate regeneration for multiple cycles.Saturated BC can also be regenerated thermally using boiling water or steam [93].As a result of regeneration, pollutants that are adhered to the surface of the BC move into its exterior, exposing more active surfaces for adsorption.In this thermal regeneration process, no chemical agents are used and no waste products are produced, so there are almost no emissions.The effect of regeneration temperature, regeneration duration, and regenerated BC adsorption capacity has been investigated and published in the literature for the thermal method [94].It was found that the regenerated BC with a temperature of 500°C for 2 h had the best fluoride sorption potential.The results of another study showed that 400°C was the optimal temperature for thermal regeneration for fluorine saturated BC [95,96].It has been found that fluoride ions were diffused into BC particles during thermal regeneration [95].As a result, fluorated hydroxyapatite is formed, indicating that fluoride has been incorporated into hydroxyapatite structures in BC due to thermal treatment.In another study, BC's structure remained unchanged even after five continuous regeneration cycles at 400°C, based on X-ray diffraction (XRD) patterns before and after [96].This suggests that the incorporation of fluorine into BC depends on the regeneration temperature applied.Also, the loss of some carbon component of the BC resulting from thermal regeneration is possible.However, analysis showed an increase in the degree of crystallinity of BC following thermal regeneration.After thermal regeneration, BC showed a slight decrease in adsorption capacity.Adsorption equilibrium cannot be completely reversed, as desorption depends on the mechanism by which the pollutant is adsorbed, whether physisorption or chemisorption on BC surface.
In contrast, regeneration of pollutant saturated BC can be carried out with the aid of suitable chemical solvents such as sodium hydroxide (NaOH) for fluorine saturated BC.Organic solvents and inorganic chemicals can both be used as chemical regenerating agents.Nigri et al. [97] reported the regeneration of fluorine saturated BC using NaOH (0.5 mol/L) solution.Observation of a 30% reduction in adsorption capacity after five cycles of adsorption/desorption was observed.A plot of the regeneration efficiency against concentration of NaOH showed that for 0.2 mol•dm −3 NaOH, an efficiency of 91.2% could be achieved [45].It is found that removing adsorbed F − ions from the BC surface is more efficient when a solution of NaOH is used at a high concentration.In other words, the mechanism of desorption involves ion-exchange of OH − group (NaOH) with fluoride ions on BC adsorbent.Among the factors influencing fluoride removal, pH played the biggest role.Hence, chemical solvent regeneration process is pH dependent.In fluoride removal, for instance, fluoride ions exchange with hydroxyl, carbonate, hydrogen carbonate, and phosphate ions of BC.Research has been conducted on the effectiveness of different sodium solutions (NaOH, Na 2 CO 3 , NaHCO 3 and Na 3 PO 4 ) in rejuvenating fluoride-saturated BC for reuse in fluoride removal from drinking water [98].The effect of temperature on rejuvenation process was also studied.It was found that NaOH exhibited the highest de-fluoridation effectiveness while the lowest was NaHCO 3 .Results show that BC releases more fluoride when the temperature is raised from 20°C to 60°C during regeneration.In 2020, Coltre et al. [99] reported the application of aqueous solutions with pH from 2 to 12 and alcoholic solutions of methyl, ethyl and isopropyl as regenerating agents in a batch system with a constant temperature of 30°C for BC saturated with blue BF-5G dye.As a result of improved regeneration efficiency, isopropyl and ethyl alcohols achieved about 21.0% and 19.5%, respectively.It is therefore reasonable to conclude that the interactions between the pollutant and the BC surface play a significant role in the selection of chemical solvents.With this regeneration method, the recovery of the adsorbate from the BC is possible.It is, however, a complicated process that can result in the release of harmful chemicals into the environment following chemical reactions.Consequently, the efficiency of regeneration is very low.
Future outlook
The demand for high-performance, sustainable, and renewable adsorbents, heterogeneous catalysts, and hierarchical porous carbon microstructures is increasing.To address today's environmental problems, such as clean water, clean energy and emission control, green and sustainable adsorbents and catalysts are critical.BC is more suitable for use as a catalyst support when its physicochemical properties are improved.The adsorption capacity of BC depends more on its surface area than on its hydroxyapatite content.To achieve this, a chemical modification of BC with an acid or alkali can potentially increase the surface area and pore distribution and increase its adsorption capacity.There is still a need to investigate the impact of different chemical treatments (i.e., alkali and acid treatments) on the textural properties of BC, including porosity, pore size distribution, and surface area.It would be interesting to investigate how to combine strong adsorptive BC with photocatalyst material to degrade organic pollutants.The study should include preparation, characterisation and investigation as a carrier for photocatalysts in a photocatalytic composite.This viewpoint suggests combining BC with photocatalytic materials such as ZnO and TiO 2 or a composite with other suitable photocatalytic precursor in the form ZnO/BC [100,101].A suitable photocatalytic material precursor will be precipitated on BC as part of the synthesis process for composite photocatalyst.Photocatalytic oxidation is used in green degradation technology to remove heavy metals and oxidise organic compounds.A synergistic effect has been demonstrated between the immobilisation of ZnO nanoparticles on BC and the photocatalytic degradation of formaldehyde, an air pollutant [102].As a result of the strong adsorption of pollutant molecules on BC surfaces, there would be a higher transport rate to the photocatalytic material.This would lead to a higher rate of photocatalysis as well as pollutant degradation.As a result, future outlook will include modifying bone-derived adsorbents into photocatalysts and applying them to degrade organic pollutants from industrial wastewater.
A quality BC should exhibit the following characteristics: (1) a high capacity for absorbing pollutants, (2) low organic matter content, and (3) no fecal contamination in storage or production.Investigation of the technicalities surrounding the collection of waste bones is essential to reducing the environmental impact and boosting economic benefits.Furthermore, the acidity or basicity of the BC surface plays a major role in the adsorption of molecules.BC's acidic-basic properties play a crucial role during adsorption, since it determines how molecules interact and adsorb onto the surface.It is necessary to examine the impact of the acid-base property of BC in order to gain a mechanistic understanding of the role surface chemistry on adsorption.Research is also needed to determine the impact of process type and conditions on acidity-basicity, as well as adsorption capacity, of BC produced.A study of the nature of thermodynamic parameters involved in pollutant adsorption onto BC.Additionally, further research is needed on how valorising methods, the effect of bone type (i.e.soft and hard) and source, and process conditions affect BC's physicochemical and textural properties in relation to its adsorption capacity.It is critical to note that the BC adsorbent's physicochemical properties are highly impacted by the conditions under which it is synthesised, and can vary significantly from pollutant to pollutant.In both an economic and technical point of view, it makes sense to tailor the physicochemical properties of BC adsorbents for water treatment and air purification.Variations in the synthesis routes and conditions lead to high variability in the physicochemical properties of the final BC.Further study of the effects of production conditions in the preparation of BC and the parameters that affect its adsorption properties to different pollutants from aqueous solutions and the air is required.BC adsorbent performance and production costs can be improved by optimising synthesis conditions and tailor adsorption properties based on pollutant.Likewise, further research is needed to determine how BC's functionalization affects adsorption capacity and catalytic performance.As far as environmental impacts of bone char production are concerned, no studies have been conducted.The current production process must be evaluated along with a cost-benefit analysis in order to determine if it is environmentally friendly or not.As well, life-cycle assessment and techno-economic modelling can be utilised to evaluate the cost implications of valorising waste bones on a large scale.
Conclusion
Waste bone biomass is one of the largest waste products in livestock production, and it can be recycled and utilised as hydroxyapatite material or bone char (BC) to ensure sustainable environment and solid waste management.Contaminants can be removed from water, gaseous effluents and soil through adsorption using BC produced via thermochemical processes such as calcination, pyrolysis, or gasification of waste bones.As a result of its lowcost and high efficiency, BC adsorption technology can be applied widely for the treatment of wastewater and contaminated soil remediation.BC has demonstrated mesoporosity and surface chemistry due to the presence of several functional groups, which enhances its adsorption capabilities in removing numerous contaminants from wastewater.BC adsorbs pollutants mainly through its hydroxyapatite component, and hydroxyapatite adsorption sites are formed by the subsequent protonation and deprotonation reaction depending on pH PZC and solution pH.This study critically analyses the results of relevant researches, and assembles and evaluates data related to the use of BC adsorbents based on both empirical and theoretical methodologies.By providing theoretical adsorption mechanisms, strategies for increasing BC adsorption capacity, and future outlook, this study serves as a guide for further research and development of environmentally friendly, and low-cost adsorbent to tackle water pollution and gas purification/air pollution problems.
Based on equilibrium sorption isotherms, the BC adsorption system's capacity and performance can be predicted.In nonlinear regression, the difference between experimental measurements and the mathematical model is minimised (or maximised).It is generally preferred to select the best-fitting isotherm model based on which error function produces the lowest error distribution between the model predicted and measured experimental values.It has been demonstrated that nonlinear regression is the best method for obtaining the parameters of isotherm equations and choosing the optimum model.In order to evaluate the accuracy of error functions in predicting the isotherm parameter values, as well as the optimum isotherm, an extensive and detailed study has to be conducted for BC adsorbent.
No studies have been conducted to determine the environmental impacts of bone char production.An assessment of these impacts along with a costbenefit analysis is necessary to determine whether the current production process is economical or harmful to the environment.Future research should therefore focus on life cycle assessment (LCA), which is a quantitative method that can be used to evaluate the environmental impact and economic analysis of BC production over its lifetime.The application of machine learning and artificial neural networks to modelling adsorption data and isotherms is another area to be investigated.Currently, BC adsorption studies are generally concentrated on batch or equilibrium studies, which may be difficult to cover to commercial-scale applications, so future studies should emphasise pilot-and fixed-bed adsorption experiments.Simulations and computational analyses of adsorption and diffusion on BC adsorbents for various adsorbates is an interesting topic requiring further exploration.There is a need to investigate further the components of BC carbon and/or hydroxyapatite responsible for the adsorption of a pollutant.Another area which needs further research is how functionalization of BC affects adsorption capacity and catalytic activity.
Figure 1 .
Figure 1.Effect of pyrolysis temperature on produced BC: (a) yield and surface area, (b) pore volume and size, and (c) acid and basic sites (Note, the plotted data was obtained from the article [42]).
Figure 4 .
Figure 4. Illustration of diffusion steps of pollutant from liquid to solid adsorption [55].
Figure 7 .
Figure 7. (a) Cow bone BC for removal of Mn 2+ , Fe 2+ , Ni 2+ , and Cu 2+ as a function contact time at C0 20 mg/L, BC dose 0.02 g, pH 5.1 and temperature, 298 K, (b) Langmuir isotherm model for the adsorption of Mn 2+ , Fe 2+ , Ni 2+ , and Cu 2+ cations [26], (c) Bovine bone BC removal of methylene blue from aqueous solution as a function of adsorption temperature on methylene blue adsorption and (d) Freundlich isotherm model for the adsorption of methylene blue [33].
Figure 8 .
Figure 8. Strategies for enhancing the adsorption capacity of BC.
Table 4 .
BC adsorption kinetic equation and isotherm models for some pollutants and different bone sources. | 20,258 | 2023-04-13T00:00:00.000 | [
"Engineering"
] |
BMC Medical Informatics and Decision Making
Background: The Intensive Care Unit (ICU) is a data-rich environment where information technology (IT) may enhance patient care. We surveyed ICUs in the province of Ontario, Canada, to determine the availability, implementation and variability of information systems.
Background
The intensive care unit (ICU) is a data-rich environment, where information technology may enhance patient care by improving access to clinical data, reducing errors, tracking compliance with quality standards, and providing decision support [1][2][3]. The presence of more sophisticated information systems in the ICU has been associated with improved care [4]. Despite these potential benefits, utilization of information technology in ICUs is variable, with approximately 10-15% of U.S. ICUs in 2003 having fully implemented clinical information systems [5]. In contrast, electronic health records in other practice settings are becoming well established. For example, almost all general practices in the United Kingdom are computerized [6], as are the majority in Australia [7].
In the province of Ontario, Canada, the Ministry of Health and Long Term Care provides medical services for the province's population of 11 million through 134 acute care not-for-profit hospital corporations and an annual budget over $30 billion Canadian [8]. This single payer system offers the opportunity for standardization of information systems, but little is currently known about the implementation and availability of information technology in the province. The ICU plays a central role in the flow of patients through the health system, as the destination of transfer of the sickest patients from the emergency room and operating room. Our objective in this study was to survey ICUs across the province to identify the availability of various types of information technology and the systems and vendors utilized. We believe that this information will be essential to integrate clinical information systems into a province-wide electronic record and to identify areas for quality improvement and future research.
Survey development and administration
We used survey methods (item generation and reduction and clinical sensibility testing) to develop a comprehensive, self-administered internet-based survey that addressed the utilization of information technology in the ICU. We piloted the survey on 3 local intensivists. Domains of interest included: (i) The spectrum of ICU clinical data accessible electronically (e.g. clinical, laboratory data, imaging, medications) (ii) Availability and ease of use of computers in the ICU (iii) Availability of decision support tools (iv) Availability of electronic imaging systems for radiology (picture archiving and communication systems, PACS) (v) Use of electronic order entry and medication administration systems (vi) Use of wireless or mobile systems in the ICU We generated an email list of Ontario ICU directors from pre-existing research and administrative email lists, as well as by a manual internet search and by contacting ICUs by telephone. Eligible ICUs were those that provided mechanical ventilation (level 3 care). We emailed an information package and link to the survey and also posted the link on a Canadian critical care listserver. The survey was carried out between May and October 2006. As an incentive, participants were offered the opportunity to enter in a draw for free registration at the Toronto Critical Care Medicine Symposium. Data were collected by a commercial internet survey application provider (Surveymonkey, Portland, OR). After the initial emailing, nonrespondents received a second email, followed by a personal communication by email or telephone.
The Mount Sinai Hospital Research Ethics Board approved the study. All answers were kept confidential.
Data analysis
Analysis was performed by hospital site for those hospitals with more than one ICU, since ICUs in the same hospital had identical information technology systems. We summarized categorical data with percentages. We used Fisher's Exact tests to analyze the association between IT availability (specifically the availability of PACS, use of an electronic Medication Administration Record and availability of computerized laboratory and imaging order entry) and university affiliation and ICU size. We carried out these statistical calculations using Statistical Analysis Software (SAS version 9.1, SAS Institute, Cary, NC). The Classification and Regression Trees (CART) method [9] was used to obtain the estimate of the optimal cut-point for the number of computers per ICU bed, and that cutpoint was bootstrapped 2000 times to obtain the 95% confidence interval. We used R software (version 2.4.0) for the CART analysis [10].
The majority of sites (94%) had electronic access to some component of patient clinical information, most frequently laboratory data and imaging reports (Table 1). PACS systems were available in 38 sites (76%), most of which (27 sites) reported having high definition viewing monitors available to the ICU. Few sites had the ability to capture data directly from patient monitors (7 sites, 14%) or from infusion pumps or ventilators (3 sites, 6%). Many sites reported the ability to access data remotely from elsewhere in the hospital and to a lesser extent from outside the hospital ( Table 2). The most common decision support tools reported were clinical calculators, pharmacopoeias, and links to web resources (Table 3). Only 4 sites (8%) reported using electronic medication administration systems. No association was demonstrated between university affiliation or ICU size and the availability of PACS, the use of an electronic medication administration or computerized order entry ( Table 4).
The hospitals used a large variety of software vendors. Fifteen clinical information system vendors were reported, the most frequent being Meditech (Westwood, MA), GE Healthcare (Bucks, U.K.) and Cerner (Kansas City, MO). Similarly, there was no conformity in the use of PACS vendors with 8 reported, the most frequent being Agfa Impax (Mortse, Belgium) and GE Healthcare.
Computer terminal availability in the ICU varied from as few as one computer (in small, 4 bed units) to at least one computer per bed (24% of units). Computer availability was reported as being insufficient in 21 sites (46%) (Fig 1). The perception that there are sufficient computers per bed is most differentiated by a cutpoint of 0.44 computers per bed (95% percentile confidence interval: 0.37, 1.13). Specifically, for those physicians who reported a computer to bed ratio less than 0.44, the majority, 10/12 (83%), indicated dissatisfaction. For those respondents who reported a computer to bed ratio greater or equal to 0.44, the majority, 23/37 (62%), indicated satisfaction. Multiple logins were often required to access clinical information, with the majority of sites (68%) needing 2 or more passwords. Wireless networks were installed in 23 units (46%) and mobile electronic tools were used in 28 sites (56%) ( Table 5). The policy regarding use of cellular telephones varied, with 28 sites (56%) prohibiting their use throughout the hospital and 24% allowing the use of cellular telephones in some hospital areas but prohibiting use in the ICU. A policy specifying a 1 meter distance from medical devices applied to 14% of sites and 10% reported unclear or changing policies.
Discussion
This self-administered internet-based survey of Ontario ICUs demonstrated a high prevalence of implementation of information technology, with 92% of sites having electronic access to laboratory results and medical imaging Our survey is the first inventory of information technology capacity in ICUs in a Canadian jurisdiction and provides data on a variety of information technology domains. Our response rate was typical for healthcare surveys [11]. Nonetheless, there are several limitations to this study. Common to all self-administered surveys, responses indicate self-reported rather than directly observed implementation of information technology.
This methodology aimed to identify the technology available to practising physicians, but may have failed to identify existing technology that was not in common use. The information was obtained from physician-users of the systems, rather than from information technology specialists or other users, such as nurses. Nurses knowledge of available technology and requirements may be very different to physicians. We did not survey individual intensive care physicians or nurses to understand their attitudes, knowledge and behaviour regarding this information technology.
While rapid advances in computing technology are evident in commerce and industry, healthcare has lagged behind [1,2]. Information technology has the potential to improve patient safety by optimizing access to information, specifically in operations with a high information and transaction load such as drug interactions and evaluation of monitoring data [1]. The potential benefits Relationship between the computer/bed ratio and physician satisfaction with the number of computers (n = 47) Figure 1 Relationship between the computer/bed ratio and physician satisfaction with the number of computers (n = 47). include reduced medication errors [12], improved practitioner performance [13], and enhanced diagnostic accuracy [14]. The degree of sophistication of computing technology in the ICU has been associated with improved outcome of quality improvement initiatives to reduce catheter related bloodstream infections [4]. Although the literature demonstrating a benefit of computerized clinical decision support is growing [13,14], several studies have documented the many impediments still to be overcome [15][16][17]. An expectation and benefit of an ICU clinical information system is that the time that a nurse spends with documentation should be reduced, allocating more time for patient care [18,19]. However, others have demonstrated an increased documentation time following the implementation of an ICU clinical information system [20]. Furthermore, new software systems can actually facilitate rather than reduce some types of error [21]. Appropriate assessment of any new technology prior to implementation remains essential [22].
An often quoted barrier to implementation of clinical information systems is the required change in culture to healthcare workers [23,24]. While this may be true in other clinical areas, our study demonstrates that computing systems are currently an integral component of the ICU. Having overcome these initial barriers, the next step is to introduce the more sophisticated and potentially more beneficial components of the clinical information system. The ICU is a data-rich environment at risk for data overload, and there may be significant benefit to safety and quality of care from decision support applications, medication administration systems and computerized order entry [24]. Our data confirm the lack of standardization of software across the province, with 15 vendors of clinical information systems being used. As new systems are implemented or updated, it is essential that standardization be addressed, to allow data transfer between systems and to reduce the potential for errors related to inadequate familiarity with the software [21].
Conclusion
Providing efficient, safe, individualized care in the datarich environment of the ICU can only be achieved with the use of information technology. We have demonstrated a significant level of early implementation in Ontario ICUs but further investment is needed. The variation in systems in use is concerning, and standardization and interoperability need to be addressed.
Key messages
• Almost all ICUs in the province of Ontario, Canada, have electronic access to some component of patient information, most frequently laboratory data and imaging.
• In contrast, use of decision support systems, electronic medication administration systems and full clinical information systems is very uncommon.
• Multiple different IT vendors are used, which may impair information exchange and interoperability. | 2,510.2 | 2007-01-01T00:00:00.000 | [
"Computer Science",
"Medicine"
] |
Analyzing the influential factors of process safety culture by hybrid hidden content analysis and fuzzy DEMATEL
Due to the complex nature of safety culture and process industries, several factors influence process safety culture. This paper presents a novel framework that combines the hidden content analysis method with Decision Making Trial and Evaluation Laboratory (DEMATEL) and Fuzzy logic to achieve a comprehensive set of influential factors and their relationship. The proposed methodology consists of two primary stages. Firstly, combined methods of literature review and Delphi study were used to identifying influential factors of process safety culture. Secondly, the Fuzzy-DEMATEL approach is employed to quantify and determine the relationships between different influential factors. A diverse pool of experts’ opinions is leveraged to assess the impact of each factor on others and process safety culture. In the first stage, 18 factors identified as influential factors on process safety. The findings of second stage revealed that eight variables were identified as causes, while ten variables were classified as effects. Also, the Organization management's commitment to safety factor had the greatest influence among all of the factors. As well as, the most significant interaction was associated with the risk assessment and management aspect. The integrated approach not only identified the influential factors, but also elucidates the cause-effect relationships among factors. By prioritizing factors and understanding their interconnections, organizations can implement targeted safety measures to improve process safety culture. Its effectiveness in quantifying qualitative data, identifying influential factors, and establishing cause-effect relationships make it a valuable tool for enhancing safety culture in process industries.
Analyzing the influential factors of process safety culture by hybrid hidden content analysis and fuzzy DEMATEL
Mohammad Ghorbani 1 , Hossein Ebrahimi 1* , Shahram Vosoughi 1 , Davoud Eskandari 2 , Saber Moradi Hanifi 1 & Hassan Mandali 1 Due to the complex nature of safety culture and process industries, several factors influence process safety culture.This paper presents a novel framework that combines the hidden content analysis method with Decision Making Trial and Evaluation Laboratory (DEMATEL) and Fuzzy logic to achieve a comprehensive set of influential factors and their relationship.The proposed methodology consists of two primary stages.Firstly, combined methods of literature review and Delphi study were used to identifying influential factors of process safety culture.Secondly, the Fuzzy-DEMATEL approach is employed to quantify and determine the relationships between different influential factors.A diverse pool of experts' opinions is leveraged to assess the impact of each factor on others and process safety culture.In the first stage, 18 factors identified as influential factors on process safety.The findings of second stage revealed that eight variables were identified as causes, while ten variables were classified as effects.Also, the Organization management's commitment to safety factor had the greatest influence among all of the factors.As well as, the most significant interaction was associated with the risk assessment and management aspect.The integrated approach not only identified the influential factors, but also elucidates the cause-effect relationships among factors.By prioritizing factors and understanding their interconnections, organizations can implement targeted safety measures to improve process safety culture.Its effectiveness in quantifying qualitative data, identifying influential factors, and establishing cause-effect relationships make it a valuable tool for enhancing safety culture in process industries.
Safety culture is an abstract idea that involves integrating individual and group perceptions, thought processes, emotions, and behaviors, which ultimately results in a specific approach to performing tasks within an organization.Safety culture is considered a subset of the overall organizational culture.It encompasses attitudes, beliefs, and perceptions that are acknowledged as standards and values among natural groups, determining their actions in response to hazards and risk control systems 1 .The safety culture is a critical element of an organizational culture, which has a direct impact on the attitudes and behaviors that are related to managing risks either by increasing or decreasing them 2 .This aspect creates personal responsibilities for individuals within the organization and human resource characteristics such as training, development, and adaptation based on attitudes, behaviors, standards, and values.The safety culture encompasses a set of dominant indicators, beliefs, and values about safety that the organization upholds 3 .
Regulatory institutions, universities, and government organizations have come to recognize the critical role of creating and maintaining a safety culture in preventing major accidents 4 .Several studies indicate that for developing a process safety culture, senior managers must be committed to ensuring the safety and wellbeing of their employees, particularly during times of production stress.However, the primary issue is that some obstacles hinder senior managers from demonstrating their values, attitudes, and commitments towards their employees 5 .In order to prevent major accidents, endeavors to enhance workplace safety have transitioned from regulating technical matters and individual mistakes to concentrating on organizational factors.After a series of
Literature review
Previous studies have identified management/leadership commitment and active employee participation as desirable indicators of safety culture [13][14][15] .Fernandez et al. conducted a study which identified three crucial components of organizational safety culture: management commitment, employee participation, and safety management system 16 .The results study of Alimohammadi et al. showed that safety culture in the detergent and cleaning industries is comprised of five dimensions: management commitment, training and information exchange, supportive environment, inhibitory factors, and prioritization of safety 17 .
Given the importance of the process safety culture, some researchers have investigated the factors influencing it.Table 1 presents some of these studies.
Based on the studies presented in Table 1, it is evident that multiple factors influence the safety process culture.Different studies have examined various factors and have focused solely on exploring these factors.In this study, an attempt has been made to identify a comprehensive set of influential factors.
Research method and material
The study consisted of two stages.During the first stage, we identified the factors that have an impact on process safety culture.In the second stage, we used the fuzzy-DEMATEL method to determine the interactions between these factors.Figure 1 depicts the proposed methodology.
Hidden content analysis (first phase)
In this study, the qualitative approach of hidden content analysis (thematic analysis) was utilized to identify the factors that influence process safety culture.Hidden content analysis is a qualitative research strategy that aims to develop a theory about a phenomenon by identifying its fundamental components and categorizing the relationships between these components within the context and process 22 .Because ensuring safety and preventing Repeated assessment of process safety in major hazard industries in the Rotterdam region (The Netherlands) Education and training, launching is safety culture improvement program, realizing concrete safety improvement, strengthening the involvement of personnel, leadership commitment, safety vs productivity, safety communication, participation, vision of senior management on the causes of incident, accident registration and analysis, learning from incident, managing contractor safety, role of supervisors with regard to safety, process safety vs occupational safety, maintenance management (and drift to danger), dealing with procedures, execution and follow-up of audits, complexity/resilience 18 The mediating role of safety management practices in process safety culture in the Chinese oil industry Leadership/management commitment, employee involvement, organizing responsibilities/procedures, safety training, inception and monitoring, communication and coordination 14 Assessing process safety culture maturity for specialty gas operations: a case study Leadership/management commitment, employee involvement www.nature.com/scientificreports/accidents necessitate an approach that is applicable in decision-making, this study utilized the hidden content analysis technique in two sequential steps as outlined below.
Literature review
In this step, the factors that have an impact on process safety culture were identified through a literature review.To accomplish this, relevant studies related to process safety culture were searched using key words include as Safety Culture, Process safety culture, and Process safety.The search was done in databases of Web of Sciences, Scopus, PubMed, and Google Scholar from the time between 2000 and 2023.The literature search strategy is illustrated in Fig. 2. In total, 42 studies were reviewed.In total, 42 article were selected to review.To extract factors, an open coding process was utilized.In open coding, factors are named without any constraints 23 .
Given the diversity of factors and for a more comprehensive data analysis, a three-member panel of safety experts was formed.These individuals were university professors with associated degree.The average age and work experience of them were 42 ± 1.38 and 8 ± 0.82, respectively.The selected articles, were reviewed carefully by experts and influential factors were extracted.
Developing Delphi method
In order to ensure the accuracy of factors influencing process safety culture and to identify all relevant factors, the Delphi method was used in conjunction with text analysis.
The Delphi method aims to collect and integrate opinions using a series of questionnaires or interviews to establish consensus through participant input.In addition, diverse opinions within a narrower scope are consolidated, as this technique supports continuous refinement.Key features of this method include participant anonymity, iterative and repeated multi-stage processes, controlled feedback, and statistical aggregation.Anonymity is a fundamental aspect of the Delphi method.Participants in a panel do not know each other, and under the influence of group dynamics, participants are not swayed by peer pressures.Therefore, participants are encouraged to express any important views, and this process is repeated until a reliable consensus is reached among the participants.Responses from each round are summarized and reported, allowing participants to reconsider their opinions in light of other participants' responses.Participants are allowed to change their initial responses.Each participant's responses in each round are collected and presented by combining the mean and standard deviation 24 .The Delphi method has been widely used in various fields 24 .Considering the unique characteristics of the Delphi method, this study utilized the technique to identify factors influencing process safety culture.The Delphi method was carried out in five steps as follows: www.nature.com/scientificreports/ Step 1: Establishing the expert panel Basically, there is no specific method for determining the number of participants or the preferred panel size for each study.The panel should consist of a group of selected experts without size limitation.However, since there is a need for specialized individuals who have the most knowledge and experience in the relevant field under consideration, the size of the group often remains relatively small 25 .According to Hogarth, the optimal number of members for utilizing the Delphi technique is between six to twelve individuals 26 .However, in typical conditions, a panel usually consists of 10-30 specialists 25 .
In this research, Delphi panel members were selected through purposeful non-probability sampling.Initially, experts and specialists were identified based on their work experience (10-15 years), expertise (academic members with associated professors' degree), and familiarity with the safety culture and process safety.From this list, 18 individuals were selected.The experts were contacted via phone calls and given a detailed explanation of the study's purpose and methodology.To uphold ethical standards, the participants were guaranteed that any information they provided would remain confidential.If the experts expressed their interest in taking part, they were extended an invitation to join the panel.Ultimately, a total of 13 subject matter experts with diverse backgrounds and expertise participated in the study as members of the expert panel.It should be noted that the appropriate number of panel members is another important consideration in forming the panel.According to the literature of this method, the 13 selected individuals were deemed suitable for the Delphi panel 25,26 .
Step 2: designing the Delphi Questionnaire The Delphi Semi structured questionnaire was designed based on the determined factors.This questionnaire utilized a five-point Likert scale with phrases 'not important,' 'somewhat important,' 'important,' 'very important' , and ' extremely important' to measure the factors.These phrases corresponded to scores of 1, 2, 3, 4, and 5, respectively.
Step 3: determine the number of Delphi rounds There are two statistical criteria for deciding whether to continue or stop Delphi rounds.The first criterion is the occurrence of a strong consensus among panel members, determined based on the value of the Kendall's coefficient of concordance (K ˃ 0.5).If such consensus does not occur, the constancy or minimal growth of this coefficient over two consecutive rounds indicates a lack of increase in agreement among members, and the polling process should be discontinued 27 .In This study the number of Delphi round determined using Kendall's coefficient.
Step 4: Validation and screening of factor This process is carried out by comparing the value of the acquisition of each factor with the threshold value S .The threshold value is determined through the mental inference of the decision-maker.In this study, the threshold value is considered to be three 28 .If the average score of each factor is less than 3, that index will be excluded.
Step 5: Identification influential factors on process safety culture In the first round, the prepared questionnaire was provided to the experts to score each factor based on a 5-point Likert scale.After experts completed the questionnaires, the average of the factors was calculated.
In the second round of Delphi, factors with an average score of less than three in the first round were eliminated.The confirmed factors from the first round, along with factors extracted by experts, were again presented to the experts through a questionnaire to score each index, similar to the first round.In this round, the average scores from the first Delphi round were also provided for individuals to make decisions based on the overall average.In this round, many experts confirmed their opinions from the first round.
In the three rounds of Delphi, a similar questionnaire from the two round was again provided to the experienced individuals to score each factor, similar to the previous rounds.In this round, as no factors were eliminated or added, the Kendall coefficient was calculated.Since the value of the Kendall coefficient was higher than 0.5, Delphi was stopped at this stage Following the four rounds, a total of 18 factors were determined to be influential on process safety culture.
Determining the relationship between influential factors using Fuzzy DEMATEL (second phase)
In this phase, the fuzzy-DEMATEL technique was utilized to establish the relationship between the identified factors.The variables considered in this section were the identified influential factors on process safety culture.DEMATEL is a reliable approach for examining relationships among system factors by consolidating group expertise.Its key attribute in the field of multi-criteria decision-making is its capacity to establish relationships and structures among factors 29 .One of its key advantages over other methods such as the Analytical Hierarchy Process (AHP) is that it captures mutual dependencies between system factors through an arrow diagram, which is often overlooked in traditional approaches 30 .To address the uncertainty associated with expert judgment, combining this method with the fuzzy concept yields benefits 31 .
Fuzzy-DEMATEL method is employed to consider multi-criteria interactions and calculate the weights of all criteria 32 .In this study, the Fuzzy-DEMATEL method served three main purposes.Firstly, to calculate the correlation matrix between the influential factors, identify causal factors, and determine the level of influence of each cause and finally establish the cause-effect model regarding the influential factors.The stages involved in performing this task are as follows: Step 1: Establishing the Subject Matter Experts (SMEs) panel Due to the role of the Delphi panel members in determining the influential factors on the process safety culture, and the good knowledge of them regarding the importance of each factor compared to others, they were invited to SMEs panel.Since all specialists had similar levels of knowledge in the field under investigation, equal weightage was assigned to each of them.www.nature.com/scientificreports/ Step 2: Acquiring SMEs' Knowledge Involving SMEs in determining relationship improves the investigation's quality and dependability, resulting in more precise findings.To achieve this, we applied a well-known linguistic approach that utilizes mental categorizations of language variables to tap into the expertise of SMEs.A language variable is a term for a spectrum of values represented by words or sentences in either natural or artificial language.The linguistic scales employed in this technique, along with their associated values, can be found in Table 2.This investigation utilized a range of triangular fuzzy numbers, a method that has been applied in numerous prior studies.
Step 3: Developing a DEMATEL questionnaire According to the 18 determined factors, a pairwise comparison matrix of 18*18 has been created.The questionnaires and guidelines for completion were emailed to experts.The experts were requested to express their views on the direct influence of each factor on one another, using the linguistic variables outlined in Table 2. Throughout the two-week data collection period, respondents had convenient access to researchers in case they had any questions about the criteria selection process.
Step 4: Data analyzing Once the experts finished creating the pairwise comparison matrices, the data was extracted and examined.To carry out the fuzzification process, qualitative opinions were initially transformed into fuzzy numbers that can be found in Table 2. Subsequently, the analysis followed the steps provided below.
Step 1: The fuzzy initial-direct relation matrix ( Ẽ ) is generated through the gathering of expert opinions from those involved in the study.The opinion matrices for each variable provided by the experts are averaged to create Ẽ .Equation (1), where 'p' indicates the number of experts participating in the study, is utilized to perform this procedure.
Step 2: In the second stage, the normalized direct correlation fuzzy matrix (F ) was derived.The process involved utilizing Eqs. ( 2) and (3).
Step 3: The total fuzzy matrix (T ) was derived by normalizing the direct correlation fuzzy matrix and applying Eqs. ( 4)-( 6).
Step 5: The calculations for the values of R and D for each variable are derived from the de-fuzzed matrix components of the overall relationship, utilizing Eqs. ( 8) and ( 9).The value of D indicates the degree of influence of each variable on other variables.Also, R indicates the influence of each variable on other variables.
Step 6: Establishing causal relationships.During this stage, the indexes of D + R and D − R were calculated.The D + R index signifies the extent of interaction between each factor with other factors, whereas the D − R index reveals the nature of the interaction.A higher D + R value indicates a stronger level of interaction with other factors.Furthermore, if the D − R value is positive, the considered factor plays a causal role, but if it is negative, the factor is seen as an effect 33 .
Factors affecting the process safety culture
This study was conducted with the aim of determining the factors influencing the process safety culture and their relationship.The method of hidden content analysis was used to identify the influential factors.In this method, the factors were identified through two stages of literature review and Delphi method.Based on the literature review, 29 influential factors were identified (Table 3).
Based on the proposed methodology, in the subsequent step, the extracted items were presented to the Delphi panel.Initially, the identified factors were provided to the experts to express their opinions and articulate any similar aspects about the factors.Experts were also asked to indicate if they had any factors in mind other than the ones mentioned.Based on expert opinions, the factors underwent a reviewed.As a result of this stage, 21 factors determined to design Delphi questionnaire.The reason for reducing the number of factors during this stage was the merging of some factors with one another due to their similarity in name or function.The panel of experts selected for this stage of the research comprised 13 academic individuals (Associated professors) who specialize in process safety.The experts' average age was 45.82 ± 2.14, and their average work experience was 12.57 ± 0.86 years.The Delphi study was applied three rounds.Table 4 displays the results of Delphi technique.
The Delphi technique finally identified 18 influential factors on process safety culture.
The relationships between identified influential factors
In the first phase 18 factors identifaed as influential factors on process safety culture.However, during the initial stage, the outcomes were qualitative, and it was unclear how each factor affected the process safety cultures.Additionally, the relationship between these factors remained uncertain.To overcome this limitation, the fuzzy-DEMATEL method was employed to quantify the results.DEMATEL is a powerful technique for analyzing the connections among system factors by utilizing collective knowledge.The expert panel for this phase of the study resembled the Delphi panel used in the first phase.Since the group was homogeneous, there was no need to assign different weights to the experts.By utilizing their expertise and the linguistic variables provided in Table 2, the experts indicated the direct impact of each factor on one another using the generated DEMATEL questionnaire.Given the vast number of factors involved, the identified factors were coded to facilitate their usage.Table 5 depicts the results of this coding process.Subsequently, the linguistic estimates were converted into fuzzy numbers, resulting in a matrix of direct relationships known as Table 6.This matrix demonstrates the direct influence of factor i on factor j. When performing the calculations, the diagonal elements of the matrix were set to zero if i = j 34 .
After obtaining the fuzzy direct relationship matrix, the normalized fuzzy direct relationship matrix and the total fuzzy relationship matrix were calculated and prepared.In the next step, to convert the total fuzzy relationship matrix into a comparable value, defuzzification was done on the total relationship matrix (Table 7).The average value of the de-fuzzified total relationship matrix (0.113) was used as the threshold.Therefore, an impact score of one factor on another factor ≥ 0.113 indicates a significant effect of the causal factor on the effect factor.
In the next step, the D and R values were calculated to determine the cause-and-effect variables.The sum of the values of each row in the De-fuzzified matrix of the total relationship (D) indicates the degree of influence of each variable on other variables (Table 8), while the sum of the values of each column (R) indicates the influence of each variable on other variables 35 . (7) The values of D + R and D − R are another important part the results obtained from the fuzzy DEMATEL method.The value of D + R indicates the degree of interaction between the desired variable and other variables, while the value of D − R indicates the type of interaction of each variable with other variables.A higher value of D + R suggests a greater level of interaction that the variable has with other variables.Additionally, if the value of D − R positive, it means that the desired variable plays a causal role, whereas a negative value D − R indicates that the desired variable plays an effect role 35 .The D + R and D − R values can be found in Table 8.
Finally, the influential network relations map (INRM) of factors was created (Fig. 3).In this diagram, D + R and D − R represent the level of interaction and influence, respectively.
Based on the results (Fig. 3), eight variables were identified as causes, while ten variables were classified as effects.In the context of the fuzzy DEMATEL technique, causes refer to variables that significantly impact the system.In the context of process safety culture, causes can be understood as variables that directly contribute to creating the safety culture.As evident from Fig. 3, the most influential factor is related to the commitment of organization managements to safety.
Factors affecting the process safety culture
The first phase of study identified 18 influential factors on process safety culture.In the following, these factors have been discussed separately.
Organization management's commitment to safety
In various industries and organizations, there has been discussion about the importance of researching management's commitment to safety as a crucial factor in establishing a culture of safety.The level of importance and priority that managers place on occupational safety plays a significant role in promoting a positive safety culture 36 .One of the most important steps that can be taken to establish a culture of safety is clear communication with employees about safety priorities.This involves informing them of the high priority given to safety, emphasizing that it is regarded as the top business issue.The level of priority assigned to occupational safety within a business is directly linked to the management's commitment to health and safety concerns 37 .Such a commitment can take many forms, including identifying and evaluating potential risks, complying with health www.nature.com/scientificreports/
Open and frank safety communication
Effective communication is essential within industrial organizations to promote a culture of safety.When employees are encouraged to speak openly and honestly, they can identify potential hazards and suggest ways to improve safety.This type of communication also allows employees to share information and encourage safe habits among their colleagues.Establishing open lines of communication between managers and employees can significantly improve a company's approach to process safety.By fostering an environment where employees feel comfortable expressing their concerns and collaborating with management, organizations can make better decisions and implement more effective safety measures.Ultimately, creating a culture of open and frank communication can contribute to a reduction in workplace injuries and accidents.Effective safety communication is crucial for reducing employee accidents in the workplace.It goes beyond simply sharing information about workplace safety, as it also has the power to influence employees' behavior and attitudes towards safety 38 .Effective safety communication has been shown to have a positive impact on safety performance.Insufficient or inappropriate safety interaction between employees and senior management may be due to a lack of emphasis on constructive communication and feedback about workplace safety.One reason why safety communication might be weak is the absence of a positive safety culture within an organization 39 .A study conducted by Parker and his colleagues found that establishing open communication channels between managers and employees can result in significant improvements to process safety culture.The research showed that such communication can lead to higher job satisfaction among employees, increased organizational commitment, and improved safety performance 40 .
Employee participation and commitment
The influence of employee participation and commitment on process safety culture is a critical issue in safety management.Safety literature has demonstrated that with the combined efforts of managers and employees, a positive safety culture can be established.Therefore, one-sided actions taken by managers (e.g., providing and implementing safe work laws, procedures, plans, and policies) to ensure healthy and safe working conditions are not sufficient.It is also crucial for employees to comply with these regulations and guidelines and actively participate in safety-related matters.Employee participation refers to actions that may not directly contribute to an individual's safety, but instead help create an environment that supports occupational safety.Safety participation refers to voluntary actions that go beyond the scope of employees' safety-related responsibilities 41 .Extraordinary behaviors encompass actions such as supporting fellow workers, voluntarily participating in safe practices, attending safety meetings, promoting safety programs in the workplace, taking initiative to improve safety, and enhancing overall safety in the workplace.
Contractor management
The influence of contractor management on process safety culture is a critical issue in health and safety within the industry.Research has shown that increasing transparency in contractor management can lead to significant improvements in process safety culture.Risk analysis, proper planning, and providing information to contractors can increase awareness and emphasis placed on safety issues, resulting in an improvement in process safety culture.Additionally, involving contractors in decision-making and project implementation processes can help highlight the importance of safety issues and enhance process safety culture.Choosing contractors with adequate expertise and experience in the relevant areas of a project can enhance safety and promote a culture of safety.Experienced contractors in the field of health and safety can further support a culture of safety in the workplace.Moreover, enhancing the training and awareness of contractors can significantly improve process safety culture.Raising contractors' awareness about safety hazards can result in increased safe behaviors and improvements in safety culture.www.nature.com/scientificreports/Safety policies and regulations Safety policies and regulations are a significant factor in shaping the safety culture of processes.Given the technical complexity of chemical industries, it is essential to have operational methods for managing work.A study conducted on Norway's oil and gas industry revealed that safety regulations and policies constitute the primary components of their safety management approach 42 .Incorporating safety measures and adhering to safety policies in refining industries can enhance the safety culture of processes.
Incidents reporting system
The incident reporting system is another factor that impacts safety culture.It is a vital tool to enhance process safety culture in industries.This system consists of procedures and processes used to gather, analyze, and report events and errors related to the process.It identifies weaknesses and offers suggestions for enhancing the process safety system.
Analysis and learning from incidents
Analysis and learning from incidents are a crucial tool in improving the safety culture within process industries.Through this tool, adverse events and incidents are systematically examined to identify the contributing factors that led to their occurrence.This enables prevention of similar incidents in the future.The impact of analysis and learning from incidents on process safety culture can be classified into two categories-positive and negative developments.In the positive category, a thorough analysis of contributing factors can help prevent similar incidents in the future, leading to improved process safety.In addition, incident analysis can be utilized as an educational tool to enhance the safety culture in process industries.However, improper use of incident analysis and lessons learned may have negative consequences such as rendering the activity ineffective or leading to further mistakes and errors.Therefore, to improve the process safety culture in these industries, it is crucial to use incident analysis appropriately and share the results with employees.Furthermore, it is essential to improve the organizational culture, and increase knowledge and awareness among employees in the field of process safety to ensure sustained effectiveness of this tool.
Access to process information
Having access to process and safety information is another factor that can contribute to improving the safety culture in process industries.Accurate and comprehensive information about industrial processes and associated risks enhances employee understanding of incident causes and enables them to adopt appropriate safety measures during their work processes.Furthermore, access to process information empowers managers to identify process strengths and weaknesses, and take necessary preventive measures to improve process safety 43 .Furthermore, having access to process and safety information can be an effective educational tool for promoting process safety culture.This approach helps employees become familiar with potential hazards in their work processes and understand appropriate preventive measures for avoiding accidents 44 .
Monitoring/inspection
Monitoring and inspection have been recognized as two key factors in enhancing process safety culture.These activities can be conducted both internally and externally, with internal monitoring and inspection being carried out by employees and managers within the organization, while external monitoring and inspection are performed by government and independent organizations outside of the company.By conducting monitoring and inspection, process strengths and weaknesses can be identified, leading to improved process safety.Additionally, these activities can foster trust and confidence among employees in regards to process safety, leading to increased accuracy in performing tasks.Studies have demonstrated that performing monitoring and inspection activities can be an effective solution for improving process safety culture in process industries.
Maintenance management
One of the factors that had a significant impact on the safety culture of the process examined in this study was maintenance management.This factor encompassed activities such as planning, executing, controlling, and maintaining and repairing machinery, equipment, and systems.In general, carrying out periodic and preventive maintenance on machinery and equipment can reduce the likelihood of potential hazards and accidents occurring during the production process, thereby enhancing reliability and safety culture.
Education and training
Education and training are considered fundamental in improving process safety culture, and have received significant attention from industrial researchers.Regular and structured education and training programs foster employee commitment to safety culture and reduce the likelihood of industrial accidents.Studies have shown that creating a safety culture requires adequate education, and organizations can only achieve their goals by encouraging individuals to acquire practical knowledge and skills.Therefore, it is necessary for education to be continuous to establish a safety culture 45 .
Simplification or avoidance of complexity
Simplification is one of the factors that plays a role in the safety culture of processes.The utilization of sophisticated equipment and complex control systems can contribute to an increase in human error, risks, and accidents.As a result, streamlining processes and avoiding complexity can promote the enhancement and advancement of occupational process safety.Simplification is considered one of the fundamental principles of inherent safety.
The implementation of a simplification strategy for equipment and procedures can enhance safety by minimizing operator error 46 .This approach has the potential to increase work morale, job satisfaction, trust in the organization, and ultimately foster a positive attitude towards safety concerns.Furthermore, it can facilitate the ease of working with equipment and procedures, thereby promoting the betterment of safety culture.
Process safety vs. personal safety
Process safety versus personal safety was another important factor in shaping process safety culture.Process safety encompasses all methods, standards, and processes used to ensure safety at the level of industrial systems, whereas personal safety refers to the set of activities carried out by individuals to perform their work processes safely.As process safety emphasizes controlling potential hazards in industrial systems and maintaining safety, it can significantly contribute to reducing the number of safety incidents within an organization 47 .Several studies have demonstrated that a robust safety system at the organizational level can enhance overall safety and promote a positive attitude towards safety among employees.As a result, process safety is deemed to be one of the factors contributing to the development of process safety culture 48 .
Risk assessment and management
Risk assessment and management is another factor involved in the safety culture of a process.One of the significant effects of risk assessment on the safety culture of a process is the increased awareness of employees about the hazards present in the work environment.Risk assessment can identify various hazards in industrial processes, and the information obtained from this process can serve as a strong foundation for designing and implementing safety processes at the organizational level.As such, risk assessment can act as a catalyst to strengthen the safety culture at the organizational level.Considering the significant role of risk assessment in process safety, it has been regarded as the cornerstone of process safety management.Risk assessment enhances employees' awareness and knowledge of hazards, and provides them with the necessary means to deal with them effectively.Consequently, this can lead to improving employees' safety behaviors, promoting positive attitudes towards safety measures, increasing their participation in safety-related activities, and strengthening the safety culture at the organizational level.
Incentive and punishment system in safety field
The incentive and punishment system is an effective approach for enhancing safety culture in industrial settings.By reinforcing the importance of safety and accident prevention, the incentive and punishment system boosts employees' awareness of these critical issues.Through implementation of a robust incentive and discipline system, employees recognize that safe behaviors are rewarded while violation of these behaviors results in punishment.This increased awareness leads to a positive attitude towards safety, promotes greater employee participation in safety initiatives, and ultimately enhances safety culture.Because the design and implementation of an effective incentive and punishment system to improve safety culture in process industries depends on the specific conditions and work environment of each organization, it should not be viewed as a universal solution.Nevertheless, studies indicate that having an incentive and punishment system in place within an organization can lead to improved safety culture 49 .
Safety permit system
The safety permit system is a widely used safety policy in many process industries to prevent accidents and occupational hazards.Under this system, a safety permit must be obtained before carrying out hazardous processes to confirm that all the necessary safety conditions for the process, such as using safety equipment and protective coverings, have been provided.By reducing accidents, this system enhances employees' positive attitude towards safety culture.Moreover, the presence of such systems raises the importance of safety from employees' perspective and helps improve safety culture.
Perceived organizational support for safety
The findings of this study indicate that perceived organizational support is one of the influential factors in shaping safety culture.Perceived organizational support refers to the degree to which employees believe that their participation, health, and well-being are valued by the organization 50 .When employees sense that the organization cares about them and provides them with adequate safety equipment, they are more likely to comply with safe and low-risk behavior.Perceived organizational support has a direct and indirect impact on safety outcomes by bolstering organizational identification.An organization that regards the preservation of safety and health as a moral value effectively communicates this value to its employees, thereby reducing the likelihood of their deviation from established safety standards 51 .
Change management
The change management process is another influential factor in shaping safety culture.By change management, we refer to the consideration of safety issues in implementing changes to industrial processes.Risk assessments should be conducted before and after changes to identify and control hazards, and employees should receive necessary training on the changes.Change management can enhance safety culture by increasing transparency and communication within the organization.By providing transparent and honest information to employees about the changes made and receiving feedback from them, the level of trust and satisfaction of employees with the organization can be increased, leading to improvement in safety culture.In addition to building trust, providing training and raising awareness among employees on safety issues is also crucial in the change management system.By offering appropriate training and guidance to employees on safety, change management can enhance the organization's safety capabilities and strengthen its safety culture.Furthermore, having such systems in place can increase the perceived importance of safety within the organization from the employees' perspective, leading to improved attitudes towards safety and an overall increase in safety culture 52 .
The relationships between identified influential factors
In the second phase, the relationship between identified factors examined using Fuzzy-DEMATEL technique.
Based on the results of this phase, eight variables were identified as causes, while ten variables were classified as effects.Results revaluated that organization management's commitment to safety had the highest weight among factors.
Management's commitment to safety is a key player in creating and maintaining a safety culture in any organization.When managers explicitly commit to the safety of employees and the work environment, it influences all members of the organization.A culture of safety is developed by implementing specific standards, procedures, and processes that are designed to ensure safety by managers.In general, it can be stated that the implementation of all identified influential factors depends on the commitment of organizational management.Therefore, this factor is considered a key player and one of the most influential factors.
Open communication is another factor that has been identified as a cause.One of the most important methods for engaging employees in creating a safety culture is to create an open environment for providing safety opinions and suggestions.Employees should feel confident that their opinions and ideas for safety improvements are valued and taken seriously by the organization.Furthermore, for the implementation of factors such as access to information, Incentive and punishment system, training, incident reporting systems, risk assessment, contractor management, permit systems, etc., open communication between different levels of the organization is necessary.Therefore, this factor is essential for ensuring the implementation of other factors and is considered a cause.
Employee participation is a fundamental factor in creating and strengthening a safety culture in organizations.When employees are actively involved in safety processes and contribute their perspectives, they will collaborate as a cohesive team to implement identified influential factors.Without employee participation, the implementation of factors that create a safety culture will remain enveloped in ambiguity.Therefore, this factor is also considered a cause.
Safety policies and regulations play a crucial role in creating and maintaining a culture of safety in organizations.These laws and standards consist of guidelines and requirements that are designed to preserve and enhance the safety of employees and the work environment.They also play a role in determining and validating safety processes.They encompass various stages, ranging from hazard identification and risk assessment, safety measures, workplace and facility design, selection and use of safety equipment, to safety training for employees and reporting accidents and incidents, outlining necessary processes and requirements.Considering the direct impact of this factor on the creation of other factors and, subsequently, the establishment of a safety culture, this factor is also considered a cause.
Another factor that is considered a cause is the factor of access to process information.For the implementation of certain identified factors, such as risk assessment and management, training, monitoring and inspection, etc., access to process safety information is necessary.Transparency is a key aspect in management, which plays a vital role in improving employee participation.How can we expect employees to participate in the implementation of process safety measures if they do not have information about them?Therefore, access to process information is considered a key factor.
The implementation of many identified factors, such as permit systems, work procedures, incident reporting, change management, etc., requires training.Therefore, this factor is also considered a cause.Another factor that has been identified as a cause is perceived organizational support.This type of support refers to the messages, behaviors, and measures provided by managers and top levels of the organization, demonstrating that safety culture and well-being are valued and prioritized.Since employees play a primary and key role in creating a safety culture, positive perception in this area from the organization can have a significant impact on strengthening a positive attitude towards safety among employees.
In the context of the fuzzy DEMATEL technique, effects refer to variables that are influenced by the system.Regarding process safety, effects can be seen as variables that contribute to the creating of culture.The factors identified as effects are influenced by the causal factors, and it can be said that they play an indirect role in creating a safety culture.
Improving safety culture can be achieved by prioritizing cause factors, as suggested by the study results.Nevertheless, it is important to remember that all the mentioned factors are significant and should not be disregarded.
The results reveals that the Organization management's commitment to safety factor had the greatest influence among all of the factors.Management's commitment to safety can serve as a behavioral model for employees within the organization.When employees perceive that their organization values safety, they are more likely to strengthen their own personal beliefs about safety and be more cautious in their safety-related behaviors.
The social exchange theory is used to explain how management behavior shapes employee perceptions and influences employee behavior 53 .According to the social exchange theory, voluntary behavior is prompted by the norm of reciprocity, meaning that individuals learn about social norms regarding their obligations as much as they fulfill mutual behaviors in official commitments.When individuals fulfill their social commitments, the process of exchange takes place.In terms of safety, when supervisors and managers convey their interest in safety to employees by valuing safety improvement, employees believe that the organization has a positive orientation towards safety, which in turn increases the likelihood of stimulating or exchanging ideas among employees regarding safety issues 54 and participation in other safety-related activities 55 .
Management's commitment to safety has a significant impact on the prioritization and decision-making processes related to resource consumption, business process design, and safety standards.As a result, since management's commitment to safety affects employees' positive attitudes toward safety and the provision of resources and processes related to safety, it is clear that this factor has the greatest influence on other factors.In other words, without management's commitment to safety, it cannot be expected that other factors effective in creating a safety culture within the organization will emerge, because these factors are reliant on management's attitude and decision-making regarding safety.A safety culture can only be effectively developed when leaders and employees actively participate in organizational safety management.If there are no essential management methods such as goals, policies, initiatives, organizational structure, and resource allocation, the development of a safety culture cannot be achieved at its fullest potential 56 .Safety management practices refer to the policies, strategies, procedures, and activities that organizations implement to ensure the safety of their employees.These practices are essential components of an effective organizational safety management system.Safety management systems and practices are precursors to the development of a process safety culture.Wiegmann and his colleagues carried out research in the aviation industry, and their findings indicated that organizational commitment was one of the most robust elements of safety culture, while reward systems were among the weakest aspects of the studied organization's safety culture 57 .Based on the obtained results, the safety culture can be enhanced by improving the organizational management's attitude towards safety.To improve the management's attitude, it is recommended to conduct training courses and provide economic justification for investment in the safety domain.
The study findings revealed that the most significant interaction was associated with the risk assessment and management aspect.The process of risk assessment and management is a crucial element in maintaining a safety culture, which involves the review and analysis of existing hazards, their severity assessment, and providing safety solutions to mitigate such risks.Given its importance, this process must be conducted with utmost care and coordination.In order to execute the risk assessment and management process effectively, there is a requirement for adequate information on processes, materials, and procedures.For this reason, communication with other safety factors like safety information, standard procedures, safety training, incident reporting, research, safety inspections, etc., is imperative.Furthermore, risk assessment and management itself is a constituent of other factors like permit issuance, change management, and maintenance and repair.To provide complete protection against hazards and risks in the work environment, this process must be carried out with precision and attention to all factors related to safety culture.
Improving safety culture can be achieved by prioritizing factors with a higher impact score, as suggested by the study results.Nevertheless, it is important to remember that all the mentioned factors are significant and should not be disregarded.
In this study, we have used the term 'process safety culture' interchangeably with 'safety culture.' Given that the identified factors exist in all industries and work environments, these factors are generalizable and applicable to safety culture and other environments as well.
Conclusion
In this research, a comprehensive analysis strategy has been introduced to examine the influential factors of process safety culture.By integrating hidden content analysis, DEMATEL, and fuzzy sets techniques, a robust and quantitative assessment of these factors has been achieved, leading to valuable insights on how to improve process safety culture.The key contributions and findings of this study are noteworthy: i. Qualitative results: Through a combination of literature review and the Delphi technique, a comprehensive set of influential factors of process safety culture was identified.ii.Quantification of qualitative results: By combining hidden content analysis with fuzzy DEMATEL, the qualitative results obtained from hidden content analysis were quantified.This conversion of linguistic estimates into fuzzy numbers allowed for a more precise and reliable analysis, reducing ambiguity in expert judgments.iii.Identification of cause-effect relationships: The proposed approach facilitated the identification of causeeffect relationships among the different factors involved in creating process safety culture.Through the fuzzy DEMATEL technique, influential factors and their interrelationships were determined, providing deeper insights into the causality of process safety culture.iv.Improved analysis reliability: By utilizing fuzzy sets, the analysis achieved higher levels of accuracy and reliability in assessing the impact of various factors on creating process safety culture.The fuzzy DEMA-TEL method enhanced the robustness of the results, enabling organizations to make informed decisions based on more reliable data.v. Enhanced improvement of process safety culture: The integration of hidden content analysis and fuzzy DEMATEL allowed organizations to identify and prioritize the main influential factors closely related to safety culture.By addressing these critical factors, organizations can effectively reduce barriers and enhance required actions, leading to an improved process safety culture.
In conclusion, the developed strategy combining hidden content analysis, DEMATEL, and fuzzy sets proves to be a valuable and effective approach for analyzing influential factors of process safety culture and enhancing it in process industries.The ability to quantify qualitative data, identify cause-effect relationships, and prioritize influential factors provides a comprehensive and actionable understanding of the causality of process safety culture.By adopting this approach, industries can proactively address vulnerabilities, mitigate barriers, and continuously improve their safety culture.
Figure 3 .
Figure 3. INRM of factors, the relationship between factors.
Table 2 .
Linguistic phrases and corresponding fuzzy numbers.
Table 3 .
The influential factors of process safety culture based on the literature review.
Vol.:(0123456789) Scientific Reports | (2024) 14:1470 | https://doi.org/10.1038/s41598-024-52067-7www.nature.com/scientificreports/and safety laws and regulations, providing employee training, establishing appropriate facilities and technical equipment, promoting a culture of safety, and fostering a sense of motivation to comply with safety protocols.Research has indicated that the establishment of a safety culture within an organization is directly linked to management's commitment to safety.
Table 4 .
The influential factors of process safety culture based on the Delphi technique.
Table 5 .
Results of coding the factors influencing process safety culture.
Table 8 .
The values of D, R, D + R, and D − R. Significant values are in bold. | 11,293.8 | 2024-01-17T00:00:00.000 | [
"Engineering"
] |
Cytosine base editors optimized for genome editing in potato protoplasts
In this study, we generated and compared three cytidine base editors (CBEs) tailor-made for potato (Solanum tuberosum), which conferred up to 43% C-to-T conversion of all alleles in the protoplast pool. Earlier, gene-edited potato plants were successfully generated by polyethylene glycol-mediated CRISPR/Cas9 transformation of protoplasts followed by explant regeneration. In one study, a 3–4-fold increase in editing efficiency was obtained by replacing the standard Arabidopsis thaliana AtU6-1 promotor with endogenous potato StU6 promotors driving the expression of the gRNA. Here, we used this optimized construct (SpCas9/StU6-1::gRNA1, target gRNA sequence GGTC4C5TTGGAGC12AAAAC17TGG) for the generation of CBEs tailor-made for potato and tested for C-to-T base editing in the granule-bound starch synthase 1 gene in the cultivar Desiree. First, the Streptococcus pyogenes Cas9 was converted into a (D10A) nickase (nCas9). Next, one of three cytosine deaminases from human hAPOBEC3A (A3A), rat (evo_rAPOBEC1) (rA1), or sea lamprey (evo_PmCDA1) (CDA1) was C-terminally fused to nCas9 and a uracil-DNA glycosylase inhibitor, with each module interspaced with flexible linkers. The CBEs were overall highly efficient, with A3A having the best overall base editing activity, with an average 34.5%, 34.5%, and 27% C-to-T conversion at C4, C5, and C12, respectively, whereas CDA1 showed an average base editing activity of 34.5%, 34%, and 14.25% C-to-T conversion at C4, C5, and C12, respectively. rA1 exhibited an average base editing activity of 18.75% and 19% at C4 and C5 and was the only base editor to show no C-to-T conversion at C12.
Introduction
The CRISPR-Cas9 editing system/complex consists, in its basic form, of a guide RNA (gRNA) and a Streptococcus pyogenes nuclease SpCas9 enzyme, which generate a targeted double-stranded DNA break, leading to the formation of insertions and/or deletions (indels) via the activation of the non-homologous end joining (NHEJ) DNA repair pathway frequently resulting in frameshift of the reading frame and loss of gene function (LOF) (Jinek et al., 2012a).Basic CRISPR-SpCas9-mediated gene editing has been further developed into cytidine base editors (CBEs), where single targeted cytosines are converted into thymines (C-to-T) (Komor et al., 2016) and later expanded to include targeted adenine-to-guanine (A-to-G) adenine base editors (ABEs) (Gaudelli et al., 2017) and C-to-G base editors (Kurt et al., 2021).Base editing (BE) was first and mainly employed in mammalian systems (Komor et al., 2016;Nishida et al., 2016;Gaudelli et al., 2017;Komor et al., 2017;Koblan et al., 2018;Thuronyi et al., 2019;Koblan et al., 2021;Kurt et al., 2021) but have since been adjusted to plants, including crops such as rice (Shimatani et al., 2017;Zong et al., 2017;Zong et al., 2018;Jin et al., 2019;Li C. et al., 2020;Hua et al., 2020;Xiong et al., 2022), wheat (Zong et al., 2017;Zong et al., 2018), maize (Zong et al., 2017) potato (Zong et al., 2018;Veillet et al., 2019a;Veillet et al., 2019b;Veillet et al., 2020a;Veillet et al., 2020b), and tomato (Shimatani et al., 2017;Veillet et al., 2019b;Veillet et al., 2020b).BE has been introduced and tested in potato protoplasts using Agrobacterium-mediated delivery of integrative constructs followed by editing analysis of regenerated explants (Zong et al., 2018;Veillet et al., 2019a;Veillet et al., 2019b;Veillet et al., 2020a;Veillet et al., 2020b) and using PEG-mediated delivery of non-integrative constructs into potato protoplasts (Zong et al., 2018).Both approaches included targeting Granularbound starch synthase (StGBSS), where Agrobacteriummediated delivery generally conferred high C-to-T conversion, some indel formation, and undesired C-to-A and C-to-G conversions in the explants examined (Veillet et al. (2019a), and delivery to protoplasts, in one instance, conferred an average of up to 18%-20% of C-to-T editing (Zong et al., 2018).Prime editing (PE) is a recent additional editing tool that allows controlled editing directly into the target site through the use of a reverse transcriptase and a specialized prime editing guide RNA (pegRNA), which confers the targeting and editing specificity and the binding capability to the nickase (nCas9) of the prime editing complex (Anzalone et al., 2019).PE has interesting potential within clinical applications (Surun et al., 2020;Geurts et al., 2021;Happi Mbakam et al., 2022a;Happi Mbakam et al., 2022b;Tremblay et al., 2022) and in crop breeding (Jiang et al., 2020b;Li H. Y. et al., 2020;Lin et al., 2020).Implementation of PE in plants on a wider scale, however, has proven difficult, perhaps due to the mode of action and the complex pegRNA structure (Zhao et al., 2023), underpinning the continued relevance of base editing.However, the applicability of base editing on a wider scale is constrained by moderate targeting specificity and efficiencies, which, to some degree, may be alleviated by design and efficiency optimizations of the BE construct at hand.Here, we further developed a non-integrative CRISPR/ SpCas9 construct, optimized and custom-made for potato protoplasts via replacement of the standard AtU6-1 promotor with a native potato StU6-1 promotor, to generate and compare three CBE constructs with different origins of the deaminase.When targeted to the granule-bound starch synthase (GBSS) 1 gene and tested on protoplasts of the cultivar Desiree, the three BEs generally conferred high C-to-T base editing efficiencies with, in one instance, 43% C-to-T conversion of a single cytosine.
Strains and cultivars
Potato (Solanum tuberosum) cultivar Desiree plantlets were grown and maintained in vitro on medium A, as described in the work of Nicolia et al. (2015) and Nicolia et al. (2021).The potato plants were grown in a Fitotron growth cabinet model SGC 120 from Weiss Technik with a diurnal rhythm of 16/8 h, 24 °C/ 20 °C, 70% humidity, at a light intensity of 65 μE.
Codon-optimized nucleotide sequences including assembly overhangs are provided in Supplementary Information.
Oligonucleotide primers
Primers were ordered from TAG Copenhagen A/S (https://www.tagc.com) and are listed in Supplementary Table S1.For working applications, 5 pmol/μL dilutions in Milli-Q water were prepared.
Gibson assembly mix transformation and sequence verification
extracted using the E.Z.N.A.(R) Plasmid DNA Mini Kit I (D6943-02) from Omega Bio-tek according to the manufacturer's instructions and sequenced by EZ-sequencing services provided by Macrogen to ascertain the correct sequence.
Large-scale plasmid editor purification and preparation for transformation
Following confirmation of the correct sequence, plasmids were amplified in E. coli and isolated by CTAB large-scale-prep plasmid phenol extraction and then diluted to a concentration of 1 μg/μL to be used for protoplast transformation.
Protoplast isolation and transformation
Media used for isolation and transformation include medium B, plasmolysis solution, medium C, wash solution, sucrose solution, transformation buffer 1, transformation buffer 2, PEG solution, and medium E, with recipes outlined in the work of Nicolia et al. (2021).Protoplast isolation was carried out as described in the work of Nicolia et al. (2015) and Nicolia et al. (2021).The intactness and purity of isolated protoplasts were checked by light microscopy and diluted to a concentration of ca.1.6 × 10 3 protoplasts/μL in transformation buffer 2.Then, 110 µL protoplasts (ca.1.6 × 10 3 protoplasts/μL) in transformation buffer 2 were gently mixed with 10 µL (1 μg/ μL) of base editing plasmid, and 110 µL 25% PEG solution was added, gently mixed, and incubated for 3 min at RT.Transfection was stopped by adding 6 mL of wash solution and then spun at 500 RPM for 5 min (minimum acceleration and deceleration), RT, the wash solution was carefully removed, and 1 mL of ½ medium E (diluted with 0.4 M sorbitol) was added.Protoplasts were then incubated in the dark for 2 days at 60 RPM, RT, which yielded optimal editing when using the original construct (Johansen et al., 2019).Following incubation, the protoplasts were harvested by spinning for 3 min at 4000 RPM, and the pellet was re-dissolved in 50 µL of Milli-Q water, frozen in N 2 , heated for 15 min at 96 °C, and stored at −20 °C.The protoplast slurry was thawed, placed on ice, and then, vortexed prior to entering as a template in PCR amplifications.
PCR amplification and product purification
PCR amplification of the target region of GBSS1 was performed using 6.25 pmol of primer 472 and 6.25 pmol of primer 384, 12.5 µL 2 X CloneAmp TM HiFi PCR Premix from Takara, and 1 µL of protoplast slurry (ca.1.6 × 10 3 protoplasts/ μL) in a total volume of 25 µL.PCR cycle parameters were 2 min at 98 °C, followed by 40 cycles of 10 s at 98 °C, 15 s at 64 °C, and 30 s at 72 °C, followed by 2 min at 72 °C.PCR products were purified using the BioLine ISOLATE II PCR & Gel Kit or NucleoSpin Gel and the PCR Clean-up Mini kit from Macherey-Nagel according to the manufacturer's recommendations.PCR products were then sent for sequencing (Sequencing directly on PCR products).
Indel detection amplicon analysis
PCR amplification of the GBSS1 target region was performed using 6.25 pmol of primer 475 and 6.25 pmol of primer FAM481 (5' end labeled with fluorescein amidite (FAM)), 12.5 µL 2 X CloneAmp TM HiFi PCR Premix from Takara, and 1 µL of protoplast slurry in a total volume of 25 µL.PCR cycle parameters were 2 min at 98 °C, followed by 40 cycles of 10 s at 98 °C, 15 s at 64 °C, and 30 s at 72 °C, followed by 2 min at 72 °C.PCR amplicons were wrapped in aluminum foil and stored at −20 °C until being subjected to indel detection amplicon analysis (IDAA) analysis at COBO Technologies Aps, Denmark, where the fluorescently labeled fragments were run on a sequenator 3500xL Genetic Analyzer (Applied Biosystems) and separated according to size by capillary electrophoresis, with a separation resolution down to fragments differing ±1 bp in length as described in the work of Yang et al. (2015).
Restriction digestion
StyI digestions were performed in a total volume of 10 μL containing 80 ng of PCR fragment DNA, 1 μL 10 x CutSmart Buffer (New England BioLabs), and 2 U StyI enzyme (New England BioLabs) and incubated at 37 °C for 3 h.Then, 4 U of StyI enzyme was additionally added and incubated for 1 h.BsrI digestion was performed in a total volume of 10 μL containing 80 ng of PCR fragment DNA, 1 μL 10 x NEBuffer 3.1 (New England BioLabs), and 2 U BsrI enzyme (New England BioLabs) and incubated at 65 °C for 2 h.
Sequencing directly on PCR products
Editing was also analyzed by Sanger sequencing, using the EZseq sequencing services provided by Macrogen, directly on PCR products using 20 ng of purified PCR product (PCR amplification and product purification) and 25 pmol of primer 589.It should be noted that for direct sequencing on PCR amplicons of the protoplast cell pool, discernable/readable sequence chromatograms were only obtained when using ca.20 ng of purified PCR product as opposed to the 50-75 ng recommended by Macrogen EZ-seq.
Data analysis
Editing efficiency was determined by analyzing sequence chromatograms using the EditR software (Kluesner et al., 2018).IDAA chromatograms were obtained using the online software VIKING (https://viking-suite.com/).
Results
Earlier, we used CRISPR/Cas9 for knockout of the GBSS 1 target gene in potato (Solanum tuberosum) (cultivar Desiree and Wotan), where the CRISPR/Cas9 components were transiently expressed from plasmids delivered by polyethylene glycol (PEG)-mediated transformation to protoplasts and explants regenerated from single edited protoplast cells (Johansen et al., 2019).Here, the target region of StGBSS1 (5' UTR, exon 1, intron 1, including length and singlenucleotide polymorphisms (SNPs)) in the potato cultivars Desiree and Wotan were sequenced and mapped, providing the allelespecific foundation for gRNA and diagnostic PCR primer designs for targeting and editing scoring of the CBE editors A3A, rA1, and CDA1 (Figure 1) in the present study.
Nickase and cytidine base editing activities were tested by the transient expression of the SpCas9/StU6-1::sgRNA1 nickase construct or the C-to-T base editors A3A, rA1, and CDA1 using PEG transformation of isolated potato protoplasts (cell pool) of cultivar Desiree, which were then cultured for 2 days as described in the work of Nicolia et al. (2015) and Nicolia et al. (2021) and outlined in Materials and methods, after which the target region was PCR-amplified, and each PCR amplicon was analyzed by both IDAA and amplicon sequencing, including EditR analysis, for potential nuclease-induced indels and C-to-T base editing activity.First, the SpCas9 in the construct SpCas9/StU6-1::sgRNA1 (Johansen et al., 2019) was converted into a nickase (nCas9) by changing the aspartic acid (Asp10) into alanine (Ala10) (D10A) (Jinek et al., 2012b) through the use of site-directed mutagenesis.The absence of nuclease activity from nCas9 was confirmed by full digestion of the BsrI restriction site situated 3 bp upstream of the protospacer adjacent motif (PAM) and confirmed by IDAA (Yang et al., 2015;Bennett et al., 2020), which displayed a PCR amplicon with unchanged length (see Supplementary Information) and sanger sequencing of PCR products (data not shown).Effect of placement of the three deaminases, A3A (human hAPOBEC3A), rA1 (rat evo_rAPOBEC1), and CDA1 (sea lamprey Petromyzon marinus, evo_PmCDA1), and the use of different linkers between fusion partners have been investigated earlier (Nishida et al., 2016;Zong et al., 2018;Tan et al., 2019;Thuronyi et al., 2019;Choi et al., 2021;Huang et al., 2021).In the present study, the deaminase was fused to the N-terminal of nCas9 because the three deaminases have been proven to be functionally active in this design and in order to enable comparison between the three CBEs.The CBEs, A3A, rA1, and CDA1, were initially scored for editing activity by checking for destruction of the StyI restriction site (C 4 C 5 WWGG) 11 bp upstream of the PAM site (TGG), where conversion of either or both of the two cytosines C4 and C5 would lead to resistance to StyI digestion (see Supplementary Information).C-to-T editing efficiencies of A3A, rA1, and CDA1 were confirmed and scored by direct sequencing and quantified using the EditR software (Kluesner et al., 2018), with A3A having the best overall activity with an average 34.5%, 34.5%, and 27% C-to-T conversion at C4, C5, and C12 in the target (gRNA) sequence (GGTC 4 C 5 TTGGAG C 12 AAAAC 17 TGG), respectively, whereas CDA1 showed an average C-to-T conversion of 34.5%, 34%, and 14.25% at C4, C5, and C12, respectively.rA1 showed an average C-to-T conversion of 18.75% and 19% at C4 and C5 and was the only base editor to show no C-to-T conversion at C12. C17 conversion was not observed for any of the three base editors.All three base editors showed stable conversion rates with at least 21% C-to-T conversion for C4 and C5 (a single exception being rA1 replicate 2), and A3A and CDA1 showed an average 34% conversion rate for C4 and C5.The highest C-to-T conversion was observed for A3A replicate 4, which showed 39%, 43%, and 36% for C4, C5, and C12, respectively (Figures 2B-D).Neither indel formation, as evidenced by IDAA and direct sequencing results (Supplementary Information), nor unintended C-to-A or C-to-G changes, as evidenced by direct sequencing results (Figure 2B and Supplementary Information) and EditR analysis (Figure 2C and Supplementary Information), were encountered in the present study, which, however, was confined to the protoplast pool.Direct sequencing identified two allele-specific SNPs (Figure 1 and Supplementary Information), indicating amplification of the four alleles.Detailed information regarding constructs, protoplast isolation, PEG-mediated transformation and incubation, restriction enzyme, IDAA analyses, and direct sequencing on the protoplast cell pool is provided in Materials and methods and Supplementary Information.
Discussion
The use of CRISPR-based precise gene editing, including base and prime editing, for crop improvement has recently been reviewed (Butt et al., 2020;Gurel et al., 2020;Mishra et al., 2020), with a particular focus on potato protoplasts, e.g., provided in the work of Hofvander et al. (2022).
Here, we further developed a CRISPR/SpCas9 construct optimized for potato, where replacement of the standard Arabidopsis thaliana AtU6-1 promotor driving the expression of the gRNA, with the endogenous potato StU6-1 promotor, resulted in a 3-4-fold increase in editing efficiencies at the protoplast cell pool level (Johansen et al., 2019), into CBE constructs.We used three different CBE constructs, in which either of three deaminases, A3A (human hAPOBEC3A), rA1 (rat evo_rAPOBEC1), and CDA1 (sea lamprey Petromyzon marinus, evo_ PmCDA1), were C-terminally fused to a SpCas9 nickase (nCas9) and uracil-DNA glycosylase inhibitor (UGI), which were combined with the native potato StU6-1::gRNA-1 cassette expressing the gRNA.Each CBE was targeted to exon 1 of the GBSS1 gene and transformed into protoplasts of potato cultivar Desiree with their base editing conversions scored.All three constructs displayed high C-to-T conversion activities, peaking at C4 and C5 in the target (GGT C 4 C 5 TTGGAGC 12 AAAAC 17 TGGTGG) sequence, with A3A, CDA1, and rA1 displaying average C4 and C5 C-to-T conversions of 34.5% & 34.5%, 34.5% & 34%, and 18.75% & 19%, respectively.A3A and CDA1 displayed 27% and 14.25% C-to-T conversion at C12, and rA1 showed no C12 C-to-T conversion, which is in agreement with the fact that the rAPOBEC1 deaminase, from which rA1 is derived, has previously been reported to be inefficient in a GC context (Zong et al., 2018).The importance of the sequential location of Cs and different sequence preferences for different deaminases have been highlighted in other studies (Tan et al., 2019;Tan et al., 2020;Huang et al., 2021).
With the exception of a single replicate, all three CBEs conferred >= 21% C-to-T conversion of C4 and C5 in the target sequence (see Figure 2C rA1-2), with an average 34% C-to-T conversion for C4 and C5 for both the A3A and CDA1, which, to our knowledge, are the highest average C-to-T conversions obtained when employing PEG-mediated delivery into potato protoplasts.In comparison, Zong et al. (2018) obtained, in one instance, an average of up to 18%-20% C-to-T editing when targeting the StGBSS1 gene in potato protoplasts (Zong et al., 2018).Protoplasts transformed with non-integrative constructs will, unlike agrobacterium-transformed plants that may display chimerism (Faize et al., 2010), generate single-protoplast-cellderived genetically uniform explants and enable a potential replacement of plasmid with ribonucleoprotein (RNP), thereby excluding the presence of DNA in the entire editing process.
The averaged significantly higher editing efficiency obtained in the present study may be attributed to the use of the native potato StU6-1 promoter, driving the gRNA, which appeared rate limiting in Johansen et al.'s (2019) study, although differences in CBE construct architecture and composition or methodology may also be contributing factors.Zong et al., 2018 pioneered the implementation of C-to-T base editing in plants using the human APOBEC3A-based and the rat APOBEC1-based cytidine deaminase construct (Zong et al., 2018).The APOBEC3A-and APOBEC1-based CBEs were delivered into cells of potato, rice, and wheat by PEG-mediated transformation of protoplasts, Agrobacterium-mediated transformation of callus, or biolistic delivery into immature embryo cells.In most experiments, the human APOBEC3A-based CBE outperformed the rat APOBEC1based CBE, a tendency which was confirmed for the A3A and rA1 CBEs generated and tested in the present study.
Distribution of C-to-T conversion across the target sequence, i.e., editing frequencies at C4, C5, C12 and C17, seemed to be somewhat similar to what has been reported for other base editing constructs (Huang et al., 2021).However, careful construct design/ architecture, e.g., adjustments of flexible linker lengths, may elevate a desired target position accuracy (Tan et al., 2019;Tan et al., 2020).In addition, the development of base editors with alternative PAM specificities expands the freedom to operate and may potentially affect precision (Veillet et al., 2020a;Veillet et al., 2020b).The base editing efficiencies presented here were obtained via transient nonintegrative PEG transformation of the protoplast cell pool level, where the A3A CBE, in one instance, conferred 43% C-to-T conversion of C5.
CBEs have, in some settings, been reported to additionally generate C-to-G or C-to-A conversions, although at lower frequencies than the targeted C-to-T conversions (Komor et al., 2017) and indels, whereas in one study, 75% of explants transformed with an agrobacteriummediated integrative CBE construct were found to contain indels (Veillet et al., 2019b).Similar undesired conversions or indel formation, e.g., as evidenced by direct sequencing, EditR analysis, and IDAA, were, within the resolution of the analytic methods applied, not encountered in the present study, which, however, was confined to the protoplast cell pool.
PE enables controlled generation of small insertions, deletions, or base substitutions as part of the prime editing guide RNA (pegRNA) and was originally described as a tool for correcting DNA in humans in relation to disease (Anzalone et al., 2019).PE has also been applied in plants, such as rice (Li H. Y. et al., 2020;Lin et al., 2020) and maize (Jiang et al., 2020a), with moderate success, highlighting the importance of testing a range of pegRNAs.Recent implementation of PE in the model plant Physcomitrium patens and tetraploid potato also pinpointed limitations of the technology, which need to be overcome before PE may become a versatile efficient tool in precision plant breeding (Perroud et al., 2022).The PE repertoire has, as in the case of the base editor repertoire, been expanded with alternative PAM specificities (Kweon et al., 2021).
The construct design and protocols for scoring C-to-T base editing presented in this study may readily be converted into A-to-G base editors (ABEs), probably with comparable efficiencies.Thus, for now, and with the editing efficacies obtained in this study, BE still remains a competitive relevant tool in the toolbox for precise plant breeding.
FIGURE 1
FIGURE 1 GBSS 1 target gene (cultivar Desiree).Exon 1 of the GBSS 1 target gene with both length and SNPs between the four alleles in the cultivar Desiree is indicated.The target gRNA sequence GGTC 4 C 5 TTGGAGC 12 AAAAC 17 TGG (blue box), with target cytosines (Cs) in green, PAM in red, and the diagnostic restrictions sites StyI and BsrI, is indicated.White numbered boxes depict exons, while stars indicate SNPs or size polymorphisms between the four alleles.Arrows indicate diagnostic PCR primers for editing scoring, amplifying the target region inside or outside the length polymorphisms.The figure is based on and adapted from the work of Johansen et al. (2019). | 4,998 | 2023-08-30T00:00:00.000 | [
"Biology"
] |
Krüppel-Like Factor 2 Is a Gastric Cancer Suppressor and Prognostic Biomarker
Gastric cancer (GC) is a common digestive tract tumor. Due to its complex pathogenesis, current diagnostic and therapeutic effects remain unsatisfactory. Studies have shown that KLF2, as a tumor suppressor, is downregulated in many human cancers, but its relationship and role with GC remain unclear. In the present study, KLF2 mRNA levels were significantly lower in GC compared to adjacent normal tissues, as analyzed by bioinformatics and RT-qPCR, and correlated with gene mutations. Tissue microarrays combined with immunohistochemical techniques showed downregulation of KLF2 protein expression in GC tissue, which was negatively correlated with patient age, T stage, and overall survival. Further functional experiments showed that knockdown of KLF2 significantly promoted the growth, proliferation, migration, and invasion of HGC-27 and AGS GC cells. In conclusion, low KLF2 expression in GC is associated with poor patient prognosis and contributes to the malignant biological behavior of GC cells. Therefore, KLF2 may serve as a prognostic biomarker and therapeutic target in GC.
Introduction
Gastric cancer (GC) ranks ffth among common cancers and third among cancer-related causes of death [1]. China is a high-incidence area for GC [2]. Despite great improvements in medical equipment and treatment, overall survival remains low in GC patients, and many patients are in advanced stages once diagnosed [3,4]. Increasing evidence suggests that H. pylori infection is a major risk factor for the development of GC, but the incidence of H. pylori-related GC is signifcantly reduced due to vaccination of patients or taking drugs to treat H. pylori infection, while the number of GC patients due to genetic variants is increasing [5]. Pereira et al. showed that transcriptional alterations and copy number variations (CNV) of GSDMC in GC are associated with stronger tumor aggressiveness [6]. In addition, an increasing number of studies have found that abnormal genomic expression in GC patients is associated with poor patient prognosis [7][8][9]. However, due to genetic heterogeneity and complex patterns of genetic span across diferent tumor stages, so far, no specifc biomarker has been able to accurately diagnose or predict the prognosis of GC [10]. Many patients lack specifc diagnostic biomarkers and targeted therapies, resulting in signifcantly reduced survival times. Terefore, it is important to explore new diagnostic and prognostic biomarkers as well as to develop targeted therapeutics for the diagnosis and treatment of GC.
Krüppel-like factor2 (KLF2) is an important member of the Krüppel-like factor (KLF) family, which plays an antitumor role in many cancers [11]. KLF family members are involved in cell diferentiation, proliferation, migration, and pluripotency as transcriptional regulators [12][13][14]. KLF has been reported to be prone to genomic changes. Yin et al. showed that the downregulation of KLF2 expression in nonsmall cell lung cancer is associated with a poor prognosis for patients [15]. In GC, KLF2 is inhibited by the regulation and expression of long chain noncoding RNA (lncRNA) and microRNA (miRNA), thus promoting the progress of GC [16,17]. A recent study showed that downregulation of KLF2 expression in human cancers is caused by epigenetic silencing of histone methyltransferase EZH2 [11]. Tese studies suggest that KLF2 as a tumor suppressor gene is closely related to tumor development. However, the relationship between KLF2 and GC and its efect on GC has not been fully elucidated. Terefore, it is worth further revealing the relationship between KLF2 and GC, its clinical characteristics, and its role.
In recent years, bioinformatics analysis of large sample data combined with clinical studies and molecular experiments to identify reliable genetic biomarkers is another important means to study tumor pathogenesis and discover new therapeutic targets. Wei et al. demonstrated that miR-486-5p is a tumor suppressor of GC and inhibits gastric cancer cell growth and migration by downregulating fbroblast growth factor 9, through the TCGA database and cell experiments [18]. In addition, Chu et al. used bioinformatics techniques combined with clinical data to analyze the expression of thrombospondin-2 (TSP2) in GC tissues and its correlation with clinicopathological features and clinical prognosis of GC patients, indicating that TSP2 is a potential marker and therapeutic target for the prognosis of GC patients. In vitro experiments further demonstrated that TSP2 promoted the proliferation and migration of HGC-27 and AGS GC cell lines by inhibiting the VEGF/ PI3K/AKT signaling pathway [19]. Terefore, the aim of this study was to investigate the mechanism of KLF2 expression in GC, the correlation of clinical features, and the efect on the biological function of GC cells by using bioinformatics techniques and molecular biology experiments.
Genetic Changes and Prognostic Value Analysis.
Using the cBio Cancer Genomics Portal (cBioPortal) online software (https://cbioportal.org), genomic alterations of KLF2 in GC were analyzed, including mutations in KLF2 and the proportion of data distribution and the frequency of changes (mutations and amplifcations) of KLF1 gene in gastric cancer subtypes [24]. Also, the association between KLF2 gene alterations and overall survival, disease-free survival, and progression-free survival in GC patients was investigated to assess the prognostic value. Subsequently, we also analyzed copy number variation (CNV) of KLF2. In addition, the CNV module of Gene Set Cancer Analysis (GSCA; https://bioinfo.life.hust.edu.cn/GSCA/#/) was used to analyze the proportion of KLF2 heterozygous/homozygous and amplifed/deleted in GC, the Spearman correlation between KLF2 mRNA expression and CNV, and survival diferences between KLF2 CNV and wild type [25].
Tissue Microarray and Immunohistochemistry.
A chip (No.: HStmA180Su19) containing 86 adjacent noncancerous specimens and 88 gastric cancer tissues was purchased from Shanghai Biochip Limited company. Te specimens were subjected to TNM pathological staging according to the 7th edition of the AJCC/UICC tumor lymph node metastasis (TNM) staging classifcation. Te expression of KLF2 protein in the samples was detected by the immunohistochemical EnVision method. Following deparafnization, hydration, antigen retrieval, and blocking endogenous peroxidase activity of the tissue microarray according to the kit instructions, the chip was incubated with KLF2 polyclonal antibody (PA5-40591, Invitrogen, USA) for 1 h. Afterwards, the chip was processed using the EnVision Rb Gt anti-Rb-HRP kit (K4003, Dako, Denmark) according to the manufacturer's instructions and visualized by DAB staining solution (Dako, Denmark) in the dark. Following rinsing in tap water, the chips were counterstained for chips using hematoxylin, dehydrated in a graded water-ethanol series, soaked in xylene, and fnally sealed with neutral gum. Staining signals were visualized using a Nikon microscope (Tokyo, Japan) and statistically analyzed using ImageJ Pro (Media Cybernetics, Rockville, Maryland, United States). Quantitative analysis was performed independently by two pathologists who were blinded to the experimental groupings. Te fnal score is obtained by multiplying the staining intensity and the percentage of positive tumor cells and ranges from 0 to 300. Staining intensity scores were 0 (negative), 1+ (weak), 2+ (moderate), and 3+ (strong). Percentage of positive tumor cells is 0-100%. A fnal score of <100% is considered low and >100% is considered high [26].
KLF2 siRNA (si-KLF2) and its control (si-control) were designed and synthesized by Invitrogen (USA). When AGS and HGC-27 cells reached 80% confuency, the above siR-NAs were transfected into GC cells using Lipofectamine ® 2000 (Termo Fisher Scientifc, Inc., USA) according to the manufacturer's instructions. Six hours after transfection, the fresh serum-containing medium was replaced. Te culture was continued for 48 h and cells were collected for CCK-8, colony formation, wound healing, transwell, RT-qPCR, and immunoblot analysis. Te si-KLF2 sequence is as follows: si-KLF2: sense: UCAACAGCGGCUGGACUUTT, antisense: UAAGCCAGCACGCUGUGGAUTT.
Colony Formation Assay.
Transfected AGS and HGC-27 cells (5 × 10 2 cells/well) were seeded into 12-well plates and cultured for 14 days at 37°C in a 5% CO 2 incubator to form macroscopic colonies, the medium was removed, and the cells were washed three times with PBS; then, the colonies were fxed with 4% paraformaldehyde for 20 minutes at room temperature and the colonies were stained with 0.1% crystal violet for 10 minutes. Images were taken under a light microscope, and the number of cell colonies >50 cells was calculated [27].
Wound
Healing Assay. Transfected GC cells (3 × 10 5 cells/well) were seeded into 24-well plates and cultured at 37°C and 5% CO 2 for 24 h. When more than 90% confuent monolayers were formed, cell monolayers were scratched with a 100 μl pipette tip and washed with PBS to remove foating cells. Subsequently, fresh serum-free medium was added to each well, and after 24 h of culture, images at 0 and 24 h after scratching were collected with a light microscope (Nikon, Japan), and then, wound healing rates were measured and calculated using ImageJ software. Te formula for wound healing rate is as follows: [wound width (0 h) wound width (48 h)]/wound width (0 h)×100% [27].
2.9. Transwell Assay. Transfected AGS and HGC-27 cells (1 × 10 5 cells/ml) were suspended in serum-free RPMI-1640 medium and seeded into the upper chamber precoated with Matrigel (BD Biosciences). Ten, RPMI-1640 medium containing 10% FBS was added to the lower chamber of a 24well plate and cultured for 24 h at 37°C in an incubator with 5% CO 2 . Remove the culture medium. Cells remaining in the upper chamber were gently removed with a cotton swab. Cells were rinsed three times with PBS, and then, cells were fxed with 4% paraformaldehyde for 20 min at room temperature and stained with 0.1% crystal violet for 10 min at room temperature. Under a light microscope, fve felds were randomly selected to observe cells, acquire images, and calculate the number of migrating or invading cells [28].
Statistical Analysis.
All gene expression data were normalized by log2 transformation. Kaplan-Meier analysis and log-rank test were used for survival analysis. Cox regression models were used to analyze the relationship between KLF2 expression and clinicopathological parameters. Pearson or Spearman tests were used to assess the association between the two variables. All experiments were repeated three times. Experimental data were analyzed with GraphPad Prism 8.0 (GraphPad-La Jolla, CA, USA) Evidence-Based Complementary and Alternative Medicine software. Experimental results are presented as mean-± standard deviation (SD). Diferences between the groups were analyzed using the paired Student's t-test or one-way analysis of variance (ANOVA) with the multiple group Tukey's multiple comparison post hoc test. P < 0.05 was considered statistically signifcant.
KLF2 Is Lowly Expressed in GC.
Aberrant expression of KLF2 has been reported in a variety of cancers [30,31]; however, the association and the role of KLF2 and GC have not been elucidated. In this study, we found that KLF2 mRNA levels were KLF2 expression promotes the malignant biological behavior in GC cells and accelerates GC progression. lower in various tumor tissues than in normal tissues through the TIMER2.0 database, and KLF2 mRNA expression was signifcantly lower in STAD tissues than in normal tissues (Figure 1(a)). CCLE database analysis showed low KLF2 expression in GC cell lines (Figure 1(b)). Data from GC tissues downloaded from the TCGA database and GEPIA database for analysis were consistent with the above results (Figures 1(c) and 1(d)). Further validation by RT-qPCR analysis revealed a signifcant downregulation of KLF2 mRNA levels in GC patient tissues compared to adjacent normal gastric tissues (Figure 1(e)). In addition, immunohistochemical staining showed signifcantly higher KLF2 protein expression in adjacent normal gastric tissues than in GC tissues (Figure 1(f)). Together, these results suggest that KLF2 is aberrantly expressed in GC tissues and may be a potential target for GC and deserves in-depth investigation.
Association of KLF2 Genomic Alterations with Prognosis in GC.
In order to elucidate the mechanisms underlying aberrant KLF2 expression, in this study, we used the cBioPortal database to analyze the amplifcation frequency and mutation types of KLF2 alterations in a cohort of GC patients. Te results showed that KLF2 was mutated at a frequency of 1.1% in GC, and the mutation types included mutation and amplifcation (Figures 2(a) and 2(b)). Further analysis of KLF2 gene mutations on the prognosis of GC patients was performed. Te results showed that GC patients in the KLF2 mutant group had poorer disease-free survival (Figure 2(c)), disease-specifc survival (Figure 2(d)), overall survival (Figure 2(e)), and progression-free survival (Figure 2(f )) compared with the nonmutant group, but there was no signifcant diference (P > 0.05). In addition, we analyzed the CNV distribution (including amplifcation and deletion) of KLF2 in GC (Figure 2(g)). In addition, further analysis by the GSCA database showed 10.66% and 26.98% CNV for total amplifcation and total deletion of KLF2 in STAD, with 26.76% heterozygous deletions (Figure 3(a)). Te Spearman correlation between KLF2 mRNA expression and CNV was 0.2 ( Figure 3(b)). Overall survival (OS) was lower in KLF2 CNV (amplifcations and deletions) compared to wild type (Log-rank P value � 0.38) (Figure 3(c)). Tese results suggest that low KLF2 expression in GC may be associated with genomic deletions.
Correlation between KLF2 Expression and Clinicopathologic Features of GC Patients.
Subsequently, KLF2 protein expression in GC tissue microarrays was detected by IHC in this study (Figure 4(a)), and the results showed that the higher the age of patients (>75 years) and the greater the T stage (T3 and T4), the lower the KLF2 expression level (Figures 4(b) and 4(c)). In addition, GC patients with low KLF2 expression had shorter survival times and lower overall survival (Figure 4(d)). Terefore, these results suggest that low KLF2 expression promotes tumor invasion/metastasis and can be used as a diagnostic biomarker for GC with an important clinical value.
Knockdown of KLF2 Promotes Malignant Biological Behavior of GC Cells.
To further elucidate the mechanism of KLF2's action on GC, in this study, we investigated the efect of KLF2 on GC cell growth and proliferation, migration, and invasion by knocking down KLF2 in HGC-27 and AGS cells. Western blot analysis showed that knockdown of KLF2 in GC cells resulted in higher levels of N-cadherin, Snail, vimentin, and Twist protein expression than in the si-control group, but reduced E-cadherin protein expression ( Figure 5(a)). CCK-8 and colony formation assays showed that knockdown of KLF2 signifcantly promoted HGC-27 and AGS cell growth and proliferation compared to the sicontrol group (Figures 5(b)-5(e)). In addition, wound healing assay and transwell analysis showed that the wound healing rate and number of migrated and invaded cells in the si-KLF2 group were signifcantly higher than those in the sicontrol group (Figures 5(f )-5(j)). Tese results indicate that low KLF2 expression promotes the malignant biological behavior in GC cells and accelerates GC progression.
Discussion
GC is one of the most common gastrointestinal tumors in mammals. Due to western diets, bad living habits, and increased pressure to study and work, the incidence of GC is rising,, which will cause a huge medical burden on society. It is reported that the occurrence of GC is closely related to many factors such as Helicobacter pylori infection, eating habits, environment, genetics, and immunity [32][33][34], among which genetic factors play a crucial role [35]. At present, the treatment of GC mainly includes drug and surgical treatment, but most patients still have metastasis and recurrence after conventional radiotherapy and chemotherapy [36]. In addition, many studies have focused on the important impact of host genetic susceptibility on the development of GC. Based on this, some abnormally expressed genes have been identifed as biomarkers and therapeutic targets. However, the diagnosis and prognosis of GC are still not accurate. In this study, the expression mechanism, prognostic value, and potential role of KLF2 in GC development were investigated in depth by using bioinformatics techniques combined with clinical tissue samples and cellular experiments.
Cancer-associated infammation emerging at diferent stages at the time of tumorigenesis has been reported to lead to genomic instability and epigenetic modifcations [37]. Previous studies have shown that KLF2 is downregulated in GC and prone to genomic dysregulation [15]. In this study, KLF2 was signifcantly lower expressed in GC tissues and cells and may be associated with mutations in the KLF2 gene. Correlation analysis of clinical characteristics showed a negative correlation between KLF2 and age, T stage, and survival in GC patients. Te knockdown of KLF2 in GC cells was further analyzed to investigate the efect of low KLF2 expression on GC function. Te results showed that the knockdown of KLF2 could signifcantly promote the growth, proliferation, migration, and invasion of GC cells, as well as tumor development. Tis is in agreement with Wang et al. [38]. In addition, Li et al. showed that the long noncoding RNA DLEU1 promotes GC progression by epigenetically inhibiting KLF2 [39]. Another study showed that micro-RNAs promote GC cell proliferation and invasion by targeting KLF2 [16]. MicroRNA-32-5p promotes GC development by activating the PI3K/AKT signaling pathway and targeting KLF2 expression [40]. Furthermore, a study has shown that SUZ12 may regulate proliferation and metastasis of GC, through modulating KLF2 and E-cadherin [22]. We also observed a regulatory relationship between KLF2 and E-cadherin. Te knockdown of KLF2 resulted in a reduction in E-cadherin. Furthermore, the expression of epithelial-mesenchymal transition (EMT)-related proteins (N-cadherin, Snail, Vimentin, and Twist) was increased after KLF2 knockdown. Tese studies suggest that KLF2 may be a novel therapeutic target for GC. In conclusion, low KLF2 expression promotes the malignant biological behavior of GC, and KLF2 acts as a tumor suppressor and may be a potential therapeutic target and prognostic biomarker for GC. Although this study provides a rationale for the role of KLF2 in GC pathogenesis, we did Evidence-Based Complementary and Alternative Medicine Evidence-Based Complementary and Alternative Medicine not further investigate the efect of epigenetic alterations on KLF2 underexpression in GC nor did we assess the number of patients who developed KLF2 underexpression in GC, and therefore, we will continue in-depth studies through databases and clinical trials in the future.
Data Availability
Te data used to support the fndings of this study are available from the corresponding author upon request.
Conflicts of Interest
Te authors declare that they have no conficts of interest. | 4,099.2 | 2023-02-23T00:00:00.000 | [
"Biology",
"Medicine"
] |
A statistical method for transforming temporal correlation functions from one-point measurements into longitudinal spatial and spatio-temporal correlation functions
The transformation of temporal, one-point correlation functions into longitudinal spatial and spatio-temporal correlation functions in turbulent flows using a simple statistical convection model is introduced. To illustrate and verify the procedure, experimental data (one-point and two-point) have been obtained with a laser Doppler system from a turbulent, round, free-air jet.
Introduction
In the study of turbulence, temporal and spatial correlation functions are fundamental quantities defining characteristic scales of motion. In particular, integral scales and Taylor microscales are directly defined from the correlation function and, under the hypothesis of local isotropy, an estimate of the rate of dissipation per unit mass can be obtained.
Experimentally, measurements of correlation functions have often been performed using single-point measurement techniques, for example with stationary hot-wire probes (e.g., Favre 1965) or laser Doppler anemometers (e.g., Romano et al. 1999). Temporal Eulerian correlation functions can be obtained directly from the measured time series. Two-point or spatial correlation functions can be obtained with an array of multiple probes or, sequentially, with two measurement probes at varied separations. With particle image velocimetry (PIV), the Eulerian spatial correlation can be measured directly from the spatially resolved velocities at given time instances. However, the temporal and spatial resolution of PIV usually lacks the requirements to obtain small scales, for example the Taylor microscale, with sufficient accuracy. High-speed particle tracking (e.g., Ouellette et al. 2006) allows the Lagrangian correlation statistics to be obtained.
Without spatially resolved data, longitudinal two-point or spatial correlation functions are often approximated by using single-point, temporal correlation functions and invoking Taylor's frozen flow hypothesis (Taylor 1938) (TH), which assumes that the fluctuating velocity u 0 is small compared to the mean velocity U, that is, u 0 =U ( 1; thus, spatially fluctuating quantities of the advected fluid along a path line of the fluid can be observed as temporal fluctuations at a given point. The functions investigated in the past span from local derivatives of velocities or of passive scalars like the temperature, where TH reads to statistical functions f, like correlations or structure functions, using f ðx þ n; tÞ ¼ f ðx; t À n=UÞ: Although TH requires small fluctuations compared with the mean advective velocity, it has not seldom been applied to turbulent flows, where the above condition is no longer met. Here, as expected, it has been observed that TH works well only at small scales, smaller than the typical scale of the flow (Lin 1953), where local homogeneity can be assumed. Corrections due to the fluctuating advective velocity may or may not be necessary (Heskestad 1965;Tennekes 1975;Browne et al. 1983;Hill 1996).
If applied to functions on larger scales, corrections of the TH become necessary, considering the variations of the velocity in a turbulent flow. Cholemari and Arakeri (2006) introduce methods which give a correspondence between the temporal and longitudinal spatial correlations by translating the time lag into separation or vice versa. However, they cannot be used to derive the spatio-temporal correlations, which decrease in amplitude for larger separations. Cenedese et al. (1991) predict the decrease in correlation by transfer functions, which work as a phase shifter from lower to higher frequencies. In He and Zhang (2006) and Zhao and He (2009), an elliptical approximation based on the secondorder Taylor-series expansion of the spatio-temporal correlation function is introduced, reproducing well the shift and deformation of the correlation peak. However, the latter two methods describe the effect phenomenologically and require empirical parameters to be adapted to the measured data.
In the present study, an integral method based on the probability density distribution of the fluid velocity is proposed to transform temporal into longitudinal spatial and spatio-temporal correlation functions (Sect. 2). The procedure is validated by comparing results to two-point correlations measured directly with a laser Doppler system from a turbulent, round, free-air jet (Sect. 3).
Integral time-to-length transform
To transform a temporal correlation function into a longitudinal spatial correlation function, the spatio-temporal correlation function Rðn; sÞ ¼ u 0 ðx; tÞu 0 ðx þ n; t þ sÞ h i is introduced, where the chevrons mean the expectation of the inner term. The temporal correlation function obtained from single-point measurements then is R(0, s) with the time delay s and no separation (n = 0). The transformation into the longitudinal spatial correlation function with the separation n and no time delay is then with the time of flight h, which the flow needs to cover the separation n. Allowing both arguments, n and s of the transformed correlation function to be varied, the spatiotemporal correlation function is obtained.
TH assumes that the flow is ''frozen'' while it moves through the measurement point(s) with the mean velocity U yielding Turbulent flows with fluctuating velocity u(t) may significantly differ from this ''frozen'' hypothesis. The idea of the new integral time-to-length transform (ITLT) is that any fluid structure with a specific temporal correlation function, which has been measured at a specific point, can move to another point in flow direction within a certain time. This time depends on the varying fluid velocity u. While TH assumes that this velocity is constant, namely the mean velocity U, ITLT considers that the velocity and also the time to cover the separation between the two points can vary.
For a given separation n, every possible velocity value u yields a different time of flight h. For given n and u, the time of flight is determined by assuming that u does not change within the separation n (or within the time of flight h). If u has a probability density function p u (u), then (for a given n) the time of flight has a probability density of Integration over all possible times of flight yields an averaged spatio-temporal correlation function 3 Experimental verification Laser Doppler data have been taken in a turbulent, round, free-air jet, as illustrated in Fig. 1. The flowfield consists of the turbulent inner jet and an outer co-flow. Both parts of the flowfield are seeded. The co-flow is slow and has a low turbulence intensity of about 1 %. The velocity of the outer co-flow is expected to be uniform over the outlet, while the velocity profile of the inner flow is fully developed. The flow specifications are summarized in Table 1 and the specifications of the laser Doppler system in Table 2. A two-velocity component laser Doppler system has been used with two independent probes, yielding a two-point system. Both measurement volumes measure the u component. The two channels acquire the velocity samples independently (free-running mode) without coincidence.
The Reynolds number is 20,000 based on the inner mean velocity and diameter. The laser Doppler data have been taken in the center of the jet at a distance x = 320 mm (40d i ) downstream. The separation of the measurement volumes has been varied between n = 0 and n = 32 mm (4d i ) with symmetrical shifts of ±n/2 with respect to x. For one-point measurements, the appropriate data sets have been selected from the two-point measurements.
The mean velocity U and the RMS velocity u 0 are obtained from the measurements as ensemble averages applying transit time weighting (Hösel and Rodi 1977;Fuchs et al. 1994). The mean velocity decays from 7.85 to 7.15 m/s over the measurement region, while the RMS velocity decays from 1.44 to 1.40 m/s. This corresponds approximately to an inverse relation with the distance to some virtual origin lying within the nozzle; the virtual origin of the RMS data being slightly further inside the nozzle than the mean velocity data set. A turbulence intensity u 0 =U of about 19 % is found, slightly increasing with the distance from the nozzle. Comparing with Wygnanski and Fiedler (1969), this small value indicates that the present free jet may not be fully developed in the measurement region; this is of no direct consequence for the present study.
The correlation functions are obtained from the measurements using the fuzzy slotting technique with local time estimation (Nobach 2002) and transit time weighting. Then, the correlation functions R(n, s) have been normalized, yielding the correlation coefficient functions q(n, s) with using the RMS u 0 of the fluctuating part of the velocity for the autocorrelation cases and for the cross-correlation cases, where u 0 1 and u 0 2 are the RMS velocities at the two measurement points. Figure 2 shows the autocorrelation coefficient function obtained from the one-point experimental data. Note, that the variance estimates are biased by data noise, whereas the correlations are not, yielding an autocorrelation coefficient q \ 1 at s = 0. Figure 3 shows examples of crosscorrelation coefficient functions obtained from two sets of two-point experimental data for separations n of 0 and 32 mm. The two LDV measurement volumes for two-point measurements are aligned parallel. Therefore, the crosscorrelation for two overlapping measurement points (n = 0) almost reaches the amplitude of the autocorrelation for one measurement point. While both the noise in the autocorrelation/one-point case and the differences between the measurement channels in the cross-correlation/twopoint case generate systematic errors of the estimated correlation coefficients, this bias does not affect the following derivations of the spatio-temporal transform. Therefore, no corrections of the correlation functions have been undertaken. Based on the mean velocity U and the RMS velocity u 0 obtained from the measured data set, the temporal correlation function is transformed into the longitudinal spatial correlation function using TH [Eqs.
(1), (3)] and ITLT [Eqs. (5), (6)]. In the present study, a Gaussian distribution of the velocity u is assumed to derive the probability density function of times of flight. Alternatively, the probability density can be derived directly from the measured data. Figure 4 shows the results of the two transforms in comparison with the longitudinal spatial correlation function obtained from two-point measurements. The results of the different methods have no significant deviations from each other or from the two-point cross-correlation. Even the simple transform based on TH shows reasonable results. This coincides directly with the results found in Tummers et al. (1995). For a flow with 25 % turbulence intensity, this is noteworthy, since this is far away from a ''frozen'' flow condition.
To understand this observation, a detailed look at the spatio-temporal correlation function is useful. Figure 5 shows the results of the transformation based on TH and ITLT in comparison with appropriate temporal cross-correlation functions from the two-point measurement at a separation of the measurement volumes of n = 32 mm (4d i ). The shift in time of the maximum correlation corresponds to the mean time of flight to cover the separation. The height of the maximum decreases, which indicates that the shape of fluid structures changes during their passage. Furthermore, the peak width increases, indicating spatial diffusion of the fluid structures. Similar behavior has been shown by Kerhervé et al. (2008) for a turbulent round jet and by Cenedese et al. (1991) and by Chatellier and Fitzpatrick (2005) for other turbulent flow configurations.
TH simply shifts the autocorrelation function to the time of flight given by the separation of the probes divided by the mean velocity. The shape of the obtained spatio- temporal correlation function is the same as the autocorrelation function. Therefore, the transform based on TH is not able to reproduce the degradation of the correlation height or the expansion of the correlation width. However, the accuracy of the predicted time shift by TH is additionally limited due to the fact that the maximum position of the correlation peak may deviate from the separation divided by the mean velocity due to a skewness of the deformed peak. This yields a correlation peak traveling slower than the mean velocity as observed by He and Zhang (2006) and in Zhao and He (2009). In contrast, ITLT is able to recover the spatio-temporal correlation correctly, including the time shift, the degradation of the correlation height and the expansion of the correlation width. Even the skewness of the peak is reproduced correctly.
In Fig. 6, the results of ITLT are shown in comparison with the two-point measurements for different longitudinal spatial separations of the two measurement volumes. The results of ITLT clearly correspond to the two-point measurements, indicating that ITLT is an appropriate model for the temporal development of fluid structures, while the result for TH would fail.
Although the spectral transfer functions from the autocorrelation to the modeled spatio-temporal correlation functions are redundant, if the time responses are correct, it is still interesting to verify the correspondence of the obtained transfer functions with the results in Cenedese et al. (1991). Therefore, the experimentally obtained spatio-temporal correlation functions and the pendents obtained by TH and ITLT are Fourier transformed and divided by the Fourier transform of the autocorrelation function.
The diagrams in Fig. 7 show the first 15 complex spectral transfer coefficients for the case n = 32 mm (4d i ).
The result of TH rotates in the complex plane with unit magnitude, corresponding to a simple time shift. Although the experimental data strongly scatter, ITLT obviously reproduces the amplitude decreasing with increasing frequency, yielding similar results as in Cenedese et al. (1991).
Unfortunately, if u changes within the separation n (or within the time of flight h) the probability density function of times of flight changes and also the fluid structure passing by changes, yielding an additional degradation of the correlation. This case is a strong limitation of the present transformation method, which requires the temporal correlation in the Lagrangian framework to be much longer than the time of flight for a certain velocity and a given separation of the measurement volumes.
An indication on how good this requirement is fulfilled is given by comparing the integral time scales derived as the integral of the cross-correlation coefficient functions for different separations of the measurement volumes. If the requirement of small changes during the passage is fulfilled, the integral time scale should be constant and independent of the separation of the measurement volumes. Figure 8 shows the integral time scales obtained from the two-point measurements. With increasing separation, a small decrease in the integral time scale can be observed. However, it is small enough to allow reliable application of the ITLT method.
Longitudinal spatial correlation
In deriving the spatio-temporal correlation function, ITLT is clearly superior to the TH. However, if only the longitudinal spatial correlation function is required at turbulence levels at least up to 25 %, the TH performs as well (Fig. 4). To estimate the longitudinal spatial correlation function from two-point measurements, only the values at s = 0 (coincidence) are measured for several separations n, while all other time lags of the spatio-temporal correlation are not taken into account. However, on the left tail of the spatiotemporal correlation function, the two transform methods almost coincide (Fig. 5). Significant deviations are visible only at the peak center and on the right tail of the spatiotemporal correlation function. Only for turbulence levels above at least 25 % could we expect deviations on the left tail of the spatio-temporal correlation functions occurring between the TH and ITLT, yielding also differences between the longitudinal spatial correlation functions.
Conclusion
An integral time-to-length transform method has been introduced. It is capable of reproducing the longitudinal spatial-temporal correlation by considering fluctuations of the varying convective velocity. Therefore, it is able to provide longitudinal spatial and spatio-temporal correlation functions from temporal correlation functions obtained from single-point measurements. In the case of turbulent flows, it is superior to Taylor's hypothesis of a ''frozen'' flow. The integral transform method is able to recover the spatio-temporal correlation correctly, including the time shift, the degradation of the correlation height, the expansion of the correlation width and the skewness of the peak.
On the contrary, the transform of a temporal correlation function into a longitudinal spatial correlation function based on Taylor's hypothesis is possible up to turbulence intensities of at least 25 %, because the systematic errors are small, even if the model of a ''frozen'' flow is far from reality. However, for turbulence intensities beyond 25 %, differences between the methods may occur also at the time lag s = 0; hence, differences of the obtained longitudinal spatial correlation function must be expected as well. However, Taylor's hypothesis is not able to recover the temporal development of fluid structures and, therefore, is not capable of transforming temporal correlation functions into spatio-temporal correlations, which the integral method is able to reproduce reliably. | 3,848.6 | 2012-10-12T00:00:00.000 | [
"Physics",
"Engineering"
] |
Modulating Effects of the Plug, Helix, and N- and C-terminal Domains on Channel Properties of the PapC Usher*
The chaperone/usher system is one of the best characterized pathways for protein secretion and assembly of cell surface appendages in Gram-negative bacteria. In particular, this pathway is used for biogenesis of the P pilus, a key virulence factor used by uropathogenic Escherichia coli to adhere to the host urinary tract. The P pilus individual subunits bound to the periplasmic chaperone PapD are delivered to the outer membrane PapC usher, which serves as an assembly platform for subunit incorporation into the pilus and secretion of the pilus fiber to the cell surface. PapC forms a dimeric, twin pore complex, with each monomer composed of a 24-stranded transmembrane β-barrel channel, an internal plug domain that occludes the channel, and globular N- and C-terminal domains that are located in the periplasm. Here we have used planar lipid bilayer electrophysiology to characterize the pore properties of wild type PapC and domain deletion mutants for the first time. The wild type pore is closed most of the time but displays frequent short-lived transitions to various open states. In comparison, PapC mutants containing deletions of the plug domain, an α-helix that caps the plug domain, or the N- and C-terminal domains form channels with higher open probability but still exhibiting dynamic behavior. Removal of the plug domain results in a channel with extremely large conductance. These observations suggest that the plug gates the usher channel closed and that the periplasmic domains and α-helix function to modulate the gating activity of the PapC twin pore.
The cell envelope of Gram-negative bacteria contains a vast array of protein machineries dedicated to the translocation of polypeptides across the cytoplasmic membrane, periplasm, and outer membrane (OM) 3 (1,2). Some of these complexes also participate in the assembly of surface-exposed appendages, such as flagella and pili (fimbriae). One of the most thoroughly studied secretion systems is the chaperone/usher pathway, responsible for the biogenesis of a superfamily of virulenceassociated surface structures, including P and type 1 pili (3). These pili play essential roles in the pathogenesis of uropatho-genic Escherichia coli by providing a tool for attachment of the bacteria to host urothelial cells (4 -6). P pili, encoded by the chromosomal pap gene cluster, are critical virulence factors for infection of the kidney by uropathogenic E. coli and the development of pyelonephritis. The P pilus is composed of multiple subunits of PapA, which form a rigid helical rod. A thin linear tip fibrillum is located at the distal end of the pilus and is made of four different subunits (PapK, PapF, PapE, and the adhesin PapG) that assemble in a precise order and stoichiometry (3). The minor pilin, PapH, anchors the pilus rod to the cell surface (7).
The pilus subunits are first translocated through the cytoplasmic membrane via the Sec general secretory pathway (8). Once in the periplasm, the subunits form binary complexes with the PapD chaperone. The details of the binding interaction between the chaperone and subunits were revealed by crystal structures of chaperone-subunit complexes (9 -11). The actual assembly of the subunits into a pilus and secretion of the pilus fiber to the cell surface is mediated by the OM usher, PapC (12). The usher recruits chaperone-subunit complexes from the periplasm and provides a platform for polymerization of the subunits in a precise order (13,14). The energy for pilus formation at the OM is thought to be provided by the polymerization itself, and the details of the interactions between subunits during the polymerization process are well understood (11,15,16). Nevertheless, how the usher facilitates polymerization and how fiber growth is coupled to translocation are two currently unresolved questions.
The PapC usher is a dimer where each monomer is composed of four domains: 1) a N-terminal periplasmic domain (ϳ135 residues), 2) a -barrel domain (residues 135-640), 3) a plug domain (residues 257-332) located within the -barrel domain, and 4) a periplasmic C-terminal domain (residues 641-809) (17)(18)(19). The N-terminal domain of the usher has been implicated in the recognition and initial binding of chaperone-subunit complexes (20,21). The C-terminal domain participates in the binding of chaperone-subunit complexes and is required for further assembly (17,22). A major breakthrough in our understanding of the chaperone/usher system came with the elucidation of the three-dimensional structure of the usher translocation channel (19). The crystal structure of a 55-kDa fragment corresponding to the predicted transmembrane domain (residues 130 -640) revealed that the pore is a kidneyshaped -barrel of 24 strands (see Fig. 1A, blue), the largest number of strands identified for OM proteins so far. The inner dimensions of the pore are extremely large (25 ϫ 45 Å), and it is likely that a gaping pore of such dimensions would be deleteri-ous to the cell. Thus, not surprisingly, the pore is occupied by a plug domain, formed by the region between strands 6 and 7, which adopts a six-stranded  sandwich fold (see Fig. 1A, orange). The plug domain appears to be held in place by a hairpin loop between strands 5 and 6 (see Fig. 1A, magenta). Finally, another salient feature of the PapC pore is the presence of an ␣-helix (residues 448 -465) that interacts with the 5-6 hairpin and is positioned over the hairpin and plug domain on the extracellular side of the barrel lumen (see Fig. 1A, aqua).
There is evidence that PapC indeed functions as a dimer during pilus biogenesis. PapC was shown to assemble as a dimeric twin pore complex in the lipid bilayer by cryo-EM (23) and in solution by crystallography (19). In addition, complementation of defective PapC mutants was achieved by expression of the homologous FimD usher used in type 1 pilus biogenesis (17,22), and PapC-FimD interactions were demonstrated (22). Recently, a type 1 pilus intermediate was isolated, and a snapshot of the pilus biogenesis process was captured by cryo-EM of this complex (19). The images revealed electron densities for the FimD usher that corresponded to the crystal structure of the PapC dimer. In addition, the images revealed that the assembling pilus fiber appears to utilize only one of the usher monomers for secretion. Based on the structural and biochemical information obtained so far on both the P pilus and the type 1 pilus systems, a model for pilus assembly at the usher was presented (19) whereby the periplasmic N-terminal domains of both usher monomers alternate in delivering the subunits to an assembly platform provided by the usher dimer, whereas translocation through the OM occurs via a single pore.
To gain insight into the molecular mechanism of the usher during pilus assembly and secretion, we have initiated an electrophysiological analysis of the pore function of PapC. Here we used the planar lipid bilayer technique to investigate the channel properties of the PapC usher and some of its mutants. The comparison of the functional properties of the wild type and mutant proteins allowed us to gain insight into the role of various domains in the basal activity of the usher channel and set the stage for using electrophysiology as a functional assay for secretion.
EXPERIMENTAL PROCEDURES
Media and Reagents-Dodecyl-maltopyranoside and lauryl(dimethyl)amine oxide were purchased from Anatrace, and N-octyl-oligo-oxyethylene was from Axxora. Thrombin was from Novagen. The pentane and hexadecane used in planar lipid bilayer experiments were from Burdick and Jackson and TCI, respectively. Other chemicals were from Sigma or Fisher.
Strains, Plasmids, and Growth Conditions-E. coli strains DH5␣ (24) and the multi-porin mutant BL21(DE3)Omp8 (25) were used for plasmid construction and PapC purification, respectively. Bacteria were grown in LB broth containing 100 g/ml ampicillin at 37°C with aeration. For protein expression, PapC was induced at an A 600 of 0.6 for 2-2.5 h by the addition of 0.1% L-arabinose. Plasmid pDG2 encoding wild type PapC with a thrombin cleavage site and a His 6 tag added to the C terminus was previously described (23). New plasmids were constructed from pDG2 to engineer the various mutations. A detailed description of the strategies and the primer sequences are given in supplemental Fig. S1. All of the PapC mutants were checked for proper construction by DNA sequencing.
PapC Purification-The OM fraction from bacteria induced for expression of one of the PapC constructs was isolated by French press disruption and Sarkosyl extraction, as described (22). The OM was solubilized by resuspending into 20 mM Tris-HCl (pH 8), 0.12 M NaCl, and 1% dodecyl-maltopyranoside and rocking overnight at 4°C. The extract was clarified by ultracentrifugation (100,000 ϫ g, 1 h, 4°C), imadazole was added to 20 mM to the supernatant fraction, and this fraction was run over a nickel affinity column (HisTrap; GE Healthcare) using an Akta fast protein liquid chromatography apparatus (GE Healthcare). The bound PapC protein was eluted using an imidazole step gradient in buffer A, which is 20 mM Tris-HCl (pH 8), 0.12 M NaCl, and 10 mM lauryl(dimethyl)amine oxide. Fractions containing PapC were pooled, and the His 6 tag was cleaved off PapC by overnight digestion at room temperature with 1.5 units of thrombin/mg of PapC while dialyzing against 20 mM Tris-HCl (pH 8) and 0.12 M NaCl. The thrombin was inhibited by addition of phenylmethylsulfonyl fluoride, and the mixture was subjected to a second round of nickel affinity chromatography in buffer A. In this case, the thrombin-cleaved PapC came off the column in the flow-through fraction. The collected flowthrough material was concentrated using a Millipore Ultrafree centrifugal concentrator (molecular mass cut-off, 50 kDa) and applied to a HiLoad 16/60 Superdex 200 prep grade gel filtration column (GE Healthcare) in buffer A. The peak fraction containing PapC was collected from the gel filtration column and concentrated using the centrifugal concentrator as described above. Purified protein was stored frozen in aliquots.
Analysis of PapC Mutants for Expression and Function in Pilus Biogenesis-Each of the PapC mutant constructs was compared with the wild type PapC for expression level in the OM by immunoblotting with anti-His 6 tag (Convance) or anti-PapC antibodies. Proper folding of the PapC constructs in the OM was checked by resistance to denaturation by SDS, as determined by heat-modifiable mobility on SDS-PAGE. These procedures were performed as described (20,22). The ability of each of the PapC mutants to complement a ⌬papC pap operon for assembly of P pili was determined by hemagglutination assay and purification of pili from the bacterial surface as described (22).
The release of pilus subunits into culture supernatant fractions was analyzed as follows. Bacteria expressing WT PapC or one of the PapC mutants together with a ⌬papC pap operon as described above were grown to mid-log phase and placed on ice. Five ml of each culture was centrifuged (3,000 ϫ g for 10 min at 4°C) to separate the bacteria from the culture supernatant. The supernatant was removed to a new tube, centrifuged again (10,000 ϫ g for 10 min at 4°C) and passed through a 0.22-m filter (Millipore) to remove any remaining bacteria. The bacterial pellet was resuspended in 1 ml of 10 mM Tris (8.0), and an aliquot of the resuspended bacteria was mixed with an equal volume of 2ϫ SDS-PAGE sample buffer and heated at 95°C for 10 min. The cell-free culture supernatant was precipitated by the addition of 0.5 ml of 100% trichloroacetic acid and incubation on ice for 30 min. The precipitate was pelleted (10,000 ϫ g for 20 min at 4°C), washed twice with acetone, resuspended in 100 l of 1ϫ SDS-PAGE sample buffer, and heated at 95°C for 10 min. Twelve l of the whole bacterial sample (representing 0.06 ml of the original culture) and 5 l of the trichloroacetic acid-precipitated culture supernatant (representing 0.5 ml of the original culture) were subjected to SDS-PAGE, and chaperone-subunit complexes were detected by immunoblotting with anti-PapDG or anti-PapDE antibodies.
Planar Bilayer Electrophysiology-A 100-m-diameter hole was formed in a 0.01-mm-thick Teflon film and pretreated with a 1% solution of hexadecane in pentane. The Teflon film was sandwiched between two Teflon chambers filled with 1.5 ml of 1 M KCl, 5 mM Hepes (pH 7.2). Fifty g of soybean phospholipids (phosphatidylcholine Type II-S; Sigma) in pentane (5 mg/ml) were added to both chambers. A lipid bilayer was formed over the hole by lowering and raising the level of buffer in one chamber. An aliquot of pure protein was added to one chamber only (cis-side), from a 5-100-fold dilution of the purified protein (0.3-0.6 mg/ml) in a 1% N-octyl-oligo-oxyethylene solution in the 1 M KCl buffer. If no spontaneous insertion of channels occurred within 10 min, the membrane was broken and formed again, and/or more protein was added. The amount of protein added varied considerably depending on the success of insertion (10 -7,000 ng added to the chamber).
Currents were recorded under voltage-clamp conditions with an Axopatch 1D amplifier connected to a CV-4B headstage (Axon Instruments). The voltages given are those of the cis-compartment with respect to the trans compartment (ground). The currents were filtered at 1 kHz and digitized every 100 s (Instrutech). The data were stored on a PC computer using the Acquire software (Bruxton) and analyzed with pClamp.
RESULTS
Phenotypic Characterization of the PapC Mutants-We constructed a series of mutations in PapC to investigate the roles of major domains and features in pilus biogenesis and channel function. The following regions were deleted either alone or in combination ( Fig. 1B): the N-and C-terminal domains (⌬N⌬C), the plug domain (⌬plug), the 5-6 hairpin that interacts with the plug (⌬hairpin), and the ␣-helix that caps the hairpin (⌬helix). The PapC ⌬N⌬C mutant corresponds to the usher translocation domain for which the structure was recently determined (Fig. 1A) (19). Each of the PapC mutants was compared with the wild type protein for expression level in the OM and resistance to denaturation by SDS, which provides an indication of proper folding and stability of the -barrel domain (26). The 5-6 hairpin appears to be important for the overall stability of the usher, because the expression levels of the ⌬hairpin mutant and a combined ⌬hairpin⌬helix mutant in the OM were very low (Table 1). In addition, a triple mutant deleted for the N-and C-terminal domains and the plug domain (⌬N⌬C⌬plug) had very low expression and no longer exhibited SDS resistance ( Table 1), indicating that the combined absence of these three domains prevents proper folding of the usher in the OM. The other PapC deletion mutations had no or only small effects on protein expression and stability ( Table 1).
Functionality of the PapC deletion mutants was assessed by the ability to complement a ⌬papC pap operon for assembly of pili on the bacterial surface and agglutination of red blood cells ( Table 1). As expected based on previous studies (20 -22), deletion of the N-and C-terminal domains abolished the ability of the usher to assemble pili. The ⌬plug mutant was also unable to assemble pili, demonstrating an essential functional role for this domain. In contrast, the mutation affecting the 5-6 hairpin (⌬hairpin) had only mild effects on pilus biogenesis, and the ⌬helix mutant behaved essentially as WT (Table 1). However, the combined ⌬helix⌬hairpin mutant was completely defective for assembly on the bacterial surface. The ability of the ⌬hairpin mutant to assemble pili was surprising giving its low expression level (Table 1). This mutant may be stabilized by interactions with chaperone-subunit complexes and the assembling pilus fiber. Although the ⌬N⌬C, ⌬plug, and ⌬helix⌬hairpin PapC mutants were unable to assemble pili on the bacterial surface, these mutants, particularly ⌬plug, might allow release of unassembled pilus subunits into the extracellular milieu. However, comparison of culture supernatant fractions from bacteria expressing WT PapC or one of the deletion mutants together with a ⌬papC pap operon did not show increased release of pilus subunits (data not shown).
Channel Signatures of Wild Type and Mutant PapC-The channel properties of purified wild type and mutant PapC proteins were investigated with the planar lipid bilayer technique. The ⌬hairpin, ⌬hairpin⌬helix, and the ⌬N⌬C⌬plug mutants were not analyzed for channel formation because of their effects on protein stability and/or expression. In the planar lipid bilayer technique, a phospholipid bilayer is formed over a 100m-diameter hole in a Teflon film separating two buffer-filled chambers, and an aliquot of purified protein in detergent is added to one chamber only (the cis-side). Gentle stirring of the chamber promotes the spontaneous insertion of single poreforming protein. Under voltage-clamp conditions, such insertion events are typically witnessed by a change in the amount of current passing through the bilayer and a change in the pattern of current fluctuations. Some proteins are more prone to rapid insertion; the E. coli porin OmpF, for example, inserts within a few minutes. We found that the wild type and mutant PapC proteins were relatively reluctant to insertion. An allowance of up to 10 min was set for insertions, and often several attempts were necessary, even with amounts 7,000 times larger than those typically used for OmpF porin (ϳ1 ng). We attribute this intrinsic reluctance to the complex structural organization of the usher protein, with two subunits comprising a 24-stranded -barrel domain and large N-and C-terminal domains. Despite this technical difficulty, we were able to obtain multiple recordings of each protein investigated here. We have not yet been able to experimentally determine the orientation of the inserted proteins, but we assume that the protein inserts with the extracellular side going in first, because it is unlikely that the hydrophilic N-and C-terminal domains would be able to go through the bilayer. Fig. 2 shows typical recordings of the spontaneous activity of the wild type PapC and the ⌬N⌬C, ⌬plug, and ⌬helix mutants.
The recordings were made in buffer containing 1 M KCl, which is a typical composition for the initial characterization of poreforming proteins when using this technique, because it stabilizes the membrane and provides a good signal-to-noise ratio because of the large currents at high ionic concentration. The four protein types have very distinct kinetic signatures, which were found repeatedly in different experiments (Ͼ15 for wild type PapC, 5 for ⌬N⌬C, 9 for ⌬plug, and 8 for ⌬helix). The fact that each protein has a characteristic activity pattern underscores that the observed activity is not artifactual and does not originate from the instability of the membrane. All four channels show spontaneous opening and closing transitions that are seen as spiky deflections of the current traces. These recordings were made at a clamped potential of Ϫ30 mV, and by convention openings are shown as downward spikes (marked by the downward arrows in Fig. 2, A and B). It is noteworthy that, in the same conditions, the ⌬plug channel passes a huge amount of current, with a maximum current level of several hundreds of pA at this relatively low voltage (for comparison, the current passing through the three channels of a trimeric OmpF porin is about 120 pA at 30 mV) (27). In fact, we found that we need to maintain the membrane potential below 50 mV and limit the recordings to no more than 30 s to avoid difficulties in clamping the membrane potential. The ⌬helix channel shows bursts of activity comparable with that of the ⌬plug channel, although the size of the events is smaller. Another remarkable trait is that none of these channels is either completely closed or completely open, as might have been expected for the wild type and ⌬plug PapC, respectively, and all show a substantial amount of dynamic behavior. In the text below, we describe the activities in further detail.
Activity of Wild Type PapC-The trace in Fig. 2A shows that the current through the wild type channel remains at a baseline level that is close to the zero current level but displays frequent short-lived transitions to open states of various sizes. In Fig. 3, we display the traces on an expanded time scale to show the details of the opening activity. All four traces in Fig. 3 were obtained from the same bilayer at ϩ50 mV (A and B) and Ϫ50 mV (E and F). C and D of Fig. 3 show amplitude histograms constructed from 1-min-long recordings in the same experiment at ϩ50 and Ϫ50 mV, respectively. Such histograms represent the number of sample points having the current values given in the abscissa (bins are 0.5 pA). The peaks correspond to A and B, respectively, and one at 146 pA is shown in F (numbers in angled brackets represent the current difference from the base line). These transitions are not frequent or longlived enough to yield peaks on the amplitude histograms. The diversity of current levels has made it impossible to generate current-voltage relationships, because we do not know what current steps belong to the same type of events while the voltage is being changed. There also does not seem to be strong reproducibility in the size and the frequency of these events from one bilayer to the next.
Finally, we have seen some variability in the current value for the base line. A current value of 70 pA, such as seen in Fig. 3 (A and B), is rather high for a true closed state. There is indeed a leak through the bilayer (or at the bilayer-Teflon interface), but such a leak is typically on the order of a few pA. Close inspection of many traces shows that, in fact, the base-line level might represent a very stable, already partially open state of the channel, because a closing step is observed from this level on rare occasions. This phenomenon is also observed in the ⌬helix and the ⌬plug mutants (Fig. 2).
In summary, the pattern of activity of the wild type channel is reproducible and characterized by the following traits: 1) the channel is mostly in a leaky, low conductive state, 2) the channel shows additional transient openings to many different current levels that are not integer multiples of each other, 3) some of the current levels can be quite large, 4) transitions to the various current levels typically involve a single step, with no dwelling at intermediate levels. This kinetic signature suggests that the bilayer contains a single protein, although we do not have direct proof that this is the case. However, if multiple proteins were present, the larger events would be composites of smaller ones with a transient step, because it would be unlikely that several proteins gate in perfect unison. We also have to consider that the protein is dimeric and thus can expect to occasionally see two events of the same size stacked on one another, as seen in Fig. 3F. But again, there is no direct proof at this point that these stacked events originate from two different monomers rather than one.
The ⌬N⌬C Mutant Is a Mostly Open Channel-The N-and C-terminal domains have been shown to play essential roles in pilus assembly (17, 20 -22). The structure of the N-terminal domain of the FimD usher was determined and shown to form a soluble, globular domain (21). The structure of the usher C-terminal domain is unknown but is believed to form a similar globular domain. We were interested in testing whether these domains might modulate the basal channel activity of PapC by analyzing a mutant deleted in both domains. The truncated protein was even more reluctant to insert than wild type PapC, even if reconstituted first in artificial liposomes and then added to the bilayer. A reproducible activity was found in five record-ings, one of which contained what appears to be a single dimer and is shown in Fig. 4 (A-C). The traces reveal a channel that is essentially open most of the time (P o , the probability to be in the open state, is 0.9) and undergoes transient closures to two well defined current levels. Marked segments are shown on an expanded time scale below the main traces. In principle, there could be three possible scenarios to account for these data: 1) the activity originates from two monomers within one dimer, 2) it originates from two monomers belonging to two distinct dimers, where only one monomer from each is active, and 3) it originates from two separate entities that inserted as monomers. We can dismiss the third possibility on the basis that the proteins were purified as dimers (supplemental Fig. S2) and should insert as such. Of the other two possibilities, we believe that the first one is most likely, because of the high level of cooperativity observed in the opening and closing transitions (Fig. 4, see expanded traces in A and B and also see D). Therefore, the simplest explanation for the observed activity is that a single dimer was reconstituted in the experiment shown in Fig. 4 (A and B), and we are witnessing the activity of each monomer, either individually or cooperatively.
For this reason, we are assigning current levels to the closed (C), open monomer (O1), and open dimer (O2) states as indicated. In this experiment, transitions to the O1 level were not frequent enough to yield a peak in the amplitude histograms displayed to the right of the main traces. These histograms, however, do show that the major peak corresponds to the O2 level. Fig. 4C is a current-voltage (I/V) plot generated from this experiment, where the size of the current difference between the closed level and each of the O1 and O2 levels has been plotted versus voltage. The slope of the linear regression gives the so-called channel conductance, which is indicative of the channel size. Conductance depends not only on the pore dimensions but also on the interactions between the permeant ions and the channel wall. Nevertheless, a large conductance is typically found in wide channels, such as porins (28) and other translocons (29,30). The conductances derived from these plots are 799 and 1612 pS for the O1 and O2 pores, respectively. The fact that these two conductances differ by a factor of 2 lends support to the notion that they represent the monomeric and dimeric channels. The O1 conductance is in the range of the conductance derived from the smallest events recorded from the wild type channel (ϳ500-600 pS).
The trace in Fig. 4D shows transitions to many different levels (highlighted by dashed lines) that are roughly integer multiples of a current size of ϳ40 pA (800 pS), which we attribute to the monomeric current (marked M). This bilayer contained multiple channels, and the trace shows a mixture of well defined transitions of monomeric size (M) and dimeric size (D), sometimes occurring individually or in a stepwise fashion. The maximum current level is Ϫ245 pA (at Ϫ50 mV); therefore the channels, as seen in the traces in A and B, are open most of the time, a hallmark of the activity pattern of the ⌬N⌬C mutant. Importantly, single-step events of size comparable with the large events seen in wild type PapC (indicated by arrows in Fig. 3) were not observed. Multi-channel containing bilayers, such as this one, were obtained in three other occasions and also displayed the same activity pattern.
The ⌬plug Mutant Has an Extremely High Conductance-A salient feature of the PapC structure is the occlusion of the -barrel lumen by a large globular domain acting as a plug (Fig. 1A). The absence of the plug is expected to yield a wide channel delimited by the 24 -strands. Electrophysiological analysis confirms that this is indeed the case. The traces in Fig. 5 (A and B) reveal a dynamic channel that dwells at three preferred current levels. Based on the same arguments as we made for the ⌬N⌬C mutant above, we believe that these levels correspond to a closed and two open states representing the monomeric and dimeric forms from the same usher dimer. Note that these traces were obtained at a transmembrane potential of only Ϯ 15 mV. Potentials higher than ϳ40 mV led to so much current and activity that the traces became difficult to analyze (as can be surmised from the trace of Fig. 2C). The I/V plot of Fig. 5C was obtained from the same experiment shown in Fig. 5 (A and B). The linear regression through each plot provided conductance values of 2,950 pS for the O1 level, and 7,330 pS for the O2 level. As a reference, the conductance of a single monomer of the porin OmpF is 1,400 pS in the same conditions (27).
In terms of kinetics, we found that, surprisingly, the channel displays a lot of dynamic behavior and is not a gaping open pore, as might have been anticipated. It does appear to favor an open state, but numerous transitions to less conductive states are frequently observed, some of them even for prolonged periods of time (Fig. 5). An interpretation for this observed pattern is provided under "Discussion." At potentials greater than 40 -50 mV, the traces dwell at a very high current level, suggesting that the channel is mostly open, and the quality of the recording deteriorates somewhat because of the large amount of current and channel noise. In the experiment of Fig. 5, channel opening appears favored at negative potentials, as can be seen from the relative size of the peaks of the amplitude histograms, but this asymmetry has not been a consistent trend.
The ⌬helix Mutant Allows Occasional Plug Release-The single ␣-helix in the PapC protein is located on the extracellular side, as a cap that covers the dent in the barrel wall because of the inwardly folding of the 5-6 hairpin (Fig. 1A). It is anticipated that removal of the helix might release the 5-6 hairpin keeping the plug domain in place. The electrophysiological recordings revealed that the ⌬helix mutant is somewhat destabilized and will occasionally burst into a gating pattern that is similar to the ⌬plug mutant. Because of the similarity in behavior with the ⌬plug mutant, our interpretation of these results is that, indeed, the plug domain can spontaneously dislodge for some period of time in this mutant. In some experiments, the ⌬plug-like behavior is apparent at the onset of the recording, as shown in Figs. 2D and 6 (A and B), as if the protein inserted in a conformation where plug movement had already occurred. Other times, the channel displays at first a quiet activity and then switches to a high activity mode. This switch may be preceded by a stretch of opening transitions to low conductance states (Fig. 6, C and D). This type of behavior has been observed in eight independent experiments, but there is variability in the time dependence, duration, and frequency of the ⌬plug-like activity mode. The conductance of the ⌬helix mutant is some- C, current-voltage relationships of the monomeric (O1) and dimeric current (O2). The conductance was calculated as the slope of each linear regression and was found to be 799 and 1,612 pS for O1 and O2, respectively. D, representative 1.8-s current trace obtained at Ϫ50 mV from a bilayer that contained more than one channel. To save space, the zero current level is not shown, but the maximum current observed is given (Ϫ245 pS). what smaller than that of the ⌬plug mutant, with values of ϳ2,500 and 5,800 pS for the O1 and O2 states, respectively.
DISCUSSION
We have presented here the first report on the electrophysiological activity of the wild type and mutants forms of the PapC Ϫ15 mV (B). At positive voltages, the channel is largely in a closed state (marked C) but displays frequent opening transitions (upward) to O1 and O2 current levels. This kinetic behavior is reflected in the sizes of the peaks of the amplitude histogram shown at the right side of the trace. At negative voltages, however, the channel spends more time in the O2 state than in the closed state (openings shown as downward transitions). Highlighted segments are shown on an expanded time scale below each trace to illustrate the two current levels. The leftmost scale bar is for the 20-s traces in A and B, whereas the right scale bar refers to the enlarged traces of A and B. C, current-voltage relationship obtained from the same experiment for the O1 and O2 levels. The conductance was calculated as the slope of the linear regression for each curve and was found to be extremely large: ϳ3,000 pS for the monomer and ϳ7,300 pS for the dimer. As a reference, the conductance a single monomer of OmpF is ϳ1,400 pS in the same conditions (27). FIGURE 6. The ⌬helix mutant displays bursts of ⌬plug-like activity. Current traces were obtained from three different channels to illustrate the range of activity. A and B, traces were obtained at ϩ15 and Ϫ15 mV, respectively, from the same channel, which displayed ⌬plug-like activity from the onset of the experiment. The traces in the boxes correspond to expanded segments to show details of the gating to the O1 and O2 states. Each trace represents 10 s, but the amplitude histograms shown on the right side were obtained from 30 s of the same recording. Note that the O1 peak of the amplitude histograms is either broad or biphasic because of two slightly different current levels for O1, depending on whether the state originates from the closed state or the O2 state. C and D, traces from two separate experiments showing that the ⌬plug-like behavior occurred with a time delay after applying a voltage of Ϫ50 mV and is preceded by gating to substates. For trace D, in fact, the burst of activity at the end of the trace still corresponds to a conductance that is about half of O1. The downward arrows at the right sides of the traces point in the direction of channel opening. The leftmost scale bar is for traces A and B, and the rightmost scale bar is for traces C (400 pA ordinate) and D (200 pA) ordinate. The dashed line is the zero current level. DECEMBER 25, 2009 • VOLUME 284 • NUMBER 52 usher. As found for other translocators involved in the biogenesis of membrane or surface structures, such as FhaC, Omp85, HMW1B, and PulD (29 -32), the usher channels display characteristic size and kinetic signatures. The PapC mutants were also analyzed for effects on expression and folding of the usher in the OM and for function in pilus biogenesis. Removal of the plug domain and of the N-and C-terminal domains completely compromised pilus assembly on the bacterial surface. This was expected for the ⌬N⌬C mutant, because the individual domains were previously shown to play roles in subunit recognition and binding (17, 20 -22). Our results show that the plug domain is also essential for PapC function, suggesting that the plug may participate in fiber assembly and secretion rather than acting simply as a channel gate. A similar functional role was recently reported for the plug domain of the Caf1A usher involved in F1 capsule biogenesis in Yersinia pestis (33). The PapC ⌬plug mutant did not allow release of unassembled pilus subunits into the extracellular medium, suggesting that the subunits remained bound to the chaperone in the periplasm. Further studies are required to determine the specific defect of the ⌬plug mutant in pilus biogenesis. Although the combined ⌬helix⌬hairpin mutant was defective for pilus biogenesis, neither the 5-6 hairpin nor the ␣-helix alone was required for function of the usher. However, the channel data support a role for the helix in maintaining the plug in a closed state. The lack of effect of the ⌬helix mutation on pilus biogenesis suggests that the occasional destabilization of the plug is not sufficient to impair the overall function of the machine.
Channel Properties of the PapC Usher
The comparison of the activity patterns of the wild type and mutant channels allows us to propose roles for the different domains in the basal pore activity of the usher, i.e. in the absence of chaperone-subunit complexes. The cartoons in Fig. 1B illustrate the domain composition of the different proteins investigated here. One might have expected the wild type channel to be closed shut because of the plug. However, current fluctuations in the bilayer data indicate that transitions between "open" (ion-conductive) and "closed" (non-ion-conductive) states frequently occur. The transitions display variable sizes, possibly indicating multiple routes for ion movement. Some of these transitions have sizes that are comparable with those of the ⌬plug channel. A close inspection of the PapC crystal structure reveals the presence of small water-filled cavities at the interface between the plug domain and the -barrel wall (circled in red in Fig. 1C). These intrinsic "channels" could be a possible passageway for ions, while the plug is lodged inside the lumen. We propose that the transitions of various sizes correspond to 1) the interruption of the "water channels" by either the jiggling of the plug or the N-and C-terminal globular domains blocking the ion flow at the periplasmic end and 2) the occasional displacement of the plug to lead to the very large transitions. We expect plug displacement to be quite infrequent, short-lived, and even partial, because of the extensive network of interactions between the plug domain, the channel wall, and the 5-6 hairpin that dips inside the pore. We had hoped to investigate a mutant lacking the 5-6 hairpin, but the expression level of this protein was very low. On the other hand, we investigated a mutant lacking the ␣-helix that caps the 5-6 hairpin and likely participates in maintaining the plug in place (19). The behavior of the ⌬helix mutant is indicative of a destabilized plug, because the channel will spontaneously enter a state of high conductance, comparable with that of the ⌬plug mutant. These transitions often occur in bursts and appear to be favored by negative voltages. The application of a transmembrane voltage might lower the energy barrier for plug displacement in this already unstable mutant. Interestingly, the conductance of the ⌬helix mutant is smaller than that of the ⌬plug mutant, suggesting that the plug may not fully come out of the pore. These data support a proposed model where gating of the PapC translocation channel involves rotation of the plug domain within the pore lumen rather than movement of the plug completely out of the channel (19).
The ⌬plug mutant has a very large conductance, in line with the measured size of 45 ϫ 25 Å for the translocation channel per se (19). Conductance is not only a reflection of the physical size of a pore but also of the interactions made between the channel wall and the permeating ions. Thus, we will refrain from calculating an estimated size on the basis of the monomeric conductance of ϳ3,000 pS in 1 M KCl. This size is much larger than those reported for other protein translocators in the same ionic conditions, such as FhaC (1,200 pS) (29), HMW1B (1,400 pS) (30), and Omp85 (500 pS) (32), which may be related to the biological function of the PapC pore in translocating folded polypeptides or may be due to the large number of -strands in PapC. Interestingly, oligomeric secretins such as XcpQ from Pseudomonas aeruginosa (34) and YscC from Yersinia enterocolitica (35) form channels with very large conductances (ϳ3-10 nS), which are comparable with that of the ⌬plug PapC mutant. EM studies revealed that the monomers organize as ring-like structures forming a central pore (34,35). These observations are consistent with the notion that a fairly large pore is required for the translocation of folded substrates by secretion systems, either as a central conduit within a single subunit (as in PapC) or as the center of a ring of monomers (as in YscC and XcpQ). However, another secretin, PulD, showed a behavior similar to WT PapC, i.e. it appeared to be essentially closed but displayed voltage-dependent openings of various relatively small sizes (ϳ200 pS in a 400/100 mM KCl gradient) (31). The fluctuations were interpreted by the authors to represent slight movements of some protein domains.
An intriguing aspect of the behavior of the ⌬plug mutant is that a gating activity (i.e. open-closed transitions) is still observable, indicating interruptions in the ion flow. It is possible that these transitions derive from mobility of some of the barrel loops or other structural elements of the -barrel itself or from the collapse of the barrel in the absence of the support of the plug. However, we favor the hypothesis that the transitions represent transient blockages of ion movement by the globular Nand/or C-terminal domains, which might have enough dynamic flexibility to transiently occlude the plugless pore. A triple ⌬N⌬C⌬plug mutant, which would address the role of the N-and C-terminal domains in occluding the pore, did not express well in bacteria and did not appear to fold properly in the OM. Thus, electrophysiological investigations could not be performed on this mutant.
The PapC ⌬N⌬C mutant also shows gating activity despite lacking the N-and C-terminal domains. Here the plug is in place, and thus the gating activity is intrinsic to the plugged -barrel and might originate from the jiggling of the plug or some other loops or domains affecting the size of the small water channels. These movements might be normally somewhat restricted in the presence of the N-and C-terminal domains. The observed transitions might also stem from the reduced stability of this particular mutant (as suggested by its reduced resistance to SDS) (Table 1). Interestingly, the large transitions observed in the WT and ⌬helix proteins are absent in the ⌬N⌬C mutant. If indeed these transitions correspond to movements of the plug to liberate a large translocation channel, their absence in the ⌬N⌬C mutant might signify that the globular domains play a role in allowing the plug to occasionally come out in the wild type usher to yield these large transitions. This hypothesis fits with the proposal that the usher C-terminal domain is located under the pore and makes contact with the 5-6 hairpin and the plug domain (19), and the observation of a weaker density in the PapC channel in the absence of the C-terminal domain, as observed by cryo-EM (23).
In conclusion, the domain analysis of the PapC mutants presented here supports that PapC forms a twin pore and reveals that 1) ion movement is observable in WT, possibly through water-filled cavities at the interface of the plug and the barrel, 2) there is considerable dynamic flexibility in the whole protein, 3) the plugless channel has an extremely large conductance, in accord with the estimated dimensions and the biological activity of translocating folded peptides, 4) the ␣-helix capping the 5-6 hairpin appears to stabilize the plug in place, and 5) the N-and C-terminal domains might transiently occlude the pore and play a role in plug displacement. The knowledge of this basal activity sets the stage for further experiments on PapC in the presence of translocating substrates. | 9,931.6 | 2009-10-22T00:00:00.000 | [
"Chemistry",
"Biology"
] |
Saron Music Transcription Based on Rhythmic Information Using HMM on Gamelan Orchestra
Nowadays, eastern music exploration is needed to raise his popularity that has been abandoned by the people, especially the younger generation. Onset detection in Gamelan music signals are needed to help beginners follow the beats and the notation. We propose a Hidden Markov Model (HMM) method for detecting the onset of each event in the saron sound. F-measure of average the onset detection was analyzed to generate notations. The experiment demonstrates 97.83% F-measure of music transcription.
Introduction
Gamelan is a traditional musical instrument of Indonesia, which comes from Java. In order to preserve gamelan as national heritage and to bring back the greatness of this music as it was in 17-18 century, some efforts must be conducted to make people more familiar with gamelan and to help them play this instrument easier.
Gamelan consists of about fifteen groups of different instruments, such as Saron, Kenong, Kempul, Kendang, Bonang, etc. Some of the instruments have the same fundamental frequency, such as saron and bonang [1]. See Figure 1. In gamelan music, saron and bonang are not sounded at the same time. Bonang is struck a half beat before saron time, where beat, in this case, is defined as the distance between two consecutive sounds Saron, [2]- [3] See One commonly used method to detect onset is the feature-based onset detection [4]- [8]. The disadvantage of this conventional method is susceptible to weak onset feature or spurious peak that not corresponds to an onset event. See Figure 3. Another difficulty, gamelan instruments are handmade and were tuned based on the sense of craftsmen. Thus, gamelan music signals often have fluctuations in amplitude, frequency and phase [1]. These fluctuations may lead to different shape of signal envelope. 105 We propose a Hidden Markov Model (HMM) approach to predict the likely timing of the onsets of gamelan music signals. HMM method allows information such as tempo to be combined in onset detection method [9] - [10]. Tempo information gives prediction for early detection of onsets and discriminates them from false peaks coresponding to another instrument's sound. HMM offers efficient computation and does not need large data for training. Transcription is essentially done by detecting the onset of a specific type of instrument. Many studies have been carried out to detect the onset of musical events, but those are especially focused on western music. Common onset detection should be improved to detect the onset of the eastern musical instruments such as the gamelan.
Standard onset detection methods are used first to find the location of the peaks by measuring the abrupt change in energy content, magnitude, frequency, or phase of the music signal, then to apply a threshold to decide whether it is a peak or not, by considering height as an onset [5]- [8]. If a peak's height is above the threshold, then the peak will be considered as an onset event and vice versa. This onset detection accuracy only depends on single peak that is analyzed at current time. Therefore, this method could not distinguish the spurious peaks in Bonang signal with the real onset event and the weak onset peak due to many fluctuations in Gamelan music signal such as in Figure 4.
In gamelan ensemble, an instrument sound is always interfered by those of other instruments. For example, the extracted saron sound may still contain bonang sound since both instruments have the same fundamental frequency. But the presence of bonang sound can be distinguished from saron sound by comparing the spectral envelope of both sounds, since bonang sound (60 ms) has shorter envelope than that of saron (300 ms).
In the conventional methods of onset detection, each peak is only evaluated individually without considering the temporal relationship with other peaks. If an onset has appeared, the next onset will not appear in the near future, unless a certain time interval has passed. A clear example is the interval between beats in the form of music that can often be followed by the audience. It's kind of important information that is difficult to relate the onset of feature-based detection.
Other recent onset detection methods are using machine learning. An artificial neural network can be trained to detect the onset of the event [11]- [12]. Important prerequisite for machine learning methods is that the training data must be large enough to represent the actual data in reality. In some cases, the amount of training data should reach up to 70% of all actual data [2]. Due to fluctuations in the signal can be found in many gamelan music, all the variations of the signals must also be included in the training process. In order to detect the onset of the event many different gamelan instruments, the network can not be trained by just one variety of gamelan instruments. Thus, one requires a large database to train the network.
Three stages in onset detection methods can be seen in Figure 3. 1. Preprocessing, an optional initial process to accentuate or attenuate some aspects of the original signal is related to the onset detection. At this stage, the signal's spectrogram is divided into multiple frequency bands. 2. Reduction, is the most important part in the onset detection, because the original signal is converted into a sampled function of detected onset. In general, the original signal is transformed into the detection function by standard features such as an explicit signal amplitude, frequency or phase 3. Peak-picking, which is the final process after the detection function, is formed and onset peaks began to appear. At this stage smoothing or normalization can be done in advance so as to facilitate peak-picking process that identifies the locations of local maxima that are above the threshold.
Previous Detection Onset methods
There are several onset detection methods:
a. Spectral Flux
Method of Spectral Flux (SF) is based on the detection of sudden changes in the signal positive energy that shows a part of a new event. Spectral flux measures the change in the amount of energy in each frequency bin, and summed giving onset detection function. The formula is written in eq. (1) and (2) [7].
with | | is the half-wave rectifier function and X(n, k) is the result of the STFT of the input signal x at every n th frame in the k th frequency bin. Based on empirical experiments [7], it is known that the function of L1-norm in eq. (1) is superior to L2-norm [7] in eq. (2). Selection of Spectral Flux method as a method of comparison due to the characteristics of the equipment Gamelan is a musical instrument played by percussive or beaten. The percussive features of the gamelan instrument, cause changes of magnitude more prominent than the other features [7].
ISSN: 1693-6930 In this method, we are going to look for the large changes in the output magnitude STFT X(n, k) Magnitude of the peak detection method of Spectral Flux is based only on the magnitude of the change in magnitude with the previous frame. In eq. (3), the difference in the magnitude of each frame n is rectified, then the result of the k th frequency bin summed up all the window length N.
As the process continued, it takes peak-picking process to analyze the magnitude of change in multiple frames at once. Threshold value for detection function at time t is the average of the detection function in the analysis window centered at t.
Then a peak at the n-th frame is selected for the beat or beats if it is a local maximum.
, ∀ : The selection of the value of w is based on the average period of music that indicates the average distance between the knock on the Gamelan music signal. While the variable m is a multiplier variable given a value of 1 so that the determination of a peak based solely on the magnitude of the peak height of the frame to the other frames in a range of music tempo on average.
b. Phase deviation
One of Phase components observed in the detection method is the instantaneous frequency changes that are indicators of possible onsets. Let φ(n,k) is the phase component of X(n,k) STFT results of the input signal to the nth frame in the frequency bin k.
φ (n, k) has a range of values from-π to π. So the instantaneous frequency, φ '(n, k), is calculated from the first time-difference of phase spectrum φ'(n,k).
Then change of the instantaneous frequency can be derived from the second order timedifference of phase spectrum: As a final step, the onset detection function based on phase deviation obtained from the absolute value of the average instantaneous frequency changes in all bin frequencies [7].
Onset Detection Based on spectral Features
In feature-based onset detection, the input signal is converted into the detection function through reduction process by observing the sudden change of the standard features such as audio signal's explicit information on energy (or amplitude), frequency content, or phase. In the following subsections we briefly reviewed the existing approach of onset detection using spectral flux and phase deviation [13] The rest of the paper is organized as follows. Section II describes proposed method, the HMM method that is used for our onset detection describes the methods used, while Section III describes our whole method of performance measurement, presents our experimental results and discuss the results. The last section concludes this work.
Proposed Method
And seThe proposed method in this research is using Hidden Markov Model (HMM) for the Saron's beat tracking. In this study, to analyze the performance of the Saron's beat tracking using HMM methods, we compare it with conventional detectors Spectral Flux, applying the principles of adaptive thresholding. Flowchart of the methods used in this study can be seen in Figure 5.
Frequency Filtering
In Gamelan orchestra, tactus level corresponds to the speed of beats that sounded by the Saron instrument, which only a single strike to each notation. Therefore, in a system of assessment Gamelan music tempo, frequency filtering processes necessary saron instrument, 500-1000 Hz, especially if the audio signal is a signal observed Gamelan orchestra.
Kaiser window is one way to form a filter. Kaiser filter type can create a wide area and is restricted to a narrow band region.
Preprocessing
Reviewed Modeling systems is an audio signal that has been processed with overlap Short Time Fourier Transform (STFT). In the STFT, the audio signal is represented in two domains, the time domain and frequency domain.
STFT process can be used to emphasize the feature magnitude of an audio signal. Moreover, the representation in the frequency domain allows the screening process when onset detection of saron is applied to the signal Gamelan orchestra.
This study used a window length N = 2048, or equivalent to 43 ms at 48 kHz sampling frequency. Both window length and hope length in overlapped STFT were used to maintain frequency resolution and time resolution respectively. In order to get a smaller index again, the width of hops h = 10 ms (or 76% overlap) is used. The usage of wide-hop size is already commonly used in these studies for detection of the onset and beat tracking [13].
Hidden Markov Models
The flow chart of Hidden Markov Model (HMM), proposed in this research can be seen in Figure 6. Generally, the method is the incorporation of HMM probability value of observation, transition probabilities, and the initial probability to predict the value of a state that has an order or regular enough structure so that many variations of the observational data can be 109 approximated by the information structure of the transition state. In this study HMM is used to extract tempo infofmation of musical pieces that will be useful for eliminating the false peaks in transcription. Figure 6. Flowchart of the HMM Hidden variable τ t is defined as the number of frames since last onset, which is worth one if the frame is an onset frame (state event = 1). Conversely, if τ t = s, means that the tframe is an s th frame from the onset to the last frame. If the detection of the onset of the next frame is met, then the state event back to being equal to 1 and moves up again to encounter the onset of the next frame. Total number of state S that may arise is calculated from the maximum frame spacing between two successive beats.
In each frame, the system also issues the observational data ot, that is the peak value of the input audio signal. The desired decision is to find the optimal state sequence : * which refers to the observational data and formulated in eq. (9) [10].
Therefore, onset detection process does not require frequency information, then all output STFT magnitude at each frequency bin is summed to obtain the total magnitude of each time frame. If the audio signal under study is a complex audio signal, which many instruments played at once, then the sum of all magnitudes are only performed in the frequency range of instruments to analyze the beat.
The probability of the observed to occur if state-t th -happens to P(o t |τ t ) is divided into two probability values, ie the probability of the observed onset frame t is a P(o t |τ t =1) and the probability of the observed if frame to-t-th is not an onset frame P (ot | τt ≠ 1), and second overall probability value amounted to 1.
The probability of the observed data is an onset frame P(o t |τ t =1) is determined from the results of elevation normalized output extraction of STFT process' magnitude. Higher the peak value of the magnitude of a frame t th are observed, the more likely the frame is a frame onset. Contrarily, if the value of magnitude lower, it is likely that the t th observed frame is not an onset frame . In eq. (10) and (11), it can be said that the calculation of the probability of the observations can be considered as a fixed thresholding with a value of 0.5 is often used in the conservative onset detection methods. If the peak value is greater than 0.5, the peak is an onset, and vice versa.
The calculation of the state transition probability is denoted by the symbol Ps,u which is the probability of s state is changed to state u or P(τ t =s| τ t+1 =u), where s,u ∈ {1,2,…,S}. Because of hidden state variables represent the frame index calculated from the onset of the previous frame sequence, then it is likely that the only possible state changes from s to s+1 or 1. Ps,u may happen only Ps,s+1 dan Ps,1 which means that the next frame is a frame onset. Illustration of the transition state of the HMM method is illustrated in Figure 6.
Figure 6. Illustration Relationship between State Transition Opportunities
Ps,1 values, is modeled as Gaussian probability distribution that has a peak at an average frame distance between two successive onset. The average value can be obtained from the value of music tempo on the commonly used composition of Gamelan, eg 60 bpm tempos means that in 1 minute on average there are 60 beats.
A simple example of modeling the distribution of the state transition can be seen in Fig. 7, with a number of state of 20 and an average of 10 Gaussian distribution, which means most likely that after the 10th frame to the state transitions to state 0 (a10, 0 = 0.99).
Figure 7. Simple Transition Distribution Model
While observational data o t is defined as the peak value detection onset of the feature extraction process results when the t th τt state of the following probability distribution P(o t |τ t ).. Illustration of the observed relationship with the hidden variable state can be seen in Figure 8. A simple example to illustrate again the corresponding relationship with the hidden state of observational data can be seen in Figure 9. When the value of observational data on a frame showing a peak or a high value, then the data should be hidden state corresponds to the state 0 in the frame. Decisions that is going to be achieved from HMM beat detection method is to find the optimal state sequence : * which refers to data observation o t . The integration process and the transition probability value of observational data is illustrated in Figure 10. While simple example of merging the transition distribution model Figure 7 with quite a variety of observational data can be seen in Figure 10. In Figure 11, after the onset frame is detected, then the state of transition distribution model move again from state 0, and so on until the last frame. Figure 11, it can also be seen that the existence of a false peak in the 25th frame. However, because the frame is the value of the state transition probabilities into state 0 is low, therefore the counting of the frame state continues.
If there are many variations of tempo, all values of tempo variations in the calculations can be included on Ps, 1 as written in eq. (11).
where K is the number of possible variations in tempo and μ k is the value of music to variations in the kth period-. σ k is the standard deviation of the value of music to the kth period. Illustration value of P s,1 with two variations of tempo can be seen in Figure 12 which shows the value of the probability of a subsequent frame if the previous onset frame have an s-state . Two tempo variations expected to occur are 60 bpm and 120 bpm, which means that the average onset frame appears at a distance of every 100 frames and 50 frames, given the distance between each frame is as wide as 10 ms hop size. The calculation of the latter is the initial value of the assumed probability is the probability with uniform distribution among all N number of states that may arise.
By locating the frames that have a state value of 1 from the eq. (13), the performance of HMM method can be measured and compared to the performance of the method of Spectral Flux.
In this study, the input value of the musical period is obtained from the calculation peak distance of first 4 seconds, so when the orchestra is including the slow tempo (tempo less than 60 bpm), the first 4 seconds of the calculation can be obtained at least 2 peaks. While the number of state are included also has the same value, ie N = 400 which is equivalent to 4 seconds (hop size is used as the distance between frames is 10 ms). The process of modeling distribution of the transition state of this research is made of two possible models. The first model is a Gaussian distribution with a peak average value obtained from the average distance between the peaks in the first 4 seconds. While the second model, many peaks Gaussian distribution are made, on which the peaks are multiples of the average distance between the peaks in the first 4 seconds. Illustration of the two models can be seen in Figure 13. Selection of models made at the end of the frame that has the greatest probability value that fit with observed data from all frames on the total multiplication eq. (14). While the overall flow diagram of the complete method is shown by Figure 14.
with T is the total number of frames of the observed audio signal is, is the value of initial probability, | state transition probabilities, and | is the probability of the observed values with state requirements of t th . Directed acyclic graph showing the relationship between the observational data with state frame from the first frame to the last frame. The calculation of the value of the observations used in the HMM method in this study is the extraction of features from the output magnitude STFT.
Performance Measurement
To test the performance of the onset detection system yields F-measure is calculated parameter which is a major requirement in the provision MIREX (Music Information Retrieval Evaluation Exchange) [8].
with n tp is the number of true positives (number of beats right), n fp is the number of false positives (wrong number of beats) and n fn is the number of false negatives (number of beats that are not detected).
As for the location of a beat would be true if the actual beats still within tolerance of ± 70 ms of the detection results. This is in accordance with the specifications of MIREX and to make room for manual labeling process which may be less accurate. If there is more than one beat detection results within the tolerance limits, then only one is counted as a true beat detection and others counted as false positives. If a beat detection is right on the boundary between the two beat the real location, it is considered there is a true beat detection and a false negative.
In this experiment, the method of HMM (Hidden Markov Model) and the method of Spectral Flux compared performance on synthetic track data and acoustic instruments played by balungan group, namely demung, Saron, and Peking. The song is played has the same notation and the distance between the beats are almost the same anyway.
We generated two types of gamelan sound for testing: 1. Synthetic. Each gamelan note was recorded and the ensemble was played using computer with gamelan note direction. 2. Acoustic. Gamelan ensemble was played by the players and was recorded.
Because the data track is tested synthetic track, then all the data this song has a fixed tempo, so the F-measure value measured is high. Variation of the F-measure on experiments with synthetic tracks influenced by the presence of a single recording signal amplitude variations. Significant improvement occurred in the data demung synthetic track, because there are lot of false negatives, as shown in Figure 15.
On test, the performance gained by 98% due to the irregularity demung beats which can be shown in Figure 16. On these data, there is a shortage of beats after the 38 th notation which make onset detector to be likely to go wrong. This was evidenced by the F-measure obtained by Spectral Flux method only 91%, far below the performance of HMM methods.
In this experiment, the Manyar Sewu song data is used. It is played in the orchestra and recorded immediately. Because there are many instruments being played, then the assessment of system beats (beat tracking) the frequency of the screening process should be done in advance and beat detection focused on signals from the instrument balungan Saron or demung, because the song Manyar Sewu, Saron instruments and demung played with a single beat corresponding notation song.
The diversity of instruments orchestra played the Manyar Sewu track, generates very various music signals of very varied. STFT Magnitude results have the height value diversity so is not an uncommon onset detection methods perform error detection.
Conclusion
In order to construct a robust instrument extraction from music ensemble, From all the experimental results in this study for the assessment of the performance of beat detection signal onset balungan on Gamelan music, detection of the onset using HMM method has a high 117 performance up to 89% on a played single instrument song data. While the song data with many instruments, achieving 91% accuracy. Single instrument the song data which often change tempo, HMM detector has a performance of up to 95%. HMM method can improve the performance of the F-measure is better 10% when compared to the use of Spectral Flux. | 5,559.4 | 2015-03-01T00:00:00.000 | [
"Computer Science"
] |
A Model for the Force Exerted on a Primary Cilium by an Optical Trap and the Resulting Deformation
Cilia are slender flexible structures extending from the cell body; genetically similar to flagella. Although their existence has been long known, the mechanical and functional properties of non-motile (" primary ") cilia are largely unknown. Optical traps are a non-contact method of applying a localized force to microscopic objects and an ideal tool for the study of ciliary mechanics. We present a method to measure the mechanical properties of a cilium using an analytic model of a flexible, anchored cylinder held within an optical trap. The force density is found using the discrete-dipole approximation. Utilizing Euler-Bernoulli beam theory, we then integrate this force density and numerically obtain the equilibrium deformation of the cilium in response to an optical trap. The presented results demonstrate that optical trapping can provide a great deal of information and insight about the properties and functions of the primary cilium.
Introduction
Eukaryotic non-motile cilia [1] are slender structures extending from the bodies of cells, 0.2 microns in diameter and several microns long, the specific length being regulated by the cell itself [2].
OPEN ACCESS
Although their existence has been long known, knowledge of their mechanical [3][4][5], sensory [6,7] and functional [8][9][10] properties is still incomplete.After considerable effort, sophisticated models of the ultrastructure [11][12][13], equilibrium [14] and dynamic [15][16][17] response of cilia to applied fluid flow exist.However, these models make use of material parameters that are incompletely measured, such as the elastic and shear moduli [3][4][5]16,18].One of our overall goals is to better measure the mechanical properties of primary cilia to gain fresh insight regarding the functional significance of this organelle.
Optical traps are a non-contact method of applying a localized force to microscopic objects and an ideal tool for the study of ciliary mechanics.Optical scattering and gradient forces on micrometer scale particles were first reported in 1970 [19].Optical trapping in three dimensions via a single laser beam ("gradient trapping") was reported shortly thereafter [20].Optical trapping is already used for determining the mechanical properties of microtubules [21] and similar cytoskeletal structures; previous optical techniques for cilia often involve attaching dielectric spheres to the tip [18,22].However, the sensory nature of the primary cilium implies that it is preferable to generate and measure the deformation of a cilium in a non-contact fashion, and from this deduce the elastic modulus and refractive index.We have previously demonstrated the ability to measure the force applied by an optical trap in the absence of information about the optical or physical properties of the trapped object [23].
Figure 1 presents a schematic of the measurement; a cilium is trapped and the trap is displaced from the center of the (undeformed) cilium by a distance "δy" and base of the cilium by a distance "δz"; the flexible cilium deforms in response to the applied load.A detailed description of the apparatus has been presented in [23].
Figure 1.
A cilium is trapped with trap displaced from the center of the base by a distance "δy"; the flexible cilium deforms in response to the applied load.The resulting shape is shown when the trap is located at the distal tip of the cilium, but in general is located a distance "δz" from the cell surface.
The Electric Field and the discrete dipole approximation (DDA)
Cilia are slender dielectric cylinders and due to their size, can be considered optically homogeneous.Because cilia are oriented along the optical axis of the trapping beam, scattering algorithms based on for example, Mie scattering, break down [24][25][26].As detailed in Ref. [26], the farfield scattered intensity contains a term 1/cos(θ) that diverges as the light propagation axis becomes parallel with the cylinder axis (θ = 90°).The discrete dipole approximation (DDA) is an alternate means of calculating the force on non-spherical particles [27][28][29][30].In DDA, objects are modeled as a collection of interacting dipoles.The DDA approach, being a first-order approach, suffers from difficulties if the relative refractive index is high.As discussed in [31], our system with a low relative refractive index (∆n ≈ 0.02) can be modeled reasonably well with the DDA approach.Higher-order corrections to DDA have been developed [31] but are beyond the scope of this paper.The local electric field at dipole "I" is a combination of the electric field of the beam and the scattered field from the other dipoles ( ) = ( ) + ∑ ( , )( ) ≠ (1) where ( , ) is the dyadic Green's function and the polarization () = α()() with α() the Clausius-Mossotti polarizability [30] , where δ is the separation between dipoles and ϵ the relative permittivity of the surrounding cell culture media.Optically, the cilium is approximately homogeneous so ϵ( ) ≅ ϵ( ) ≡ ϵ and ( ) ≅ α = 3ϵ ϵ 0 δ 3 (ϵ − ϵ ) (ϵ + 2ϵ ) ⁄ .Modeling the optical trap as a focused linearly-polarized (taken to be in the 'y' direction) Gaussian laser beam propagating in the 'z' direction, the incident field can be written as where: Trap beam waist ω 0 = λ 0 π −1 (/ ) ⁄ Radial coordinate ρ = [( − δ) 2 + ( − δ) 2 ] 1/2 P the optical power NA the numerical aperture of the focusing lens, and δx, δy, δz are the displacement of the center of the beam waist from the coordinate origin, which we place at the center of the base of the cilium.
For the numerical calculations performed here, P = 0.5 W, NA = 0.95, and λ0 = 1.064 µm, corresponding to our experimental apparatus.The culture media is an aqueous saline solution with estimated refractive index nm =1.33, resulting in z0 = 1.2 μm and ω0 = 0.32 μm.When appropriate, we set the refractive index of the cilium nc = 1.35, based on the chemical composition of the cilium [6].
Similarly, for numerical calculations presented here, we set the length of a cilium h = 3 µm and diameter d = 0.2 µm.The length of the cilium is regulated by the cell in response to stimulation by flow [6] or other biochemical factors (for example, [32]), and 3 μm is a typical length.
We note that in this paper, Equation (2) models the focused laser beam using the paraxial approximation even though our microscope objective is specified as sin(θ) = 0.71, corresponding to an error of 12% as compared to sin(θ) = θ.Incorporating higher-order corrections, for example [33] or using vectorial diffraction theory [34] is beyond the scope of this paper.
The Force on a Dipole and the Force Density
In Appendix A and Figure 2 it is shown that for our experimental conditions the fraction of the electric field produced by neighboring dipoles provides a negligible contribution to the total field, greatly simplifying the analysis.Ignoring the scattered field contributions, the force on an individual dipole can be calculated using ( ) = α 2 ⁄ ∇ ( * • ).
Figure 2. (a)
A plot of the points in a discretized cilium; (b) A plot of the ratio of the scattered electric field in the y direction from the other dipoles to the total electric field in the y direction as a function of s (microns) along the length of the cilium for (δx, δy, δz) = (0 µm, 0 µm, 1.5 µm).Notice that the field from the other dipoles is never more than 3/100 of the total field.
For the y-polarized beam ( ) = ( ) ̂: Extending this result by passing from discrete dipoles to a continuous distribution suggests a force density 'f' in the y direction of the form Modeling the cilium as a circular cylinder of diameter 'd' and height 'h' with base centered at the origin and axoneme oriented in the z direction, the total force in the y direction is given by
The z Dependence of the Force Density and the Bending of a Cilium
Here we use time-independent Euler-Bernoulli beam theory [35], valid for small static deformations of beams under constant load.The load on the cilium at a particular value of 'z' is given by integrating the force density over the cross section of the cilium at that value of 'z'.
The displacement of the centerline of the cilium in the y direction ('Y') at any position along its length, parameterized by 's', is given by solving the differential equation: where E is the elastic modulus (the mechanical property we wish to measure), I is the second moment of area of a cross section of the cilium I = πd 4 , and boundary conditions and Y'''(h) = 0.These boundary conditions correspond to a "simply supported" beam with "free end".Extension to the time-dependent problem (say, for oscillatory or pulsatile applied flow occurring in vivo) is straightforward but not relevant for this report.In Equation ( 6), we modeled the cilium as a stack of infinitesimally-thin disks of diameter 'd', each laterally displaced by an amount Y(s).The coordinate s is related to z by ds 2 = dz 2 + dY 2 .For small deformations, s ≈ z.When appropriate, we set E = 1.2 * 10 -8 N * μm −2 based on published reports [5,17,[36][37][38].
The General Case
Equation ( 6) is not, in general, analytically solvable.Instead, we used an approximate analytic solution using series representations, giving: Appendix B derives this result, using the assumption that the beam is only displaced from the origin in the direction of polarization and z.Although Equation ( 7) is rather complex, it does converge rapidly, as shown in Figure 3. Using the series approximation, Equation ( 6) can be numerically solved, providing a final equilibrium deformed shape of the cilium.The model can also output other experimentally convenient parameters, for example the displacement of the cilium tip from equilibrium.
Example plots of Equation ( 7) are shown for a representative experiment (Figure 4), allowing the displacement of the center of the trap relative to the cilium base to vary (Figures 5-7) and allowing the bending stiffness to vary (Figure 8).
A Very Narrow Cylinder
An alternative to the complicated approximation of the load from Equation ( 7) is to treat the cilium as a one-dimensional object, one dipole thick.This changes the polarizability to Passing to a continuous medium, the differential equation describing the deformation of the cilium is where Y is the position of the center of the cilium.Figure 9 compares the results from Equations ( 7) and ( 9), showing that the 1-D approximation provides reasonable results as compared to the full 3-D expression.
Figure 9.
Example deformation of a primary cilium for a representative experiment, showing the effect of idealizing the primary cilium as a 1-D object.Finite cilium diameter (red), the 1-D idealized approximation (blue).The trap parameters: incident power P = 0.5 W, trapping lens numerical aperture NA = 0.95, and trap wavelength λ0 = 1.064 µm.The trap center was displaced from the cilium axis by δy = 0.5 µm and δz = 1.5 µm.The cilium length is 3.0 µm.
Discussion
There are two aspects to our results presented above.First, we have demonstrated a method to calculate the applied force by an optical trap.We have shown that our model can output a variety of measurable results (cilium profile, tip displacement) as a function of several different experimental inputs (trap displacement, cilium mechanical properties).For example, in concert with measured data, we can determine the bending modulus of a primary cilium.
The rationale for using optical trapping to stimulate a primary cilium rather than fluid flow is that optical traps apply a precise force to a single cilium, rather than apply a force to the entire apical surface of cells.Thus, optical trapping has specific advantages as a tool to study ciliary mechanosensation.Since quantitative knowledge of the applied force is essential to use optical trapping in this context, our results provide justification and validation of our optical trap method.
Conclusions
The motivation for this work was to develop a method to accurately measure the mechanical properties of the primary cilium.Laser tweezers offers the ability to apply a quantified force in a noncontact fashion.The goal here was to develop an initial model that can be used to analyze experimental data (total applied force, movement of the cilium tip) that we can obtain in the laboratory.The approximations introduced here were motivated by complexity of the analytical expression.Thus, we took pains to demonstrate that our approximate solution still provides results with sufficient accuracy and precision to be useful.However, it may be necessary to use more complicated differential equations to find the final deformation when accounting for environmental factors such as fluid flow, gravity, and thermal fluctuations.Similarly, solving the dynamic Euler-Bernoulli equation for beam oscillation may introduce additional complicating factors that we have not yet accounted for.For example, as the cilium is bending, the integrals in Equation ( 4) and ( 6) will depend on time.It is possible that a modified shooting approach [39] could be used, but a fully dynamic model is beyond the scope of this paper.
The final deformation profile of the cilium is a function of many variables.Importantly, one of them is the elastic modulus, a measure of stiffness.By measuring the displacement of the tip of the cilium for many different positions of the center of the optical trap it should be possible to infer the value of the bending modulus and the refractive index.It is critical to note that our model allows us to determine the mechanical properties of the primary cilium by only measuring the tip displacement.This is significant because the geometry of the primary cilium (parallel to the optical axis) greatly complicates attempts to image the entire axoneme.Thus, our model has a significant advantage because a single measurement of the tip displacement is sufficient to constrain the model and determine, for example, the bending modulus.Accurately characterizing the mechanical properties of the primary cilium is an essential step in demonstrating the biological relevance of this organelle.where = − and = | |.Since the relative polarizability is small, terms of second order dependence or higher are neglected.The cilium is modeled as a cylinder with a hemispherical top shown in Figure 2. SciLab 5.3.3code was written to evaluate the above expressions.Our results show that the scattered field in the y direction is very small compared to the total electric field in the y direction.Therefore, the scattered field can be ignored for our purposes.
Appendix B: Series Solution for the Force on a Cylinder
The goal is to evaluate the integral: Using the error function √ 2 () = ∫ exp(− 2 ) , we first obtain Using the following series expansions: where 2j + 2m-p-q is even.
Figure 3 .
Figure 3. Plots of Equation (7) with increasing numbers of terms, indicated on the plot.The series converges rapidly, indicating that only the first 5 or 10 terms are required for numerical evaluation.
Figure 4 .
Figure 4. Example deformation of a primary cilium for a representative experiment.The trap parameters: incident power P = 0.5 W, trapping lens numerical aperture NA = 0.95, and trap wavelength λ0 = 1.064 µm.The trap center was displaced from the cilium base by δy = 0.2 µm and δz = 2.5 µm.The cilium length is 3.0 µm.
Figure 5 .
Figure 5. Example deformation of a primary cilium for a representative experiment, showing the effect of allowing the vertical displacement of the trap from the cilium base δz = 0, 1.5, 3, and 4.5 µm.The trap parameters: incident power P = 0.5 W, trapping lens numerical aperture NA = 0.95, and trap wavelength λ0 = 1.064 µm.The trap center was displaced from the cilium axis by δy = 0.2 µm.The cilium length is 3.0 µm.
Figure 6 .
Figure 6.Example deformation of a primary cilium for a representative experiment, showing the effect of varying the vertical displacement of the trap from the cilium base δz.Data graphs the displacement of the cilium tip Y(3) as δz is varied.The trap parameters: incident power P = 0.5 W, trapping lens numerical aperture NA = 0.95, and trap wavelength λ0 = 1.064 µm.The trap center was displaced from the cilium axis by δy = 0.2 µm.The cilium length is 3.0 µm.
Figure 7 .
Figure 7. Example deformation of a primary cilium for a representative experiment, showing the effect of varying the horizontal displacement of the trap from the cilium axis δy.Data graphs the displacement of the cilium tip Y(3).The trap parameters: incident power P = 0.5 W, trapping lens numerical aperture NA = 0.95, and trap wavelength λ0 = 1.064 µm.The trap center was displaced from the cilium axis by δz = 1.5 µm.The cilium length is 3.0 µm.
Figure 8 .
Figure 8. Example deformation of a primary cilium for a representative experiment, showing the effect of varying the bending modulus of the primary cilium.Data graphs the displacement of the cilium tip Y(3).The trap parameters: incident power P = 0.5W, trapping lens numerical aperture NA = 0.95, and trap wavelength λ0 = 1.064 µm.The trap center was displaced from the cilium axis by δy = 0.5 µm and δz = 1.5 µm.The cilium length is 3.0 µm. | 3,997 | 2015-05-29T00:00:00.000 | [
"Physics"
] |
On the Use of the Helmert Transformation, and its Applications in Panel Data Econometrics
: We revisit the Helmert transformation, and provide a useful and simple derivation of the joint distribution of the sample mean and the sample variance in samples from independently and identically distributednormalrandomvariables.Ourderivationisdistinguishedbyconcreteness,verylittleabstractness, and should be appealing to beginning students of statistics, and to both beginning and advanced students of econometrics. We also highlight one fruitful application of the Helmert transformation in panel data econometrics. The Helmert transformation can be used to eliminate the fixed effects in the estimation of fixed effects models, and we briefly review this application of the transformation in the panel data context.
Introduction
The Helmert transformation is named after the German geodesist Friedrich Robert Helmert (1876) and has a long history of use in statistics (Sawkins 1940;Cramér 1946, p. 116;Kruskal 1946;Weatherburn 1961, p. 164;Brownlee 1965,p .2 7 1 ;Rao 1973, p. 182 among others).One application (the application from now on for brevity) of the Helmert transformation in statistics is to find the joint distribution of the sample mean, and the sample variance, calculated from a sample from a normal population.Although the cited references are authoritative, the presentations there of the application are in our view often rather complicated, which inhibits readers' understating of the topic.We present our version of how the Helmert transformation works in the application, and our version should be particularly suitable and accessible to either beginning undergraduate university students in statistics, or students at both undergraduate and postgraduate level in econometrics, or quantitative social and political sciences which use econometrics.
For example, Cramér (1946, p. 116) and Weatherburn (1961, p. 164) both follow Sawkins (1940),andintroduce in abstract terms an orthogonal linear transformation having certain properties.Instead, we think that explicitly stating the transformation, and then directly establishing its key properties would have pedagogical advantages.Rao (1973, p. 182) similarly uses an orthogonal matrix, i.e. a Helmert matrix, which he defines in terms of certain abstract properties.We think there would be pedagogical advantage to instead explicitly display one such Helmert matrix, so that the reader can visualize what is going on, and then proceed to establishing its properties and what it exactly does in the context of the application.
Upon presenting our treatment of the application, which of course uses the vary same ideas of the Helmert transformation and the associated Helmert matrix as the sources mentioned above, we also briefly discuss the use of the Helmert transformation in panel data econometrics.In particular, the Helmert transformation can be employed to eliminate fixed effects in the estimation of fixed effects models.Therefore, our treatment has the advantage that it is very concrete and explicit compared to previous literature, and advocates for more extensive use of the Helmert transformation.Unlike previous authors, we firstly explicitly state the Helmert transformation and display the Helmert matrix, and then proceed to prove the implications of applying the Helmert transformation in the application.
Our proof is closest to Brownlee (1965,p.271).Brownlee (1965, p. 271) starts with a formula for the sample variance and using it, derives new variables that possess certain desired properties.This derivation of the new variables is fairly involved.We instead start by defining the new variables and then using a simple induction argument, prove that these variables have the same variance as the original variables.
We use mathematical induction in our proof, and Stigler (1984) also uses mathematical induction in his proof, but the argument there is different.He first establishes the joint distribution of the sample mean and variance for a sample of just two observations and then via the induction argument, proves that this joint distribution extends to samples of larger size.Zehna (1991) asserts that Stigler's proof is not completely rigorous, and relies three times throughout the proof on a faulty argument.
This article proceeds as follows.In Section 2 we introduce the Helmert transformation and derive the joint distribution of the sample mean and the sample variance.In Section 3 we introduce the Helmert matrix associated with the transformation, and provide some remarks on its properties.In Section 4 we suggest that the use of the Helmert transformation has been somewhat overlooked in within estimation of one way error component fixed effects models in panel data econometrics, and we briefly survey the extant applications of the Helmert transformation in panel data context.In the last section we conclude.
The Helmert Transformation and the Joint Distribution of the Sample Mean and the Sample Variance in Samples from a Normal Population
We have a set of T random variables x t , t = 1, 2, … , T independently and identically distributed (i.i.d.from now on) as Normal(, 2 ), and we want to find the joint distribution of the sample mean xT = ∑ T t=1 x t ∕T and the sample variance s 2 = ∑ T t=1 (x t − xT ) 2 ∕(T − 1).Consider the Helmert transformation, which takes the set x t , t = 1, 2, … , T and produces a new set of variables z t , t = 1, 2, … , T as follows: The new set of variables z t have convenient properties.(In what follows, we liberally use the properties of the expectation E(⋅), variance Var(⋅), and covariance Cov(⋅, ⋅) operators, and we assume that the reader is familiar with these properties.) Properties of the Helmert transformation: ∕t, which we prove in the theorem below.
An Auxiliary Theorem.Let x t be a scalar quantity observed over t = 1, 2, … , T periods.Then, where xT = Proof of the Auxiliary Theorem by induction.
so for T = 2 the relationship holds indeed.
Now we assume that the relationship holds for T, and demonstrate that if it holds for T, then it holds for By the induction hypothesis Therefore, what remains to be shown is that Hence, which is indeed equal to (x T+1 − xT ) 2 T∕(T + 1).
□
We will now use the properties of the Helmert transformation to deduce the joint distribution of the sample mean and the sample variance in samples from a normal population.We use the following three facts.First, linear combinations of jointly normal random variables are themselves jointly normally distributed (Cramér 1946, p. 213;Weatherburn 1961, p. 57).Second, for a set of jointly normal random variables, if they are uncorrelated, they are independent as well (e.g.Lancaster 1959;David 2009).There is a subtle point here, the set of variables need to be jointly normal, as one can construct counter examples where the marginal distributions are normal but the joint distribution is not normal, variables are uncorrelated, and yet they are not independent.Lancaster (1959) presents such a counter example, and precisely states the conditions in his Theorem 1. Third, the sum of k independent, squared, standard normal variables is distributed as 2 (k), which is a distribution discovered and described by Helmert (1876). (Helmert (1876) discovered the 2 distribution, however he did not observe that the sample mean and the sample variance are independent, see David (2009).) Main Theorem.The joint distribution of the sample mean and the sample variance in a sample of T i.i.d.random variables x t , t = 1, 2, … , T,eachx t distributed as Normal(, 2 ), has the following properties: -The sample mean xT and the sample variance s 2 are independently distributed.
-The sample mean xT is distributed as Normal(, 2 ∕T).
Proof of the Main Theorem.In the proof we will refer to the listed Properties of the Helmert transformation as Property plus the number under which the property was listed.
-The sample mean xT is a function of z 1 only, and by Property 5, the sample variance s 2 is a function of z t , t = 2, … , T only.ByProperty1,z t , t = 1, 2, … , T are uncorrelated, and by Property 2, z t , t = 1, 2, … , T are jointly normally distributed.Because z t , t = 1, 2, … , T are a set of uncorrelated and jointly normal random variables, they are a set of independent random variables as well (Lancaster 1959, Theorem 1).
Therefore the independence of xT (a function of z 1 only) and s 2 (a function of z 2 , z 3 , … , z T only) follows because z 1 is independent of z 2 , z 3 , … , z T .-The sample mean xT is normally distributed by Property 2. Its mean is given in Property 3, and its variance isgiveninProperty4.Overallx T is distributed as Normal(, 2 ∕T).
where the last equality follows by dividing both sides of Property 5 by 2 .However, t ∕ 2 is the sum of T − 1 squared standard normal variables, and hence is distributed as 2 (T − 1).
The Main Theorem was first proved by Fisher (1915Fisher ( , 1925)), but Fisher's "geometric arguments" are di cult to follow.What we have presented, albeit somewhat lengthy, is an elementary and concrete proof which requires very little abstract thinking.Our proof should be accessible to students with any moderately quantitative background, and should be much preferable to students who are not used to elaborate abstract mathematical thinking, such as beginning undergraduates in statistics, or both undergraduate and graduate students in econometrics.Overall, from the transformed set of variables z t it is very easy to deduce the joint distribution of the sample mean and the sample variance.
We finish this section by stating another property of the Helmert transformation, which we will briefly use in Section 4. Suppose there is another set of variables y t , t = 1, … , T. Then, The proof of this property follows the same steps as the proof of the Auxiliary Theorem and, therefore, is omitted.Note that if y t = x t for all t, this property reduces to Property 5.
The Helmert Matrix
The previous section is self contained, and using the Helmert transformation to deduce the joint distribution of the sample mean and the sample variance does not require any matrix algebra.However if there is need, or desire to do so, one can also relate the Helmert transformation from the previous section to what Lancaster (1965) calls a Helmert matrix in the strict sense.Consider the following matrix We can verify by direct multiplication that the rows of this matrix are orthogonal, that is, HoHo ′ results in a diagonal matrix.We can also consider the rescaled version of Ho where diag(v) is an operator that transforms a vector v into a diagonal matrix, and overall the operation diag(v) ⋅ Ho results in multiplying the ith row of the matrix Ho by the ith element of the vector v.With the choice of the first element in v as 1∕T, we can see by direct multiplication that Hm ′ Hm is symmetric with one element repeated on the main diagonal and another element repeated everywhere off the main diagonal.HmHm ′ is diagonal and is almost the identity matrix, only the upper left element is different from 1, the rest of HmHm ′ coincides with the identity matrix.
If we instead choose the first element in v to be 1∕ √ T, direct multiplication shows that Hn ′ Hn and HnHn ′ are both the identity matrix, and we would call such a matrix Hn an orthonormal matrix.
W es e et h a ti fw ea r r a n g et h es e tx t , t = 1, 2, … , T into a column vector x ≡ ( choose the first element of v to be 1∕T, we will obtain the Helmert transformation from the previous section, z = Hm ⋅ x, where z ≡ . Because of the appealing aesthetics of Hn with the first element in v chosen to be 1∕ √ T, i.e. a choice resulting in an orthonormal matrix Hn ′ Hn = HnHn ′ = Identity, all authors we are aware of use this version of the Helmert matrix.However for our application all we need is that HmHm ′ be a diagonal matrix with all the elements on the main diagonal below the first being equal to 1.In this situation, the elements of the vector z = Hm ⋅ x will be mutually uncorrelated and each element z t , t = 2, … , T will be homoskedastic.Therefore, for our application choosing the first element in the vector v as 1∕T serves perfectly fine.
The Helmert Transformation in Panel Data Models
We have derived the joint distribution of the sample mean and the sample variance with particular focus on the simplicity and concreteness of the derivation, and particular focus on the use of the Helmert transformation.
The Helmert transformation has found its application in the "fixed effects" panel data model too.Consider the standard one way "fixed effects" panel data model (e.g.Wooldridge 2010, Ch.10;Hsiao 2014,Ch.3) where the regressand y it is a scalar, the regressor vector x it is K × 1, y it , x ′ it are i.i.d. for i = 1, 2, … , I,i.e.the variables constitute an i.i.d.random sample in the cross section.The individual "fixed effects" (so called by convention) are time constant, random and potentially correlated with the regressor vector.The idiosyncratic error term it is uncorrelated with the regressor x j for all t,,i, j, i.e. the regressor x it is strictly exogenous with respect to it conditional on the fixed effects, and it is i.i.d.both in the cross section and in the time series dimensions.
Consistent estimation of the parameter vector in Eq. ( 3) under the assumptions that the fixed effects can be arbitrarily correlated with the regressors, and the regressors are strictly exogenous with respect to it , conditional on the fixed effects, proceeds by eliminating the fixed effects.The conventional way of eliminating the fixed effects is by the within transformation, firstly averaging Eq. ( 3) across time, to Then we subtract this averaged equation from Eq. ( 3) to eliminate the fixed effects and to obtain the estimating equation (4) Finally, we estimate Eq. ( 4) by ordinary least squares (OLS) over the I ⋅ T pooled observations to obtain the within estimator ) . (5) The within estimator is well studied, well understood and a basic building block in the panel data econometrics literature.However the within transformation introduces strong correlation between the transformed errors in the estimating Eq. ( 4), because all the transformed errors ( it − εiT )foracrosssectionaluniti share the same εiT .This makes residual diagnostic checks and residual analysis awkward and di cult, for example, if we wanted to test that the errors it are indeed i.i.d.
On the other hand if we apply the Helmert transformation in Eq. ( 1) on each variable in the fixed effects model Eq. ( 3) and for each cross sectional unit i separately, to construct transformed variables corresponding to z 2 , z 3 , … , z T which have mean 0, then we again eliminate the fixed effects i : We can proceed with the OLS estimation of the Helmert-transformed estimating Eq. ( 6), which is the best linear unbiased estimator, because the errors in the Helmert-transformed estimating equation ( it − εit−1 ) √ (t − 1)∕t are uncorrelated and homoskedastic, (and normal if the original it were normal to start with).By invoking Property 6 of the Helmert transformation, one can verify that the estimator in Eq. ( 7) coincides with the within estimator in Eq. (4).However, as an added benefit, we can apply any residual diagnostics and checks that we might have in mind, because the Helmert-transformed errors in the estimating equation have the same stochastic properties as the original errors in the structural model.To the best of our knowledge, the existing literature on panel data models has not exploited this simplification in the analysis afforded by the Helmert transformation.
The use of the Helmert transformation in the context of panel models and, in particular, dynamic panel models has been popularized by Arellano and Bover (1995) and Arellano (2003).(See also Alvarez and Arellano 2003.)It is also described in Hansen (2022, Section 17.43).A dynamic panel model is like the one given in Eq. ( 3), but it additionally contains the lagged values of the dependent variable as regressors.For example, if y it (directly) depends only on its own value in the previous period, then the model is (8) Arellano and Bover (1995) suggest to transform the variables in the following way: Observe that this is the same Helmert transformation in Eq. ( 1) but with the variables ordered in the reverse order according to thetimeindex.Arellano and Bover (1995) refer to this transformation as "the forward orthogonal deviation" as opposed to "the backward orthogonal deviation" which is the one displayed in Eq. ( 6).Both forward and backward orthogonal deviations produce uncorrelated and homoskedastic errors in the transformed model, provided that the original errors it are i.i.d.However, the forward orthogonal deviation has the following advantage in the dynamic panel model.When transforming the variables to remove the fixed effects, it introduces correlation between the transformed lagged dependent variable and the transformed error term.Therefore, one needs to use an instrumental variables estimator to obtain consistent estimates of and .With the forward orthogonal deviation, the past values of the dependent variable y i1 , … , y it−1 are valid instruments for (y it−1 − ȳit ) √ (T − t − 1)∕(T − t), which is the new lagged dependent variable after the variable transformation.Furthermore, Hayakawa (2009aHayakawa ( , 2009b) ) shows that the instrumental variables estimator is more e cient if the instruments themselves are constructed using backward orthogonal deviations.
Conclusion
We revisit the Helmert transformation, and we provide a simple and useful induction-based derivation of the joint distribution of the sample mean and the sample variance in samples from independently and identically distributed normal random variables.Our derivation is concrete and should be appealing to students of statistics and econometrics.We also suggest one fruitful application of the Helmert transformation in panel data econometrics -residual based tests in fixed effects models.We briefly review the applications of the Helmert transformation in panel data context, where the transformation is more commonly known as "the forward/backward orthogonal deviations operator".
where without loss of generality we take >t > 1. (Property 1 does not require normality; an i.i.d.distribution of the variables would be su cient.) 2. z t , t = 1, 2, … , T are linear combinations of x t , t = 1, 2, … , T (which we have assumed, are jointly normally distributed) and hence z t , t = 1, 2, … , T are jointly normally distributed as well.3. Ez 1 | 4,349.8 | 2022-10-05T00:00:00.000 | [
"Economics"
] |
Individual reflectance of solar radiation confers a thermoregulatory benefit to dimorphic males bees (Centris pallida) using distinct microclimates
Incoming solar radiation (wavelengths 290–2500 nm) significantly affects an organism’s thermal balance via radiative heat gain. Species adapted to different environments can differ in solar reflectance profiles. We hypothesized that conspecific individuals using thermally distinct microhabitats to engage in fitness-relevant behaviors would show intraspecific differences in reflectance: we predicted individuals that use hot microclimates (where radiative heat gain represents a greater thermoregulatory challenge) would be more reflective across the entire solar spectrum than those using cooler microclimates. Differences in near-infrared (NIR) reflectance (700–2500 nm) are strongly indicative of thermoregulatory adaptation as, unlike differences in visible reflectance (400–700 nm), they are not perceived by ecological or social partners. We tested these predictions in male Centris pallida (Hymenoptera: Apidae) bees from the Sonoran Desert. Male C. pallida use alternative reproductive tactics that are associated with distinct microclimates: Large-morph males, with paler visible coloration, behave in an extremely hot microclimate close to the ground, while small-morph males, with a dark brown dorsal coloration, frequently use cooler microclimates above the ground near vegetation. We found that large-morph males had higher reflectance of solar radiation (UV through NIR) resulting in lower solar absorption coefficients. This thermoregulatory adaptation was specific to the dorsal surface, and produced by differences in hair, not cuticle, characteristics. Our results showed that intraspecific variation in behavior, particular in relation to microclimate use, can generate unique thermal adaptations that changes the reflectance of shortwave radiation among individuals within the same population.
Introduction
Solar (shortwave) radiation plays a large role in an organism's thermoregulation via radiative heat gain, driving adaptive patterns of body coloration and reflectance [1,2]. Organisms may absorb solar energy to heat up, or transmit/reflect solar energy to avoid additional heat gain [3, and plays a large role in allowing ectotherms to reach active temperatures at cool air temperatures [34]. Females may also be darker in coloration (like small-morph males) in order to increase solar radiative heat gain and maximize foraging performance in the cool, early-morning period. We hypothesized that microclimate usage drives morph-specific differences in UV-NIR reflectance. We predicted that large-morph males would have higher reflectance of solar radiation compared to small-morph males, as a thermoregulatory adaptation to reduce or increase shortwave radiative heat gain in their hotter or cooler microclimates, respectively. Because microhabitat differences are strongest in incident radiation from above, we predicted morph reflectance differences on the dorsal, but not ventral, body surfaces (see [8]). To better understand mechanisms of morph-specific reflectance we tested whether differences in reflectance were related to hair (branched setae) or cuticle characteristics; we measured solar reflectance of the dorsal surface of the thorax with, and without, hairs present. We predicted that hair density would correlate with higher reflectance across body regions, or morph/sex, and calculated hair density on the thorax and abdomen across morphs and females. Finally, we predicted that females would have similar reflectance profiles to small-morph males on the dorsal surface, and would not differ from males in ventral surface reflectance.
Specimen collection
We classified males as the 'large' male morph if we found them patrolling, digging, or fighting and they had the grey/white coloration (Fig 1, S1 Fig) and leg morphology (thick, bulging femurs on the hindmost pair of legs) distinctive to the large, behaviorally-inflexible morph [28][29][30][31][32][33][34][35][36][37][38][39]. We classified them as 'small' male morphs if we collected them hovering near vegetation (large-morph males never engage in hovering behavior [28]). We collected C. pallida males and females (n = 10/morph or sex) in late April and early May of 2018, or in late April and early May of 2019, either within 10 km of N33.464, W111.632 or N32.223, W11.008. Permits were obtained through the Desert Laboratory on Tumamoc Hill, or bees were collected non-commercially on public lands and no permits were required. We transported bees in a cooler on ice to a lab where we weighed them on an analytical balance (Metler Toledo AB54-S) to the nearest 0.1 mg.
Morphological measurements
We used digital calipers (Husky, 6 inch-3 mode model) to measure the body parts of the male and female bees used for reflectance spectrophotometry. We assumed the thorax and head to be a single cylinder for surface area calculations, with a diameter was equal to the widest point of the thorax and height equal to the length of the bee from the top of the head to the back of the thorax. We assumed the abdomen was a cylinder with a diameter equal to the width of the second tergite, and height equal to the length of the abdomen. For surface area calculations, we subtracted one flat, circular side (e.g., base) of both the thorax-head, and abdomen, cylinders, in order to account for the surface where the abdomen and thorax meet.
Reflectance spectrophotometry
We stored males of each morph and females dry, and away from light, following field collection. We captured total diffuse and spectral ultraviolet to near-infrared (290-2500 nm) percent reflectance (R) of the external surface using a Cary 5000 spectrophotometer equipped with an Agilent integrating sphere, deuterium arc (<350 nm) and tungsten-halogen (>350 nm) lamps, and R928 photomultiplier and PbS IR detectors. We limited measurements to a consistent diameter for all specimen using a custom aluminum aperture (2.18 mm diameter, 0.66 mm thickness), which we also used during the zero and baseline (Labsphere certified reflectance standard, Spectralon) measurements. We set integration time to 0.2 seconds, with data intervals of 1 nm. Baseline measurements varied across set up days, possibly due to minor contamination on the Spectralon surface, so we used the following formula to standardize measurements (R standardized,n ) across different set-ups: Where R n is the measured percent reflectance on Day n, B 100% is the Spectralon baseline (on Day 1 through Day n) and B 0% is the zero baseline (on Day 1 through Day n).
We measured R on the dorsal surface of the thorax (unshaved, and shaved of all hairs using a razor blade and the tips of #5 forceps), the dorsal surface of the abdomen on the first two terga, and the ventral surface of the abdomen on terga 2-3, for each specimen (n = 10 individuals of each morph/females for each area). We mounted specimen so that the thorax or abdomen was flat against the aperture, with measurements normal to the surface. Measurement error occurred between 799 and 800 nm due to the detector switching from the R928 to PbS IR (an artefact of sample orientation changing in relation to the new detector). To account for this, we took the difference in R between 799 and 800 for each individual and then added half of this value to all measurements � 799 and subtracted it from all values � 800. As C. pallida can lose hair over their lifetime, we only used individuals collected when the aggregation had just started, with low wing wear and no visible hair thinning on the thorax.
We assumed transmission through the bee was 0 (a standard assumption for insect bodies in the NIR [6,8]), and calculated average absorption (a n ) for a morph or sex at a wavelength 'n', using the following equation: where � R n is the average fraction of reflectance of all ten specimen at wavelength n.
Solar radiation calculations
Thermal flux due to solar radiation can be defined by summing direct beam (b), diffuse sky (d), and reflected (from substrates; r) radiation: where � a is the absorption coefficient of the bee (the fraction of beam, diffuse, or reflected radiation absorbed by the bee); A b , A d , and A r are the surface area of the bee exposed to beam, diffuse, or reflected radiation; and I b , I d , and I r is the total irradiance of the beam, diffuse, or reflected source. Here, we assumed A b = 0.25 A s (total surface area), and A d and A r = 0.5 A s (standard assumption for bees, and cylindrical objects, as in [36]). For A b and A d , we used the total thorax and abdomen surface areas separately for A s , in order to take advantage of the two distinct absorption coefficients we calculated for each region dorsally; for A r, we combined the two regions into a total surface area. Because only male morphs consistently differ in surface area, we did not perform thermal flux calculations for females; we did, however, calculate their absorption coefficients as described below.
We obtained the direct normal irradiance spectra in order to calculate � a b and � a r from the National Renewable Energy Laboratory (ASTM G173-03 Reference Air Mass 1.5 Spectra Derived from SMARTS v. 2.9.2 [37]). We calculated the direct beam absorption coefficient (a b ) for each individual on the dorsal surface of the abdomen and the thorax, as well as the shaved cuticle, between 290 and 2500 nm (λ min and λ max ) as: l min a n I n I b Where I n is the intensity of solar radiation at wavelength n and I b is the total direct beam irradiance between the min and max wavelengths.
We obtained the spectrum of radiation reflected from a light soil substrate (RS substrate ), at each wavelength (n), from the USGS Spectral Library (sample ID: splib07a record = 13249 [38]), in order to calculate the intensity of reflected radiation, I r , based on the intensity of direct beam solar radiation (I b ) at each wavelength (n).
We then calculated the average absorption coefficient ( � a r ) of reflected radiation, using each bee's ventral abdominal absorption (a rn ) coefficient at each wavelength (n): We also calculated a second measure of I r using an albedo measurement of light sand from the Sonoran Desert (I r = 0.245I b [39]), to compare with the light soil substrate spectrum obtained from the USGS spectral library. Unlike the USGS sample, this summed-reflectance value could not account for wavelength specific effects-but this value had the advantage of being from the location of interest.
We assumed � a d to be equal to � a b (as in [36]). We assumed I d was a 100 W/m 2 [40]. We estimated direct beam radiation on April 20 th , 2018 at 1000 hr using the following formulas from the solar radiation geometry literature [41][42][43]: With the following assumptions: S po = 1361 W/m 2 , a cd = 0.83 (clear day), and atmospheric pressure at the study site = standard air pressure (101.3 x 10 3 kPa). Altitude angle (ø) was calculated using the following equations, using the Julian date (J = 110), Latitude (32.2˚, 0.562 rad [λ]), time (t = 1000), and solar noon time (t 0 = 1200): solar zenith angle ðzÞ cos z ¼ ðsin l sin dÞ þ ðcos l cos d cos yÞ Finally, We divided Q solar by body mass (g) to determine solar radiative heat flux per unit of body mass.
Scanning electron microscopy
We shaved small patches of the second tergite of the abdomens of the specimen used for reflectance spectrophotometry (thoraxes were already shaved). We then mounted thoraxes and abdomens on metal stubs with electrically conductive tape and silver paint, before sputtercoating samples (Pt/Pd target, 80/20; Cressington 208 Hr Sputter Coater) for 40 s at 40 mA (approximately 8-10 nm deposition). We took three photos of the cuticular surface in three different places on the abdomen/thorax at 100X or 200X magnification using a Zeiss Supra 50VP (EHT set to 5 kV, high vacuum mode, SE2 detector). We used ImageJ to count the number of pores found at the base of the hairs within a standardized, circular region of 0.1 mm 2 area (centered on a pore) on each of the three photos for each individual, and averaged these three numbers to obtain hair density in terms of the number of hairs/mm 2 . On the abdomen, both unbranched and branched setae can be found, however the pores that at the base of these two setae types are quite distinct (Fig 2); we counted only the pore type that led to the branched setae, which was more common (in photo 2A, for instance, only 2.9% of pores are for unbranched setae).
We used GraphPad Prism 9.1.2 [44] and and R v. 4.1.3 [45] to analyze spectrophotometry, morphology, and hair density data. We used one-way ANOVAs with Tukey's MCT to compare body mass, and thorax and head or abdominal surface area across large/small males and females. We used a one-way ANOVA followed by Tukey's MCT for normally distributed data, or Kruskal-Wallis test with Dunn's MCT for data that were not normally distributed, to assess variation in hair density on the abdomen and thorax. We calculated mean percent reflectance in the UV (290-399), VIS (400-700) and NIR (split into "close NIR" [cNIR, 701-1400] and "far NIR" [fNIR, 1401-2500]) for each individual, and used a two-way RM ANOVA (with a Geisser-Greenhouse correction due to lack of sphericity in wavelength) to assess the affect of morph (e.g. large-morph male, small-morph male, and female), wavelength region (UV, VIS, cNIR, and fNIR), and individual differences in mean reflectance. When we found significant p-values for 'morph', we used a Tukey's MCT to assess variation across categorical morph assignments within each wavelength region (and adjusted p-values for multiple comparisons). To test for differences in mean reflectance in the UV, VIS, cNIR, and fNIR between the shaved and unshaved dorsal thorax surface, or the dorsal and ventral side of the abdomen, of large and small males, we used two-way RM ANOVAs with Geisser-Greenhouse corrections. When we found significant p-values for the effect of 'shaving' (thorax) or 'side' (abdomen), we used a Sidak's MCT to assess variation associated with that variable within each wavelength region (and adjusted p-values for multiple comparisons). To test for a specific thermal adaptation across morphs/sexes in the NIR on the dorsal surface of the thorax and abdomen, while controlling for correlations with VIS reflectance, we tested the morph/sex and mean VIS reflectance for each individual as fixed effect predictors of NIR reflectance in a linear model, using repeated single-term deletions based on AIC comparisons followed by Type 1 ANOVA p-value comparisons to determine the simplest, best-fit model. We used one-way ANOVAs followed by Tukey's MCTs were used to assess differences in absorption coefficients between morphs and females.
Reflectance of solar radiation
Morph/sex significantly affected mean reflectance on the dorsal abdomen and thorax surfaces (Fig 4, Table 3, S1 Table; Two-way RM ANOVA; dorsal-abdomen: F = 26.32, df = 2, Table 1. Differences between male morphs and females in their surface area and body mass (n = 10; One-way ANOVAs with Tukey's MCT). Different letters indicate statistically significant differences (p < 0.05), among morphs/sexes in that variable.
Variable
Morph/Sex Mean ± SD Range The only significant difference between morphs/sexes in the mean reflectance of the shaved dorsal thorax (e.g., cuticle-only) was between small males and females in the UV (Fig 4C, S2 Table; Tukey's: q = 5.70, df = 16.66, p = 0.0301). All differences in mean reflectance between large and small males on the thorax disappeared when the hairs were shaved (S2 Table; Tukey's: all p > 0.05). The presence of thorax hairs significantly increased mean reflectance across the entire UV-NIR spectrum for both small males and large males compared to just the cuticle (Table 4 & S3 Table; Two-way RM ANOVA with Sidak's MCT: all p < 0.05).
Table 2. Differences between male morphs and females in the density of hairs per mm 2 on their dorsal thorax (Kruskal-Wallis with Dunn's MCT) and abdominal surfaces (One-way ANOVA with Tukey's MCT).
Letters indicate significant differences (p < 0.05) across morphs/sex (n = 10) for that region. Asterisk indicates significant differences (p<0.05) between regions (abdomen vs. thorax) for that morph/sex (Kruskal-Wallis with Dunn's MCT).
PLOS ONE
The mean reflectance of the ventral abdominal surface did not differ across morphs/sexes (S2 Fig, Table 3; Two-way RM ANOVA: F = 0.65, df = 2, p = 0.53). The ventral surface of the abdomen of large males had lower mean reflectance across the entire UV-NIR spectrum compared to the dorsal surface (Table 4 & S3 Table; Two-way RM ANOVA with Sidak's MCT: all p < 0.05), however the mean reflectance of the dorsal and ventral surface did not differ in small males (F = 0.39, df = 1, p = 0.54).
Females and males did not differ in the direct beam absorption coefficients of their shaved cuticles (Fig 5A; One-way ANOVA, F = 2.78, df = 27, p = 0.08). There was also no difference in the absorption of reflected light by the ventral surfaces of male morphs and females (Figs 5B and 6C; One-way ANOVA, F = 1.21, df = 27, p = 0.31).
Direct beam and diffuse sky radiation was calculated to be 1,178.82 W/m 2 at 1000 h on April 20 th . Reflected radiative energy was higher when accounting for wavelength specific effects (329.85 W/m 2 ) as compared to when applying the same albedo for light sand (0.245) across all wavelengths (288.81 W/m 2 ); we used 329.85 W/m 2 for all further calculations.
Large-morph males absorbed a mean of 0.36 ± 0.02 W of thermal energy from reflected, direct, and diffuse radiative sources (1.28 ± 0.13 W/g of body mass), compared to 0.26 ± 0.02 Table 3 W for small-morph males (1.85 ± 0.21 W/g of body mass). If large-morph males had the same reflectance profiles as the averages for small-morph males, they would absorb 0.37 ± 0.03 W of thermal energy (or 1.40 ± 0.15 W/g of body mass, an 8-9% increase in W/g). Females absorbed a mean 0.28 ± 0.03 W of thermal energy, or 1.40 ± 0.11 W/g of body mass. For all three groups, hairs on the dorsal surface reduced thermal heat flux due to solar radiation; thermal flux for shaved large males was 0.41 ± 0.03 W (1.42 ± 0.17 W/g), for small males was 0.28 ± 0.02 W (1.94 ± 0.23 W/g), and for females was 0.31 ± 0.03 W (1.63 ± 0.12 W/g).
Discussion
Comparative studies of animal solar reflectance (or VIS coloration) often collect data at a macrogeographic scale, and demonstrate the effect of geographic gradients in climate variables (such as ambient temperature or solar radiation) on solar reflectance [2,4,11,16,19]. However, large differences in ambient temperature can occur within populations when individuals use different microclimates. The 1 cm above-ground microclimate where large male C. pallida engaged in their mating behaviors has air temperatures > 8˚C hotter than the 1 m aboveground hovering microclimate used by small males by 1100 am [32], demonstrating that there can be substantial differences in thermal selective pressure across distinct microclimates at the same site. The reduced solar absorption of large-morph males in both the VIS and NIR may be a unique thermal adaptation to this very hot microclimate, where they must behave for prolonged periods to successfully access mates (males are known to dig and fight for up to 19 minutes [28]). Morph-specific variation in solar reflectance on the dorsal surface went beyond the VIS, with consistent differences in reflectance in the NIR. Because morph reflectance differences were strongly correlated in the VIS and NIR, selection on visual characteristics (e.g., anti-predator adaptations) could indirectly affect NIR reflectance. However, the higher UV-NIR reflectance of large-morph males conferred a thermal benefit for this morph, which utilizes a hotter microclimate: the higher UV-NIR reflectance of large-morph males led to an 8-9% reduction in W/g of absorbed solar energy. We suggest that morph differences in solar reflectance may be, in part, a thermoregulatory adaptation to their distinct microclimates; reflectance differences beyond visible coloration suggests there may be non-visual selective factors partially or wholly driving reflectance differences.
Females had intermediate dorsal absorption coefficients compared to males: female thorax reflectance profiles were similar to small males, but lower compared to large males, and abdomen reflectance profiles were similar to large males, but higher compared to small males. There was no significant difference in the absorption of reflected irradiance on the ventral surface among morphs and sexes, supporting the hypothesis that differences in reflectance may be an adaptation that functions to increase radiative heat gain in cooler microclimates, and decrease it in hotter microclimates.
Differences in dorsal reflectance were entirely caused by the hairs; there was no significant difference in the shaved cuticular absorption coefficients of the male morphs or females. Large-morph males had higher hair densities on the thorax and abdominal surfaces compared to small-morph males; however, they had decreased hair density on the abdomen compared to females, despite similar reflectance profiles. This suggests that both hair density, as well as [8]. The striated external surface structure of the ant's hairs contributes to total internal reflection; however, the lack of triangular shape, and the orientation of the hairs as perpendicular (as compared to parallel) to the cuticle, reduce the likelihood of this structural mechanism in the C. pallida case. Alternately, the hollow structure of the hair may play a role in thermal management, as is the case in polar bear fur [47] and the silver ant [8].
Temperature can vary both spatially and temporally at a given site: across the course of the morning, temperature increased from a low of 16.9˚C to a high of 44.5˚C between 0600 and 1100 hrs in the 1 cm microclimate, and from 16.1˚C to 38.9˚C in the 1 m microclimate [32]. In male Colias butterflies, intraspecific increases in radiative heat gain due to melanization allowed for longer flight periods at cooler air temperatures, increasing access to mating opportunities [48,49]. The mean body size of the C. pallida male population increases throughout the morning (both for males foraging, and males engaged in mating-relevant behaviors [50]), suggesting smaller males may use their darker coloration to increase radiative heat gain and their access to mates during cooler early morning periods when large males are not yet as prevalent in the population.
An additional implication of this work relates to unraveling the various factors that may lead to stabilizing selection for dimorphic body morphologies in alternative reproductive tactic systems. Generally, it has been hypothesized that bird predation may select against the largemorph male mating advantage in the C. pallida system, producing persistent size variation and two distinct morphs [34]. However, bird predation is variable across field sites [34] while size dimorphism is not. In addition, large-morph males are not behaviorally wary of predation events (which might be expected if predation was as a significant selective force); for example, large-morph males can be captured easily with just the fingertips and will continue to dig or fight when experimenter shadows pass over them [31,Barrett pers. obs.]. An alternate hypothesis, better supported by our data, relates the continued existence of two microclimate-specialized morphs to the thermoregulatory selective pressures of the available microclimates. Males of an intermediate size and UV-NIR reflectance profile would be disadvantaged due to higher metabolic rates (and thus higher metabolic heat production) when using small male mating strategies (in other bees with intraspecific body size variation, smaller individuals have increased power efficiency in flight without any additional metabolic cost [51]), while also disadvantaged due to their increased solar absorptivity when using the large male mating strategies. Individual morphological specialization in relation to body form and reflectance may potentially facilitate the stability of the male C. pallida alternative reproductive tactic system.
Our study demonstrates the importance of both VIS and NIR reflectance as intraspecifically traits conferring a thermal benefit to large-morph males in the hotter microclimate. Variation in reflectance of shortwave radiation, and other morphological adaptation to that benefit organisms facing thermal pressures (such as larger body sizes), may alter the necessity for physiological thermoregulatory differences-for example, large morph males that can avoid overheating in the sun due to their reduced relative solar heat load may not need higher thermal tolerances to survive (see [32]). The interplay between these morphological and physiological thermoregulatory benefits/strategies thus deserve greater attention. In addition, this variation suggests individuals within populations may find their fitness-relevant behaviors to be differently constrained by increasing global temperatures due to the effects of variation in shortwave radiative heat gain on their energy budget. If thermoregulatory selective pressures play a role in maintaining size or behavioral variation within species (such as alternative reproductive tactic systems), this suggests that climate change may have impacts that stretch beyond species-level diversity-affecting intraspecific morphological and behavioral diversity as well.
We demonstrate that individual differences, often overlooked, may be important for understanding the selective pressures acting on individuals to generate evolutionary adaptations to fine-scale ecological variation. Future studies should thus pay greater attention to the effects of morphological, physiological, or behavioral variation between conspecifics, which may have important consequences for understanding proximate and ultimate causes of ecological reflectance and coloration patterns. | 5,793.2 | 2023-01-17T00:00:00.000 | [
"Environmental Science",
"Biology"
] |
System Dynamics Modeling in Additive Manufacturing Supply Chain Management
: A system dynamics model was developed with the primary purpose of visualizing the behavior of a supply chain (SC) when it adopts a disruptive technology such as additive manufacturing (AM). The model proposed a dynamic hypothesis that defines the following issue: what is the impact of the AM characteristics and processes in the SC? The model was represented through a causal diagram in thirteen variables related to the SC, organized in two feedback cycles and a data flow diagram, based mainly on the three-essential links of the SC and the order display traceability: supplier–focal manufacturer–distribution Network. Once proposed, the model was validated through the evaluation of extreme conditions and sensitivity analysis. As a result, the dynamic behavior of the variables that condition the chain management was analyzed, evidencing reduction times in production, especially in products that require greater complexity and detail, as well as reductions in inventories and the amount of raw material due to production and storing practices from AM. This model is the starting point for alternative supply chain scenarios through structural operating policies and operating policies in terms of process management.
Introduction
The application of modeling with system dynamics (SD) in supply chain management (SCM) has its roots in Forrester's Industrial Dynamics, through the definition of a model structured in six flow systems that interact among them transmission of the information, materials, orders, money, labor, and capital [1]. Forrester developed one of the first related models, in which he considered the chain as part of an industrial system in terms of policy design [2]. Nowadays, the model has evolved towards the concept of a testing environment for business management systems to make the right decisions regarding strategies and variable changes [3]. However, current trends and the transition to industry 4.0 and technological impact issues require support from system dynamics to respond to the behavior of variables influenced by breakthroughs, such as big data, cybersecurity, augmented reality, cloud computing, the Internet of Things (IoT), and additive manufacturing in SC management. Understanding them as value chain networks, where the resources and activities necessary to create and deliver services/products are involved to satisfy the customer's needs [4].
Regardless of the industry where they are analyzed, the supply chains are still considered complex and dynamic systems that involve different participants (stakeholders). These SCs contemplate internal and external factors, the importance of which increase or 1.
Conceptualization of the model where the context and components of the supply chain are defined.
2.
The proposal of the diagram of influences, which includes a dynamic hypothesis that facilitates understanding the behavior of the interaction of the variables. 3.
The design of the data flow diagram. 4.
The mathematical formulation that includes the model equations. 5.
The validation of the model. 6.
The sensitivity analysis.
Literature Review
In order to learn about the applications of SD in SC modeling, a literature review was carried out to determine bibliometric indicators such as annual behavior, impact, and geographic concentration. The applications were characterized based on Lambert & Cooper's framework of key elements for chain management. Finally, researchers identified and classified the variables in the eight management processes defined by the Global Supply Chain Forum. The Web of Science and Scopus databases were consulted with the search equation: (TITLE (system AND dynamic*) AND TITLE (supply AND chain*)) that intercepted the two topics of interest: SD and SC, in a time window of 1993-2019 for a total analysis of 306 documents.
Complementarily, the ten papers with the highest impact were analyzed, which were published between 2000 and 2012. Half of the results correspond to proposals for system dynamics models: remanufacturing in closed-loop [11], control and management in construction [12], logistics [13], strategic food chain [14,15]; another three to the analysis of dynamic systems: effectiveness of e-tools [16], sustainability [17], and recycling [18]; and the remaining two to reviews of model literature and chain systems [19,20].
In order to know the applications that SD has had in the analysis of SC management; it was necessary to carry out a keyword analysis through time. It was found that the first investigations focused on articulating the concepts through simulations and analysis of the Bullwhip effect. During the 1970s, there was an emphasis on analyzing variables such as reverse logistics, inventory control, supply chain system, optimization, stability, manufacturing, remanufactured products, and risk management, which are still in use today. By 1985, discrete event simulation was included in developing models and simulations, followed by evaluating variables related to green supply chains, sustainable development, and sustainability, which happened five years later.
Between 1990 and 2000, the paperwork related to Co-powersim (SD software) was associated with multi-agent systems and SC financial analysis. More recently, from 2008 to 2014, the study of these variables has continued, with variations in the evolution of concepts such as customer satisfaction, chain performance, uncertainty analysis, life cycle, and trade, among others, as shown in Figure 1. From 2014 to the present, the research has focused on modeling proposals that articulate collaborative models and disruptive technologies in the structure of the chain. The results show that SD initially focused on modeling internal variables of the chain (inventory management, responsiveness, manufacturing, and optimization). However, currently, it has been integrating external variables that impact the chain from the traditional approach (risk management, sustainability, collaboration models), which leads to including variables from the Industry 4.0 approach [21]. Consequently, it has been possible to identify different opportunities that can allow integrating digitalization trends through additive manufacturing, big data, and augmented reality, which will impact its configuration and performance [22]. Researchers have also identified the direction of efforts to include constructs that explain each of the trends (e.g., personalization and knowledge management), which are usually analyzed from a subjective or specific perspective for an application case, preventing a holistic analysis of the chain management. In response, multi-criteria models have been developed to delimit strategies for managing chain disruptions since the application of these technologies requires a practical approach that mitigates risks and ensures sustainability [23].
The application results were grouped into the key elements and decisions from the supply chain model developed by Lambert and Cooper: processes, components, and network structure [24]. It was also found that 75% of the studies are related to the analysis of the variables from the supply chain management processes, in which the focus is to study one or two variables that model a specific process. There were coincidences in some elements associated with the model's mere application, such as processes and components or structure and processes (Supplementary Materials Table S1). In most application cases, a specific issue of a variable that affects a particular management process was modeled, which leads to understanding the variable behavior within the chain's performance. The predominant variables are inventory management, information transmission, chain integration, system performance, demand projection and management, and production planning and scheduling.
However, it was impossible to discover any reference where all the chain management processes were approached together from a holistic perspective. Although some of the paperwork did consider the chain as a system, where its variables affect each other, they did not integrate the eight SC management processes within a single model. Based on the literature review, the variables that influence and affect the SC management were identified, which are also categorized into the eight management processes defined by Lambert & Cooper from The Ohio State University (2000) [24] (Supplementary Materials Table S2). After analyzing the studies mentioned above, Table 1 presents the variables for the proposed model. In the customer relationship management process, structures are defined to develop and maintain loyal relationships with customers, where the product variability and service levels condition cost effectiveness. On the other hand, in the supplier relationship management process, the structure is defined to foster relationships with them, according to their long-term importance, modifying variables such as orders, raw material inventory, and acquisition of materials that ultimately translate into orders. Customer service management is the relationship between the company and the consumer. The company manages to identify essential information about the consumer and later on monitor the compliance on the product services agreements seeking to modify the availability variable. On the other hand, demand management is related to balancing customer requirements with supply chain capabilities, reducing variability, and increasing flexibility.
Complete order management is in charge of guaranteeing the integration of manufacturing, logistics, and marketing, making it easier to meet customer requirements and reduce delivery costs, which are conditioned by production orders and final dates. The manufacturing flow management includes all the activities necessary to manufacture the products requested by the demand, at the minimum cost, modifying the design cycle, the minimum batch size, and the production capacity. The product development marketing process corresponds to the essential activities to develop and bring products to market, which is mainly evidenced in the design cycle. Finally, returns management helps entities to achieve a competitive advantage related to sustainability, identifying opportunities for improvement and innovation projects, translated into variables such as asset recovery and product availability.
In the literature there is research that reflects the behavior of supply chain management variables affected by the implementation of AM that generates: (+) personalization, less [25]. Similar research that has analyzed the impact of AM on CS was consulted, as presented in Table 2. Table 2. Studies of Impacto f the additive manufacturing in the supply chain.
Research Methodology Purpose
The impact of Additive Manufacturing on Supply Chain design: A simulation study [26] Discrete event simulation model (Excel) Quantitative evaluation of the effect of additive manufacturing on supply chain performance through system configuration.
Investigating the Impacts of Additive Manufacturing on Supply Chains [27] Case study-surveys To analyze the applications of AM in supply chains. This research focuses on the characteristics and traditional structure, which ends up designing an optimal business model to use.
The impact of 3D printing on manufacturer-retailer supply chains [28] Mathematical model To represent a simple supply chain consisting of a manufacturer and retailer that serves a stochastic customer demand that uses 3d printing to produce.
How will the diffusion of additive manufacturing impact the raw material supply chain process? [29] System dynamics To represent a model that represents the initial stage of the supply chain (raw material supply) by evaluating the reduction of materials inventories through the adoption of AM.
Additive manufacturing impacts on a two-level supply chain [30] Joint Economic Lot Sizing model To determine the impact of AM implementation in a two-level supply chain, focusing on inventory, transportation, and production costs.
Traditional vs. additive manufacturing supply chain configurations: A comparative case study [31] Configuration theory, postulated by Alfred Chandler and widely applied in studies of TM and service SCs To design a framework to determine the impacts on chain actors' operations by developing different modes and levels of products.
Impact of additive manufacturing on aircraft supply chain performance: A system dynamics approach [9] System dynamics It consists of evaluating the impact of AM implementation in a case study: aircraft supply chain. It was performed with theoretical data due to the absence of real-life data. Topological network design of closed finite capacity supply chain networks [32] Mathematical model To analyze the layout, location, and arrangement of quotes for supply chains. Additive manufacturing technology in spare parts supply chain: a comparative study [33] System dynamics To compare three supply chain scenarios and contrast the differences between costs and carbon emissions.
Additive manufacturing of biomedical implants: A feasibility assessment via supply-chain cost analysis [34] Stochastic programming model To determine the production costs of biomedical implants using AM. To determine the feasibility of manufacturing the implants on-site (hospitals). Additive Manufacturing in an End-to-End Supply Chain Setting [35] Optimization model To determine the most critical factors to consider in the configuration stage of the SC that AM impacts.
Impact of additive manufacturing adoption on future of supply chains [36] System dynamics To describe the changes in the SC performance and structure as a result of the additive manufacturing implementation. To describe the characteristics and requirements of the chain. The impact of additive manufacturing in the aircraft spare parts supply chain: Supply chain operation reference (SCOR) model-based analysis [37] SCOR To evaluate the impact of AM in the aircraft spares SC according to the SCOR operating model.
In the studies mentioned above, the benefits of adopting AM in SCs were identified through a holistic analysis of chain behavior. It was possible to observe that 3D printing on supply chain management operations and relationships is relatively unexplored in the scientific literature, considering that studies have worked on changes in the chain's performance according to its structure in terms of time and costs. A particular focus has been given to mitigating inventory risks and the reduction of logistics operations. It was possible to identify that the literature lacks a better understanding of the nature of the changes generated by AM in chain management (processes), i.e., from a holistic perspective.
One of the methodologies most used by studies has been system dynamics [9,29,33,36], which started with Forrester [38] as a technique to mathematically model complex problems [20]. SD has become one of the most appropriate alternatives for analyzing supply chain management with structure and process policies. In addition, it has been widely used to represent the information and material flows [39] that are typical of SC analysis.
Materials and Methods
Researchers carried out a modeling process to the behavior of the variables related to the management of a supply chain driven by multi-product and multi-demand orders where the additive industry operates. The modeling was carried out through system dynamics, and its objective was to determine the additive industry's impact on the previously mentioned supply chain.
Researchers chose a methodology helpful when simulating supply chains through a modeling process such as system dynamics. Some of the advantages of a simulation carried out from that kind of modeling are: • Discussion and understanding of complex issues. • Creation and validation of scenarios that constitute the fundamental structure of a constantly changing system [40].
•
Comprehension of the different relationships that might emerge between the system elements that are being analyzed.
During the model proposal, researchers followed Andrade's guidelines, which define the process as iterative, allowing them to review the stages as often as necessary. They also considered Sterman's process stages [10] that enable the validation of the proposal. Figure 2 represents the language system in which the casualty from this model can be expressed: prose language and graphs with causal loop diagrams, flow and level diagrams, and equations as mathematical representation. The results are structured in stages as follows:
•
The conceptualization stage corresponds to the definition of the problem to be modeled.
•
The variables of the model are adjusted to the conventions of the DS methodology.
•
The influence diagram stage depicts the hypothesis' dynamic.
•
The flow and level diagram with the proposed model is given.
•
The mathematical formulation, meaning, and the equations are given.
•
The model validation throughout tests of extreme conditions and sensitivity is given.
The model's output variables are related to the entire supply chain's cycle times and to the raw materials and finished goods inventory to contrast the subtractive manufacturing approach's demeanor and the additive manufacturing approach. The results were obtained by the Vensim and Evolution 4.6 software. Figure 3 presents a supply chain's operating model made up of three links: supplier, manufacturer or assembler, and distribution network. It corresponds to a representation of a Make to Order (MTO) system; that is, production is only activated when the demand places an order. To define the structure, the characteristics that describe production with additive manufacturing, which are possible to find with traditional manufacturing, were reviewed. MTO modes respond to customer requests, resulting in increased customization opportunities with limited inventory time. Simultaneously, they generate an increase in the demand response time due to the acceptance capacity and the raw material inventory [41]. Likewise, they involve small production batches and variable raw material consumption and times. Consequently, inventory levels (WIP and finished goods) are kept to a minimum and, in many cases, even amount to zero. Theoretically, the response time is slow since companies want to finish all the activities before delivering the order.
Conceptualization
When the order is received, the producer analyzes their production capacity and generates purchase orders for raw materials to start manufacturing. Considering that the inventory level tends to zero, suppliers are used, where variables such as response time, order quantity, and deliveries are analyzed.
Subsequently, the transformation cycle begins. Once the final product is obtained, the distribution network will deliver the product to the customers. Certain variables occur in this stage, such as delivery times, means of transportation, and the finished goods' quantity and size.
For the representation of the model through SD, researchers analyzed a multi-demand and multi-product assembly company [42]; with variable capacity and different characteristics for each product. The modeling includes suppliers of a single raw material, corresponding to an easily accessible commodity product that does not require a specialized supplier, and distributors with varying delivery times depending on the selected means of transportation. These actors are articulated in the supply chain that functions as a collaborative system to respond to demand. This behavior can also be seen in a traditional supply chain, making it easier to contrast and quantify the manufacturing approach's impact on SC management.
The model is intended to represent additive manufacturing characteristics in orderbased supply chain management to evaluate the behavior over time in terms of cycle time, raw material inventory, and finished goods inventory. The values worked in the model are discrete [43].
The main characteristics it represents are: 1.
Multi-product with variable demand quantities; 2.
Varied material consumption for each product; 3.
Variable processing time for each product.
Based on these characteristics, the model seeks to answer questions such as these: What would happen if demand increases or decreases? What would happen if production capacity increases or decreases? What would happen if purchasing and distribution times change? What would happen if the policies of the assembly company or distribution network change?
Influence Diagram
Based on the context and the identification of variables, the dynamic hypothesis is formulated to facilitate understanding the behavior of the interaction of these variables. The diagram of influences presented in Figure 4 was constructed to propose the dynamic hypothesis. These diagrams represent the feedback mechanisms, which can be negative (balancing) or positive (reinforcement). In addition, they make two significant contributions to SD methodologies since they are preliminary sketches of the hypotheses and simplify the model proposal [11]. The diagram shows the variables connected by arrows representing the relationship between them. The direction of these arrows corresponds to the direction of the effect. The "+" or "−" signs are the signs of the effect, where "+" indicates that the variables change in the same direction while "−" indicates the opposite.
The diagram represents an order-based supply chain model that starts with the top left corner with the exogenous variable Demand x, y, and z and continues with the committed orders for production and demand satisfaction. This graph can be understood in two hemispheres. The left hemisphere presents the two control loops: available manufacturing and raw material inventory, while the right hemisphere relates to the distribution of products, and no loops are evident.
Regarding the left hemisphere, the first cycle comprises the committed products that correspond to the products that the supply chain will serve. These generate Purchases that are the amount of raw material necessary to produce the committed products, which varies depending on the order. Once this purchase is calculated, the Raw Material Inventory is created, which is the amount of raw material stored to be dispatched. The latter then transitions to Work-in-process (WIP) and affects the Available Manufacturing, which conditions the order approval policy. The second cycle, which is responsible for regulating production based on raw material inventories. It starts with the Committed Orders that define the Firm Orders, i.e., those accepted waiting for the purchase of raw material. Once the purchase is received, they become Work-in-process (WIP), varying the Available Manufacturing. These cycles are stable and allow defining the order approval policy and the purchase of the raw materials.
On the other hand, the right hemisphere includes the Products to Deliver, which are stored in the Finished Goods Inventory. By definition, the finished product inventory does not increase or decrease the production capacity or the work-in-process inventory. Finally, they are distributed, generating a stability cycle on the system.
As mentioned before, these dynamic interactions between variables influence the responsiveness of supply chains, particularly in order management. This situation is verified through the flow and level diagram and its subsequent simulation.
Model Variables
The structure of an SD model contains stock and flow variables [11]. Taking as a starting point the conventions used in SD methodologies [40] and the variables related to SC management found in the literature review, the six elements included in the model are grouped and presented below:
•
Model parameters: The parameters correspond to elements of the model that are independent of the system or its own constant that does not vary during the simulation [40], where they are found:
1.
Raw_Mat_Inv: the amount of raw material, which corresponds to the raw material consumption for each product demanded. The consumption is different for each product.
2.
Print_Load: capacity to load products per printer and machine, i.e., how many products a machine/printer can produce/print simultaneously.
3.
Printers: number of printers/machines installed in the system. 4.
PO_A_T: production order acceptance time.
5.
Dist_A_T: time of acceptance of the product to be distributed. 6.
Time_OA: time of acceptance of the product entry. 7.
Dist_Lead_Time: time it takes to deliver the demanded product to the customer varies according to the shipping characteristics (transport used and distance traveled). 8.
Print_Time: time to process an order. 9.
• Level variables:
The level variables or state variables represent the accumulation of flows, which in this case are stable; i.e., as they grow, they also leave the system.
1.
Committed_Orders: the products to be handled in the supply chain.
2.
Firm_Orders: the products accepted for production and awaiting raw materials.
3.
Raw_Mat_Inv: Raw Material Inventory corresponds to the amount of raw material that the supplier stores and is available for production.
4.
Work_In_Process: the products in the production process. Their behavior and development depend on the processing time.
5.
Finished_O_Inv: finished orders inventory refers to the number of finished units ready for delivery.
6.
Delivery_Request: products that are accumulated in the finished goods inventory and are pending delivery. 7.
O_Records: history of delivered products. It is used to verify product delivery and evaluate the model from an input-output perspective, where everything that the SC commits to doing is delivered.
• Flow elements: Flow elements that are understood as the variation of a level, representing changes in the state of the system:
1.
CO_Policy: acceptance policy corresponds to the products' units accepted to enter the supply chain and be produced and delivered.
2.
Inputs_Order: product entry is defined by the total load capacity or final stock, which defines the production batch to be accepted.
3.
Purchase: amount of raw material needed to produce all the orders; the consumption changes according to the type of demand to be produced.
4.
Prod_Order_E: production order entry. It corresponds to those orders that are taken into account according to the distribution network's operating policies. The availability of capacity and the existence of raw material for production are verified.
5.
O_Accepted_AS: products accepted according to the production orders' requirement and the stock of raw material to be produced. 6.
Raw_Mat_Exit: quantity of raw material sent by the supplier according to the producer's need. The number of kilograms/grams (unit of measure) needed to produce a product is considered. 7.
Order completed: final stage of the production process. From now on, the product will be considered as already elaborated. 8.
Product_Delivery: delivery of finished goods to the distributor. 9.
Delivered: products that leave the system and are considered delivered to the final customer.
• Delays: Delays are elements that simulate delays in transmitting information or material between system elements. All model delays are of infinite order since they produce an output equal to the input after a particular time. That is to say, the delay manifests itself in the output with the same input that arrived sometime before.
1.
Purchase Delay: represents the processing time of the supplier in producing and obtaining the raw material necessary for the production carried out by the manufacturers or assemblers.
2.
WIP: represents the sum of the products that are in process. Assuming that no product fractions are received, and no product fractions leave the System. 3.
Delivery Delay: represents the time it takes the distributor to deliver the product to the customer.
• Auxiliary Variables: Auxiliary variables are quantities with some significance to the modeler and with an immediate response time. They have been divided into base model auxiliary variables and structural policy auxiliary variables.
Base model auxiliary variables: 1. Productive_Capacity: total load capacity of the system, i.e., how many products can be produced simultaneously.
2.
Available_Manufacturing: available capacity or load availability; corresponds to the total amount of products that can go into production (of any given demand).
Structural policy auxiliary variables: 1. RM_Stock1: raw material stock after manufacturing the batch from demand 1.
2.
RM_Stock2: raw material stock after manufacturing the batch from demand 2.
That is to say, the raw material stock available for demand 3.
3.
F_Order1: number of products of demand one that can be manufactured according to the products that are for production and the raw material stock available at the moment. 4.
F_Order2: number of products of demand two that can be manufactured according to the products that are for production and the raw material stock available at the moment. 5.
F_Order3: number of products of demand three that can be manufactured according to the products that are for production and the raw material stock available at the moment. This step comes after meeting the first two demands. 6.
FO_available_inv: products that can be manufactured based on the raw material stock available at the moment.
Approval policy based on manufacturing availability 1. A_Load1: products available after meeting demand 1.
2.
A_Load2: products available after meeting demand 2. What is left from this part will be used to manufacture demand 3. This is the number of products that can be served from demand 1. For this step, it is necessary to analyze the product availability and the demand requirements.
3.
Order1: number of products that can be served from demand 1. For this step, it is necessary to analyze the product availability and the demand requirements.
4.
Order2: number of products that can be served from demand 2. For this step, it is necessary to analyze the product availability and the demand requirements. It also comes after meeting demand 1.
5.
Order3: number of products that can be served from demand 3. For this step, it is necessary to analyze the product availability and the demand requirements. This measure comes after meeting the first two demands. 6.
Order_Prodt: production order: quantity of each product delivered for production, previously having guaranteed the raw material stock and the production capacity availability.
• Exogenous variables: Exogenous variables are those whose evolution is independent of the rest of the system. They represent the actions of the environment upon it.
1.
Prodt_X: Products X. Represents the demand X in different periods.
2.
Prodt_Y: Products Y. Represents the demand Y in different periods.
3.
Prodt_Z: Products Z. Represents the demand Z. in different periods.
Data-Flow Diagram
Based on the influence diagram and the definition of model variables, the data flow diagram was constructed using the conventions indicated in Section 4.2, summarized in Table 3.
The model is structured in four sectors, each one related to the defined supply chain links: order display, supplier, manufacturers, and distribution network, alongside the SC order traceability and the operating policies for inventory and availability. Figure 5 presents the data flow diagram that depicts the multi-product supply chain's behavior, taking into account the definition of the order approval policies and the purchasing policy. This representation corresponds to a simplified approximation to the supply chain and its order management. It also operates in vector graphics except for the supplier link because it only recreates a single raw material which is used for different products. Table 3. Elements of the model.
Element Name Description
Parameter Constant value of the system that does not change during the simulation.
Level variable (Stock) Corresponds to the state variables in systems theory and represents the flows accumulation.
Flow variable (valve) Defines the behavior of the system.
Delay
Simulates delays in material or information transmission between elements of the system. Auxiliary variable Element that has certain meaning or interpretation for the modeling with an immediate response.
Exogenous variable
Has an independent evolution from the system evolution.
It represents an interaction of the system with the exterior.
Information channel
The transmission of information that does not require storage.
Taken from: [40]. The vector graphics have the advantage of representing, simultaneously, different operating conditions. In the case of the proposed model, it allows representing the behavior throughout the supply chain with three types of product 1(X), 2(Y), and 3(Z), with different material consumption, processing times, and distribution times for each one.
The order management process is developed transversally to the SC, relationship with the three links in the chain. It starts with the multi-product demand (product 1, product 2, product 3), which are exogenous variables of the system. In this part, the products to be handled by the manufacturers are obtained. The order approval is determined based on the defined policies.
Once an order is accepted, it is communicated to the supplier, who manages the raw material inventory and dispatches the necessary quantities for production to the manufacturers. In this process, there is a purchase delay caused by the processing that the supplier must have in the acquisition of products or its own internal processing. The raw material purchasing policy represented in the model is the immediate purchase of what is committed to being produced by the manufacturers.
Subsequently, the production order is generated according to the products that are in the firm orders status, which go to the processing stage according to the available manufacturing determined by the productive capacity. The latter is calculated with the number of products that a machine can produce and the number of machines/printers established in the system. Production times have been defined from the design stage, where raw material consumption and processing time (printing time) have been previously determined.
Once the entire production process has been carried out, it goes to the finished goods inventory of the focal manufacturers, who deliver the product to the distribution network. In this stage, a delivery delay appears, which varies according to the characteristics of the transport and routing system used by the distributor, which the schedules used by the modeler can define.
Moreover, the structural operation management policies of the model were designed to associate the approval policy based on stock and the approval policy based on manufacturing availability, as shown in Figure 6. These are policies that the focal manufacturers consider for prioritizing order processing and the generation of production orders. Consequently, the raw material inventory is analyzed to ensure that it can guarantee production. Afterward, the availability of the production capacity is reviewed. This element generates the order that is entering the process. Simultaneously, the priority criteria intervene to handle first product 1, then product 2, and, finally, product 3.
The latter means that the Firm Orders are a priority for the manufacturers, and they will continue producing them until there is no more demand. When they finish with product 1, they will continue with product 2 and successively with product 3.
Equations
In addition to the status equations that can be seen directly in the data flow diagram, Table 4 shows the equations for each of the sectors. Description = According to the requirements of production orders and raw material stocks, these are the accepted products to be produced. Order_Completed:Flow_ Definition = [O_Completed [1],O_Completed [2],O_Completed [3]] Description = Processed product is the output or finished good.
Operating policies
A_Load1:Auxiliary_ Definition = (Available_Manuf-Order_Prodt1) Description = For order available inventory after meeting Dem1. A_Load2:Auxiliary_ Definition = (A_Load1-Order_Prodt2) Description = For order available inventory after meeting Dem2. What is left will be assigned to meet Dem3. FO_Available_Inv:Auxiliary_ Definition = [F_Order1,F_Order2,F_Order3] Description = Taking into account what I want to attend to, these are the possible ones to be produced with the available raw material. F_Order1 :Auxiliary_ Definition = INT(Min(Firm_Orders [1],(Raw_Mat_Inv/RM_Consumption [1]))) Description = The quantity of Dem_1 products that can be produced according to the products and raw material available for production. F_Order2:Auxiliary_ Definition = INT(Min(Firm_Orders [2],(RM_Stock1/RM_Consumption [2]))) Description = The quantity of Dem_2 products that can be produced according to the products and raw material available for production after meeting Dem_1. F_Order3:Auxiliary_ Definition = INT(Min(Firm_Orders [3],(RM_Stock2/RM_Consumption [3]))) Description = The quantity of Dem_3 products that can be produced according to the products and raw material available for production after meeting the first two demands. Order_Prodt:Auxiliary_ Definition = [Order_Prodt1,Order_Prodt2,Order_Prodt3] Description = Production order: Quantity of each product delivered for production, having previously guaranteed the availability of raw materials and the available production capacity. Order_Prodt1:Auxiliary_ Definition = Min(FO_Available_Inv [1],Available_Manuf) Description = quantity of products that can be handled from Dem1, according to the FO available inventory and the product requirements. Order_Prodt2:Auxiliary_ Definition = (Min (FO_Available_Inv[2],A_Load1)) Description = quantity of products that can be handled from Dem2, according to the FO available inventory and the product requirements after meeting Dem_1. Order_Prodt3:Auxiliary_ Definition = (Min(FO_Available_Inv [3],A_Load2)) Description = quantity of products that can be handled from Dem3, according to the FO available inventory and the product requirements after meeting the first two demands. RM_Stock1:Auxiliary_ Definition = (Raw_Mat_Inv-(F_Order1*RM_Consumption [1])) Description = Raw material inventory after Dem_1. RM_Stock2:Auxiliary_ Definition = (RM_Stock1-(F_Order2*RM_Consumption [2])) Description = Raw material inventory after Dem_2. What is left will be assigned to handle Dem_3.
Distribution Network
Delivered:Flow_ Definition = [Ship [1],Ship [2],Ship [3]] Description = These are the products that leave the system and are considered delivered to the final customer. O_Delivered:Flow_ Definition = Finished_O_Inv/Time_ODA Description = Product distribution order acceptance time.
Source: Own elaboration. Evolution Software.
Model Validation
Once the model was formulated, it was validated by performing the analysis proposed by [10] since it allows questioning the structure, behavior, and policies that make up the model. It is also widely studied in the literature since it helps determine the system's stability when parameters and exogenous variables are modified.
With the structure validation, the relationships used in the system are judged in comparison with the actual processes of a supply chain. In this sense, it was necessary to apply extreme conditions tests to identify structural failures. Extreme values were attributed to the parameters and exogenous variables to verify their behavior at the end of the tests.
I Source: own elaboration. Evolution Software. In the first test, it was assumed that there was no demand in the supply chain, which, at the same time, would mean that all the system states would remain at zero since the initial conditions corresponded to this value. This behavior can be seen in Figure 7.
In contrast to the extreme conditions test, some other tests were applied where the demand was modified by oversizing each of the products' quantity. Consequently, the values of the parameters purchase time, printing time, and distribution time were increased with the same extreme behavior compared to a natural system. In this way, it was verified that everything demanded leaves the system through the Order Records variable (O_Records), as shown in Figure 8. On the other hand, another model evaluation was carried out to check the system's behavior through a sensitivity test. The impact of the change of parameters that were considered highly sensitive for the production system was observed, in this case those that affect the capacity of the system, i.e., the productive capacity for the focal manufacturers, which is conditioned by the number of printers/machines and the total load of products handled by each printer/machine. Figure 9 shows the auxiliary variable "Available manufacturing" over time, with a constant productive capacity, 1×, 5×, and 15×.
Thus, the productive moment with demand is visualized, responding to the requested demand, as well as the non-productive moment that shows the installed capacity. This result is consistent with the expected logical evidence.
From the tests performed on the model, it was possible to conclude that the representations obtained from the behavior of the supply and distribution system of a focal manufacturer correspond to those referenced as a supply chain system, taking into account that the variables contemplated are associated with the management processes of a company. Since additive manufacturing in the manufacturing area is still in its infancy [44], there is no accurate data on the practice in natural environments. For this reason, simplified data of existing standard cases are proposed with assumptions that allow evaluating and concluding if it presents a feasible behavior within the expected, without presenting dramatic alterations.
Model Sensitivity Analysis
The initial conditions and model parameters were defined based on the exogenous variable "demand" to perform the model sensitivity analysis. The focal manufacturer has three types of demands to meet, as shown in Table 5. These values are independent for each product, correspond to a period of six months, and are kept the same for the traditional and additive approach so as to allow comparison. 1 1 20 30 20 2 217 16 0 0 3 452 45 30 35 4 678 10 0 5 5 913 100 115 35 6 1130 50 0 35 TOTAL 241 175 130 Source: own elaboration.
The unit of time selected for the model was hours, so it was also necessary to determine the hour at which the demand was updated. The third, fourth, and fifth columns correspond to the demand that exists at that time for each type of product. For example, in hour 217, there is a demand for 16 products for Dem1, 0 products for Dem2, and 0 products for Dem3.
After establishing the demand, it was necessary to define the parameters for the behavioral simulation, as presented in Table 6, with the values for traditional manufacturing (TM) and additive manufacturing (AM). For the machine-product relationship, by definition, the manufacture of a product through TM features four machines intervening, each one producing 25% of the final part; while in AM, a single printer is capable of producing four products since it works them as a single unit [45].
Regarding the processing time, a different value in hours is defined for each type of demand, taking into account the complexity of the design and geometry required, and the percentage increase is projected for each one of them, considering that printing requires more individual production time [46,47].
Regarding raw materials, each TM demand had a different consumption assigned, whereas for AM consumption, researchers referred to some documented cases where the material usage decreases by at least 10% compared to the traditional [44,48]. Based on the latter, the approximations were made as shown in the table above.
Finally, it is proposed that the distribution time is the same no matter the type of demand or manufacturing approach adopted since the origin-destination path is the same in this case.
The parameters mentioned above may be subject to variations by the modeler if he/she wishes to modify proposed times or quantities. The latter allows the model to be used in different application industries due to its adaptability to different environments.
With the conditions of the model defined, the simulation was carried out under the two manufacturing approaches. The first result shows the TM and AM raw material consumption, as shown in Figure 10. For the additive case (blue), it is observed that smaller quantities of raw material are required at the time of request compared to the traditional case (red). The previous fact demonstrates that its consumption is quicker given that there is faster production time, as is evident in the results. This means that the supplier must adopt agile inventory management due to a quicker order approval by the manufacturers.
Another interesting result is how the system available manufacturing responds to the demand for the requested products and approval of new ones, if necessary. Figure 11 represents production over time. As defined in the parameters, the traditional SC needs several machines to produce a single product. The model states that four devices are used to make a single product, which means a 0.25 ratio for each machine, whereas for additive SC, one printer can handle several products. The model states that one single printer is capable of producing four products at the same time.
This situation generates higher available manufacturing for the additive case since faster production remains independent from the production time per product. When reviewing the machinery utilization index, the traditional SC is higher than the additive SC because it requires more activity to generate the same result.
On the other hand, the order records for each of the products were plotted. Figure 12 represents the order record for demand 1. Considering that product 1 has the highest demand of the three products, a prolonged upward behavior can be observed, where demand satisfaction is reached more quickly in the additive approach.
In product 2, the result of the first product is maintained, i.e., the ASC satisfies demand more quickly. Figure 13 shows the behavior, where the valleys correspond to periods where demand has been satisfied, and the company is waiting for a new order to continue production. As more valleys are generated in the ASC, it is understood that the lead time is shorter because there is an immediate time response. Moreover, Figure 14 shows the behavior of product 3. Although the demand is similar to product 2, the response times and the raw material consumption are different, which generates changes in the graph, but the result remains the same. The ASC responds more quickly because the production time is faster than the TSC. Comparing the three graphs in the lower quantity demands, it is easily evident that the ASC production time (Lead) is shorter. When analyzing the order records as a whole, it is possible to note that the ASC excels in the speed of delivery. As demand increases, the advantage decreases, i.e., additive manufacturing positively impacts response times, especially in small production batches. To illustrate more precisely the comparison between the TSC and the ASC response times, Table 7 was constructed to identify the demands of each product for the simulation time and the delivery time after distribution. These are some of the main differences gathered from the order completion time analysis performed on the ASC and TSC: Demand 1: − The response was about 33.9% faster in the ASC than in the TSA. The following advantage ranged between 1% and 8.8%. − There was a difference of 3.9% for the total compliance of the 241 products.
Demand 2:
− The behavior in demand 2 remained the same. The difference was 47% in the first demand, followed by 14.8%, finishing the 175 products 22.5% faster than the TSC.
Demand 3: − The behavior patterns of demand 3 were much higher since in the first demand, it reached 59.4%, and the other requirements range between 17.5% and 29.9%.
As mentioned before, through the analysis of the total closure time, it is possible to notice better performance with additive manufacturing than the traditional approach, especially with small batches.
Discussion
The contribution of this research is to recreate a starting point (base model) through a simplified supply chain structure that will allow analyzing the behavior of industrial transformation with the appropriation of disruptive technologies, in this specific case, the analysis of the variables that affect additive manufacturing. In this sense, the model will be an experimental laboratory to study what could happen with the behavior of the network structure, generating new questions such as: What would happen if there were more actors in the supply chain? What would happen if additive production was centralized or distributed in different regions? What would happen with inventory levels and transportation times? What would happen if the structures of the chains were changed? Likewise, the model could include variables of processing costs, transportation costs, and printing costs and could visualize the economic impacts that the emergence of additive technology may have. Moreover, it could measure the environmental impacts on the structure, evaluate the supply chain behavior's optimization, and define the best response to production problems.
The model was based on systemic thinking, applying the system dynamics methodology to include the totality of the SC management processes, representing all the elements that make it up (suppliers, focal manufacturer, and distribution network): the purchasing, production, and product delivery processes to the final customer, representing the order management and the interaction of the actors. The model offers the opportunity to visualize the behavior of a complex system, such as the supply chain, allowing managers to obtain a degree of confidence through simulation, facilitating the planning of future and experimental environments. Finally, the model allows comparing additive and subtractive behavior to contrast the simulation prediction with reality in the presentation of different scenarios. By modifying the variables, it will be possible to visualize emerging characteristics of the supply chain with "better" behaviors, savings in processing times, distribution times, and inventory levels. The advantage of approaching the model through system dynamics is that it represents cycles that can be formulated as a set of nonlinear differential equations, i.e., there is no fixed solution; instead, there is an infinite number of possible solutions to the behavior of the SC, facilitating the visualization of these characteristics and the easy-tounderstand representation of a complex system.
The model has limitations in the inclusion of complex variables, such as the level of design, product customization, or the visualization of the change in information and knowledge management (patents), which impact the new business models appropriated by technology. In addition, it should be considered that the model does not provide specific optimization data since, as mentioned above, it is a starting point for considering multiple alternatives and future scenarios. Likewise, if researchers want to improve the reliability of the results, it is fundamental to base their investigation on real case studies because of the SC's complexity.
In the present study, aspects such as the decrease of machinery in the supply chain and the level of knowledge of the new human resources are not considered; therefore, it is necessary to explore the influence of new variables.
Conclusions
This paper reaffirms the importance of system dynamics to represent the behavior of the supply chain, visualizing the links that integrate it (supplier-focal manufacturerdistribution network). It also allows for appreciating the role of the delays presented in each link. The system influence diagram recreates the SCM as a whole, and the data flow diagram recreates the relationships of the production process, allowing us to consider the impact of additive technology. The vector model made it possible to represent the mathematical complexity of a set of differential equations, the smallest representation with 26 and the proposed scenario with 66, described in terms of flows, levels, and delays, allowing the reader to analyze the supply chain as a study phenomenon in a simple way. It also describes the particularity in order management (MTO) as the variation of the demand requirement and the product characteristics regarding raw material consumption, printing times, and distribution times.
In this way, a Make to Order (MTO) supply chain management was represented in a simplified form, allowing to contemplate characteristics of the AM and the TM, such as the transformation process, raw material management, inventory purchase, and product distribution. The sensitivity analysis performed on the AM and MT confirmed that the AM presents shorter lead times in the SC, higher production capacity, and lower raw material inventory levels.
Some of the potential impacts of the MA on the chain management processes were supported. The latter allows deducing high levels of customization, greater control of order traceability, lower storage levels, and less material transportation, reflected in lower costs and time.
This model constitutes a starting point to consider different alternatives for the functioning of the supply chain, which contemplates structural operating policies and operating policies in terms of process management. Furthermore, this phenomenon demands future research to formulate models that could facilitate the recreation of impact scenarios in appropriating technology, determining centralized and distributed supply chains, different roles of the elements, changes in the links, and the cost-benefit implications that this may represent.
Supplementary Materials: The following are available online at https://www.mdpi.com/article/10 .3390/pr9060982/s1, Table S1: Applications-System Dynamics Modeling in Supply Chain Management; Table S2: Study variables in system dynamics models based on the Supply Chain. | 11,417 | 2021-06-02T00:00:00.000 | [
"Engineering",
"Business"
] |
$|V_{ub}|$ and $B\to\eta^{(')}$ Form Factors in Covariant Light Front Approach
$B\to (\pi, \eta, \eta')$ transition form factors are investigated in the covariant light-front approach. With theoretical uncertainties, we find that $B\to (\pi, \eta, \eta')$ form factors at $q^2=0$ are $f^{(\pi, \eta, \eta')}_{+}(0)=(0.245^{+0.000}_{-0.001}\pm 0.011, 0.220 \pm 0.009\pm0.009, 0.180\pm 0.008^{+0.008}_{-0.007})$ for vector current and $f^{(\pi, \eta, \eta')}_{T}(0)=(0.239^{+0.002+0.020}_{-0.003-0.018}, 0.211\pm 0.009^{+0.017}_{-0.015}, 0.173\pm 0.007^{+0.014}_{-0.013})$ for tensor current, respectively. With the obtained $q^2$-dependent $f^{\pi}_{+}(q^2)$ and observed branching ratio (BR) for $\bar B_d\to \pi^+ \ell \bar \nu_{\ell}$, the $V_{ub}$ is found as $|V_{ub}|_{LF}= (3.99 \pm 0.13)\times 10^{-3}$. As a result, the predicted BRs for $\bar B\to (\eta, \eta') \ell \bar\nu_{\ell}$ decays with $\ell=e,\mu$ are given by $(0.49^{+0.02+0.10}_{-0.04- 0.07}, 0.24^{+0.01+0.04}_{-0.02-0.03})\times 10^{-4}$, while the BRs for $D^-\to (\eta,\eta')\ell\bar\nu_{\ell}$ are $(11.1^{+0.5+0.9}_{-0.6-0.9}, 1.79^{+0.07+0.12}_{-0.08-0.12})\times 10^{-4}$. In addition, we also study the integrated lepton angular asymmetries for $\bar B\to (\pi,\eta,\eta')\tau \bar\nu_{\tau}$:$(0.277^{+0.001+0.005}_{-0.001-0.007},0.290^{+0.002+0.003}_{-0.000-0.003},0.312^{+0.004+0.005}_{-0.000-0.006})$.
Compared with nonleptonic B decays, semileptonicB → η (′) ℓν ℓ andB s → η (′) ℓ + ℓ − decays are much cleaner and thus might be more helpful to explore the differences among various mechanisms. In particular, a sizable flavor-singlet component of η (′) predicts larger BRs for B → η ′ ℓν ℓ than the η modes, while the chiral symmetry breaking enhancement could give the reverse results [7]. Nevertheless, before one considers various possible novel effects on η (′) , it is necessary to understand the BRs forB → η (′) ℓν ℓ decays without these exotic effects. In our previous work [7], we used the perturbative QCD approach [8] to calculate the B → η (′) form factors at large recoil; then the same whole spectrum as a function of invariant mass of ℓν ℓ for the form factors is assumed with that in the light-cone sum rules (LCSRs). Despite the predicted results for various branching ratios are consistent with the experimental data, it is meaningful to examine the same processes in other parallel frameworks. This is helpful to reduce the dependence on the treatments of the dynamics in transition form factors. The motif of this work is to employ another method to deal with the form factors: the covariant light-front (LF) approach [9,10]. Since the predictions of B → π form factors in LF model match very well with those applied to the nonleptonic charmless B decays, it is worthy to understand what we can get the B → η (′) form factors by this approach.
At the quark level, theB → η (′) ℓν ℓ is induced by b → ulν transition which will inevitably involve theūu component of the η ( ′ ) meson. Then the convenient mechanism for the η − η ′ mixing would be the quark flavor mixing scheme, defined by [11,12] where η q = (uū + dd)/ √ 2, η s = ss and angle φ is the mixing angle. By the definition of , the masses of η q,s can be expressed by Here, m qq and m ss are unknown parameters and their values can be obtained by fitting with the data. In terms of the quark-flavor basis, we see clearly that m qq and m ss are zero in the chiral limit. The advantage of the quark-flavor mixing scheme is: at the leading order in α s only the quark transition from the B meson into the η q component is necessary; while the other transitions like B → η s are suppressed by α s . The gluonic form factors (or referred to as flavor-singlet form factors) will be remarked later.
For calculating the transition form factors, we parameterize the hadronic effects as with P µ = (P ′ + P ′′ ) µ and q µ = (P ′ − P ′′ ) µ . Since the light quarks in B-meson are u-and d-quark, the meson P could stand for π and η q states.
In the covariant LF quark model, the transition form factors for B → P could be obtained by computing the lowest-order Feynman diagram depicted in Fig.1. Below we will adopt the same notation as Ref. [9] and light-cone coordinate system for involved momenta, in which the components of meson momentum are read by P ′ = (P ′− , P ′+ , P ′ ⊥ ) with P ′± = P ′0 ± P ′3 . The relationship between meson momentum and the momenta of its constitutent quarks is given by P ′ = p ′ 1 + p 2 and P ′′ = p ′′ 1 + p 2 with p 2 being the spectator quark of initial and final mesons. Additionally, one can also express the quark momenta in terms of the internal with x 1 + x 2 = 1. Here, the notation with tilde could represent all momenta in the initial and final mesons.
In order to formulate the results of Fig. 1, the quark-meson-antiquark vertex for incoming and outgoing mesons are respectively chosen to be where H ′ P is the covariant light-front wave function of the meson. Consequently, the amplitude for the loop diagram is straightforwardly written by where N c = 3 is the number of colors, N with the M ′ (M ′′ ) being the mass of the incoming (outgoing) meson. As usual, the loop integral could be performed by the contour method. Therefore, except some separate poles appearing in the denominator, if the covariant vertex functions are not singular, the integrand is analytic. Thus, when performing the integration, the transition amplitude will pick up the singularities from the anti-quark propagator so that the various pieces of integrand are led to be We work in the q + = 0 frame and the transverse momentum of the quark in the final meson is given as The new function of h ′ M for initial meson is given by with where e i can be interpreted as the energy of the quark or the antiquark, M ′ 0 can be regarded as the kinetic invariant mass of the meson system and ϕ ′ P is the LF momentum distribution amplitude for s-wave pseudoscalar mesons. The similar quantities associated with the outgoing meson can be defined by the same way.
After the contour integration, the valance antiquark is turned to be on mass-shell and the conventional LF model is recovered. The formulas of the form factors in the LF quark model shown in Eq. (7) would contain not only the terms proportional to P µ and q µ , but also the terms proportional to a null vectorω = (2, 0, 0 ⊥ ). This vector is spurious, because it does not appear in the standard definition of Eq.(3), and spoils the covariance. In the literature, it is argued that this spurious factor can be eliminated by including the so-called zero-mode contribution, and a proper way to resolve this problem has been proposed in Ref. [9]. In this method, one should obey a series of special rules when performing the p − integration.
A manifest covariant result can be given with this approach, which is physically reasonable.
Using Eqs. (7)-(9) and taking the advantage of the rules in Ref. [9,10], the B → P form factors are straightforwardly obtained by where the relation of f P − (q 2 ) to f P 0 (q 2 ) can be read by Clearly, one has f P + (0) = f P 0 (0). After we obtain the formulae for the B → P transition form factors, the direct application is the exclusive semileptonicB → P ℓν ℓ decays. The effective Hamiltonian for b → uℓν ℓ in the standard model (SM) is given by Although these decays are tree processes, however, if we can understand well the form factors, there still have the chance to probe the new physics in these semileptonic decays [14,15]. Hence, the decay amplitude forB → P ℓν ℓ is written as To calculate the differential decay rates, we choose the coordinates of various particles as follows q 2 = ( q 2 , 0, 0, 0), p B = (E B , 0, 0, |p P |), where . It is clear that θ is defined as the polar angle of the lepton momentum relative to the moving direction of the B-meson in the q 2 rest frame. With Eqs. (14) and (15), the differential decay rate forB → P ℓν ℓ as a function of q 2 and θ can be derived by Since the differential decay rate in Eq. (16) involves the polar angle of the lepton, we can define an angular asymmetry to be with z = cos θ. Explicitly, the asymmetry forB → P ℓν ℓ decay is Moreover, the integrated angular asymmetry can be defined bȳ The angular asymmetry is only associated with the ratio of form factors, which supposedly is insensitive to the hadronic parameters. Plausibly, this physical quantity could be the good candidate to explore the new physics such as charged Higgs [14], right-handed gauge boson [15], etc.
Before presenting the numerical results for the form factors and other related quantities, we will briefly discuss how to extract the input parameters for the η q in the presence of η −η ′ mixing. Following the divergences of the axial vector currents where G = G aµν are the gluonic field-strength andG =G aµν ≡ ǫ µναβ G a αβ , the mass matrix of η q,s becomes with a 2 = 0|α s GG|η q /(4 √ 2πf q ) and y = f q /f s . Using the mixing matrix introduced in Eq.
(1), one can diagonalize the mass matrix and the eigenvalues are the physical mass of η and η ′ . Correspondingly, we have the relations [13] sin and m η (′) is the mass of η (′) . Once the parameters φ, y and a are determined by experiments, we can get the information for m qq,ss and f q,s . Then, they could be taken as the inputs in our calculations.
After formulating the necessary pieces, we now perform the numerical analysis for the form factors and the related physical quantities introduced earlier. For understanding how well the predictions of LF model are, we first analyze B → π form factors at q 2 = 0.
By examining Eq. (11), we see that the main theoretical unknowns are the parameters of distribution amplitudes of mesons, masses of constitute quarks and the decay constants of mesons. As usual, we adopt the gaussian-type wave function for pseudoscalar mesons as with β ′ P characterizing the shape of the wave function. Other relevant values of parameters are taken as (in units of GeV) m B = 5.28 , m b = (4.8 ± 0.2) , m π = 0.14 , where m u,d are the constituent quark masses, the errors in them are from the combination of linear, harmonic oscillator and power law potential [16] and f P denotes the decay constant of P-meson. The shape parameters βs are determined by the relevant decay constants whose analytic expressions are given in Ref. [10]. Following the formulae derived in Eq. (11) and using the taken values of parameters, we immediately find where the first and second errors are from (i) β ′ B and β ′ ηq (ii) the quark masses m u and m b , respectively. From Eq. (11), one can see that the form factor f ηq + (q 2 ) does not depend on the mass m qq , while the dependence of m qq in f ηq T resides in the term M ′ +M ′′ (in this case m B + m qq ). The uncertainty of f T caused by the m qq is less than 2%. Furthermore, since the form factors are associated with mixing angle φ, the corresponding uncertainties for B → η (′) and BRs ofB → η (′) ℓν ℓ are expected to be 2.1% (1.4%) and 4.2% (2.8%), respectively. Despite different treatments of quarks' momenta, the results here are well consistent with that in light-cone quark model constructed in the effective field theory [18]: f ηq + (0) = 0.287 +0.059 −0.065 . Intriguingly, our results are also consistent with f η + (0)| LCSR = 0.231 +0.018 −0.020 and f η ′ + (0)| LCSR = 0.189 +0.015 −0.016 calculated by LCSRs [19]. In order to understand the behavior of whole q 2 , the form factors for B → P are parametrized by [17] where F i denotes any form factor among f +,0,T . The fitted values of a, b for B → (π, η, η ′ ) are displayed in Table I In the quark flavor mixing mechanism, the η and η ′ meson receives additional coupling with two gluons, due to the axial anomaly. Thus to be self-consistent, in the study of the transition form factors, one also needs to include the so-called gluonic form factors which is induced by the transition from the two gluons into the η ( ′ ) . In our study, the gluonic form factors have been neglected and there are two reasons for this. In the light-front quark model, the leading order contribution to the form factor is of the order α 0 s while the gluonic form factor is suppressed by the α s , where the coupling constant is evaluated at the typical scale µ ∼ Λ QCD × m B (with Λ QCD hadronic scale). The inclusion of the gluonic form factors also requires the next-to-leading order studies for the quark content, which is beyond the scope of the present work. Secondly the factorization analysis of the gluonic form factors such as the perturbative QCD study in Ref. [22] reflects that there is no endpoint singularity in the gluonic form factors and the PQCD study shows that the gluonic form factors are negligibly small. This feature is also confirmed by the recent LCSR results [19]. For terms without endpoint singularity, different approaches usually obtain similar results. Thus our results of the semileptonic B → η (′) lν will not be sizably affected by the gluonic form factors, although they are not taken into account in the present analysis.
Besides the form factors could be the source of uncertainties, another uncertain quantity in exclusive b → uℓν ℓ decays is from the Cabibbo-Kobayashi-Maskawa (CKM) matrix element V ub ∼ λ 3 with λ being Wolfenstein parameter. Results for V ub determined by inclusive and exclusive decaying modes have some inconsistencies [15,20]. For a selfconsistent analysis, we take B → π form factors calculated by LF model and the data [20] as the inputs to determine the |V ub |. Neglecting the lepton mass, one gets the differential decaying rate forB → πℓ ′ν where only the f π + form factor involves. Accordingly, the value of V ub is found by With the obtained result of |V ub | LF , the form factors in the Table I, the predicted BRs for B − → (η, η ′ )ℓν ℓ , together with the experimental results measured by BaBar collaboration [21], are displayed in Table II. The predicted result for the BR of B − → ηℓν ℓ is about two times larger than that of B − → η ′ ℓν ℓ : the form factor of B − → η is larger than the form factor of B − → η ′ ; the phase space in B → η ′ ℓν ℓ is smaller. Branching ratios for decays with a tau lepton are naturally smaller than the relevant channels with a lighter lepton. Mode Finally, we make some remark on the D decays. We find that the obtained information on η (′) can be directly applied to the semileptonic D + → η (′) ℓν ℓ decays. Since the associated respectively. It is found that B(D − → η ′ ℓν ℓ ) is almost one order of magnitude smaller than B(D − → ηℓν ℓ ). The reason for the resulted smallness is just phase space suppression. Our predictions are well consistent with the recent measurements by the CLEO collaboration [23]: B(D − → ηℓν ℓ ) = (1.33 ± 0.20 ± 0.06) × 10 −3 , This consistence is very encouraging. The D − → η ′ lν may be detected in the near future.
Our results are also consistent with the results given in Ref. [24]. | 3,837.4 | 2010-03-22T00:00:00.000 | [
"Physics"
] |
Nine Months of Hybrid Intradialytic Exercise Training Improves Ejection Fraction and Cardiac Autonomic Nervous System Activity
Cardiovascular disease is the most common cause of death in hemodialysis (HD) patients. Intradialytic aerobic exercise training has a beneficial effect on cardiovascular system function and reduces mortality in HD patients. However, the impact of other forms of exercise on the cardiovascular system, such as hybrid exercise, is not clear. Briefly, hybrid exercise combines aerobic and strength training in the same session. The present study examined whether hybrid intradialytic exercise has long-term benefits on left ventricular function and structure and the autonomous nervous system in HD patients. In this single-group design, efficacy-based intervention, twelve stable HD patients (10M/2F, 56 ± 19 years) participated in a nine-month-long hybrid intradialytic training program. Both echocardiographic assessments of left ventricular function and structure and heart rate variability (HRV) were assessed pre, during and after the end of the HD session at baseline and after the nine-month intervention. Ejection Fraction (EF), both assessed before and at the end of the HD session, appeared to be significantly improved after the intervention period compared to the baseline values (48.7 ± 11.1 vs. 58.8 ± 6.5, p = 0.046 and 50.0 ± 13.4 vs. 56.1 ± 3.4, p = 0.054 respectively). Regarding HRV assessment, hybrid exercise training increased LF and decreased HF (p < 0.05). Both conventional Doppler and tissue Doppler imaging indices of diastolic function did not change after the intervention period (p > 0.05). In conclusion, long-term intradialytic hybrid exercise training was an effective non-pharmacological approach to improving EF and the cardiac autonomous nervous system in HD patients. Such exercise training programs could be incorporated into HD units to improve the patients’ cardiovascular health.
Introduction
Cardiovascular disease is the leading cause of death in the hemodialysis (HD) population [1]. According to the literature, patients who receive HD therapy experience cardiac dysfunction and structure abnormalities [2], dysfunction of the autonomous nervous system and cardiac arrhythmias [1,3], as well as reduced cardiorespiratory fitness [4]. These factors are strongly associated with the high cardiovascular morbidity and mortality rate that characterizes the current HD population [1]. Previous research has suggested that the reduced heart rate variability (HRV) in HD patients may play an essential role in the higher risk of cardiovascular complications and sudden cardiac death [5].
Conventional HD therapy itself has been associated with various cardiovascular abnormalities and increased cardiovascular stress [1,6]. Intradialytic myocardial stunning Sports 2023, 11, 79 2 of 10 (ischemia-mediated temporary reduction in cardiac function) may, over time, lead to irreversible fibrotic changes and chronic HF, arrhythmias, and sudden cardiac death (SCD) [7]. As HD is a frontline treatment option for end-stage renal disease and is required for these patients' survival, interventions aiming to counterbalance its potential adverse effects on cardiovascular health are considered crucial.
It is well known that HD patients are usually physically inactive [8,9], and it seems that there is an association between low physical activity levels and all-cause mortality in this population [10]. Exercise training is considered to be an effective and safe non-pharmacological approach in terms of health, functional capacity and quality of life improvement in HD patients [11][12][13]. The most popular form of exercise for these patients is intradialytic aerobic exercise using cycle ergometers [13,14]. Aerobic exercise training has been shown to induce several beneficial effects on the cardiovascular system of HD patients, reducing cardiovascular events, improving autonomic function [15], increasing left ventricular ejection fraction [16,17], improving left ventricular mass [18], increasing cardiorespiratory fitness [13,19] and physical performance [20], improving stroke volume and cardiac output [17], reducing blood pressure [21] and improving their lipid profiles [22]. In addition, two recent studies showed that a single bout of intradialytic aerobic cycling reduced myocardial stunning [23,24].
Other forms of exercise, such as hybrid exercise, have been effective in improving overall health and quality of life parameters in patients with chronic diseases, including HD patients [25]. Briefly, a typical hybrid exercise session includes both aerobic (i.e., cycling) and resistance exercises (i.e., using elastic bands) and can also be implemented during the HD session [25]. A recent study from our group revealed that a single session with hybrid intradialytic exercise was well tolerated by HD patients and did not negatively affect left ventricular function during therapy [26]. The long-term effect of this form of exercise on cardiovascular risk profile, both at rest and during a HD session, has to be examined.
The present study examined whether hybrid intradialytic exercise has long-term benefits on left ventricular function and structure and autonomous nervous system in HD patients. All measures of HRV as well as LV structure and function were assessed at rest as well as during and after an acute HD therapy session, before and after the intervention period in patients on HD. The Echocardiographic scans were collected before the initialization of the HD session, during the last hour of the HD session and after the end of the HD session. The HRV parameters were collected prior to HD therapy, every one hour of HD therapy and after the end of the HD session. All parameters were collected while patients were resting on the bed.
Participants
Patients were recruited from the HD unit of the local hospital. The inclusion criteria for the study were: being on HD for at least three months or more with adequate dialysis delivery and with a stable clinical condition. The exclusion criteria included: (1) presence of diagnosed neuropathies (2) presence of a catabolic state within three months before the start of the study, (3) or unable or did not agree to participate in an exercise training program. None of the recruited patients were engaged in any systematic exercise training program 3 months prior to the initialization of the study. After the initial screening, twelve patients (10M/2F, 56 ± 19 years) fulfilled the criteria and enrolled in the study (Figure 1). The Human Research and Ethics Committee approved the study of the University of Thessaly, and it was approved by the bioethics committee of the University General Hospital of Larissa, Greece (UHL). All patients gave their written informed consent before study initialization. The whole study is registered at ClinicalTrials.gov (NCT01721551) as a clinical trial, while this current study presents a subset of data acquired under the registered RCT study.
The Human Research and Ethics Committee approved the study of the University of Thessaly, and it was approved by the bioethics committee of the University General Hospital of Larissa, Greece (UHL). All patients gave their written informed consent before study initialization. The whole study is registered at ClinicalTrials.gov (NCT01721551) as a clinical trial, while this current study presents a subset of data acquired under the registered RCT study.
Hybrid Intradialytic Exercise Program
Patients followed a 9-month intradialytic exercise training program supervised by two specialized exercise physiologists. With regards to the aerobic exercise program, supine cycle exercise was performed three times weekly for 60 min each time during the first 2 h of HD sessions using an adapted bicycle ergometer (Model 881 Monark Rehab Trainer, Varberg, Sweden) at an intensity of 50-60% of the patient's maximal exercise capacity (in Watts), which was estimated during a previous HD session using a modified
Hybrid Intradialytic Exercise Program
Patients followed a 9-month intradialytic exercise training program supervised by two specialized exercise physiologists. With regards to the aerobic exercise program, supine cycle exercise was performed three times weekly for 60 min each time during the first 2 h of HD sessions using an adapted bicycle ergometer (Model 881 Monark Rehab Trainer, Varberg, Sweden) at an intensity of 50-60% of the patient's maximal exercise capacity (in Watts), which was estimated during a previous HD session using a modified version of Åstrand Bicycle Ergometer test [27]. This test required the patient to cycle in the supine position at 50 rpm while the intensity was increased by 10 watts every 1 min until exhaustion. Afterward, the patients performed 20 min of resistance exercise using resistance bands (TheraBand© professional Latex, AKRON, OH 44,310, USA-Resistance from Green to Silver) and portable ankle weights and dumbbells. Briefly, the resistance training program consisted of 3 sets of 12 repetitions of the following exercises: (i) resistance bands exercises: chest press, triceps extension, shoulder flexion, hip abductions, seated Sports 2023, 11, 79 4 of 10 row; (ii) ankle weights and dumbells exercises: knee extension, biceps curl, hip fexion, shoulder press, side shoulder raise, straight-legged raise. The hand with the fistula was excluded from any exercise. The resistance training intensity was assessed by the Borg Rating of Perceived Exertion (RPE) scale and set to be between 14 and 16 (medium to hard). The aerobic part was implemented at the beginning of the training bout followed by the resistance component. The interval between the two types was 10 min. The workto-rest ratio regarding the resistance training was 1:1. Resting between sets and exercises included lying down doing nothing for 2 min. The exercise intensity and the resistance of the hybrid exercise program were assessed every 6 weeks and adjusted accordingly. In particular, the intensity of the aerobic part of the program was adjusted (in watts) based on the performance of the patients in the modified version of the Åstrand Bicycle Ergometer test. As mentioned in the text, the resistance training intensity was assessed by the Borg RPE scale and set to be between 14 and 16. When the patient reported values lower than 14, the resistance was re-adjusted by changing the type of the elastic bands (i.e., from Green to Blue, etc.) and increasing the dumbbell weight.
Hemodialysis Procedure
The patients underwent HD therapy (4 h × 3 times per week) (Fresenius 4008B, Oberursel, Germany) with low flux, hollow-fiber dialysers and bicarbonate buffers. An enoxaparin dose of 40-60 mg was administered intravenously before the beginning of each HD session. In addition, Erythropoietin therapy was given after the completion of the HD session in order to normalize hemoglobin levels within 11-12 (g/dL).
Echocardiography
Echocardiographic scans were performed by an experienced cardiologistechocardiographer using an iE33 echocardiographic system (Philips Medical Systems, Andover, MA, USA). All image acquisitions were made with the subject lying in the left lateral decubitus position using a 2.5 MHz transducer. Three consecutive beats were analyzed in each scan for each patient, and the mean value was used in the subsequent statistical analysis. A single-lead ECG inherent to the echocardiographic system was used for the recording of HR. Left ventricular (LV) dimensions were determined from 2-dimensional guided M-Mode images according to the American Society of Echocardiography (ASE) recommendations for chamber quantification [28] using the parasternal long-axis acoustic window. LV mass was calculated from M-Mode traces at the mitral valve level and determined in g by using the recommended ASE formula. LV mass index was calculated by dividing LV mass by body surface area (using the DuBois and DuBois formula) and height to minimize the effects of age, gender, and overweight status [28]. For the assessment of LV diastolic function, the transducer was applied apically (4-chamber view) whilst a pulsed wave Doppler sample volume (4 mm) was located at the tips of the mitral valve leaflets. Doppler gain, pulse repetition frequency, and high-pass filter were all adjusted to maximize the signal-to-noise ratio. The following parameters were evaluated: early peak flow velocity (E), late peak flow velocity (A); thus, the ratio of E to A was calculated. The ejection fraction was calculated using the biplane Simpson's method from 2-dimensional apical 2-and 4chamber orientation to evaluate the patient's systolic function. Tissue Doppler velocities were assessed at the basal septum, using pulsed-wave Doppler. The sample volume (2 mm) was placed at the basal septum at the level of the mitral annulus ring in parallel to the longitudinal movement of the septum. Peak early diastolic (E') and peak late diastolic (A') myocardial tissue velocities were assessed and the E'/A' ratio was calculated. In addition, the conventional Doppler E to tissue Doppler E' ratio (E/E') was calculated.
Heart Rate Variability Assessment
Heart rate variability was measured using heart rate monitors (RS800CX, Polar Electro Oy, Kempele, Finland) validated for heart rate variability assessment [29]. For the heart rate variability time domain, the square root of the mean of squared differences Sports 2023, 11, 79 5 of 10 between successive RR intervals and the percentage of successive normal-to-normal intervals greater than 50 milliseconds were computed [30]. For the HRV frequency domain, the low and high-frequency bands, expressed in normalised units (nu) and their ratio (low frequency/high frequency) were reported [30]. HRV indices (low-frequency activity, high-frequency activity, low-frequency/high-frequency activity, the square root of the mean of squared differences between successive RR intervals and the percentage of successive normal-to-normal intervals greater than 50 milliseconds) were analyzed using Kubios Heart Rate Variability Analysis Software V1.1 (Kubios Oy, Business ID 2740217-3,Varsitie 22, 70150 Kuopio, FINLAND). The HRV parameters were collected prior to HD therapy, every one hour of HD therapy and after the end of the HD session.
Blood Chemistry
Routine monthly biochemical results were recorded, including C reactive protein, ferritin, iron, hematocrit, and hemoglobin. The analyses were performed at the clinical biochemistry lab of the University Hospital of Larissa under standard hospital procedures.
Statistical Analysis
Statistical analysis was performed using one-way repeated-measures analysis of variance (ANOVA). When ANOVA showed statistical significant differences between measurements, Bonferroni's correction for multiple comparisons was performed to assess where specific differences occurred. In addition, for comparing initial and final values (pre and post-exercise training), paired t-tests were used. The results are expressed as mean ± SD. All the statistical analysis was performed using the Statistical Package for the Social Sciences (SPSS for Windows, version 18.0, Chicago III). The level for statistical significance was set at p ≤ 0.05.
Results
All twelve HD patients completed the 9-month intervention program without any adverse effects. Patient basic characteristics are presented in Table 1. Echocardiographic data are presented in Table 2. Significant improvements were observed in the EF after intradialytic exercise both at baseline and after the nine-month intervention period, whilst the pre-HD value of EF appears to be significantly improved when assessed at baseline after the nine months of exercise training compared to the baseline value (p = 0.046). Both conventional Doppler and TDI indices of diastolic function did not change after the intervention period (p > 0.05). Finally, regarding the measurement Sports 2023, 11, 79 6 of 10 performed at the end of the HD session, EF% (p = 0.054) and DT (p = 0.014) increased and decreased, respectively, after the intervention. HRV indices are presented in Table 3. LF reduced and HF increased, while pNN50% was higher after the intervention (p < 0.05).
Discussion
This study investigated the effect of a 9-month hybrid exercise training regimen, undertaken during HD sessions, on left-ventricular function and structure and HRV (both assessed at rest, during the HD therapy and after the end of the HD session) in hemodialysis patients. The findings of our study demonstrated that long-term hybrid intradialytic exercise did not negatively impact the ejection fraction or HRV, and possibly could improve left ventricular function and HRV. These outcomes bear high clinical significance as HD patients are vulnerable to cardiovascular problems and increased mortality.
The present study showed that nine months of hybrid exercise during HD sessions significantly increased resting LV EF. Possible mechanisms explaining the increase of EF after systematic exercise training include increased oxygen supply of cardiac muscle, reduction in cardiac afterload and augmented function of cardiac autonomic nervous system activity [17,22,31]. The current study's findings of positive changes in EF with a hybrid exercise intervention confirm and extend previous studies using long-term aerobic intradialytic exercise training [17]. For instance, in the study by Deligiannis and colleagues, six months of intradialytic aerobic exercise resulted to a 5% increase in resting EF [17]. An approximately 4% increase in resting EF was observed in the present study. Therefore, similarly to pure aerobic exercise, hybrid intradialytic exercise could have cardioprotective properties (as it uses similar aerobic exercise exposure in the aerobic part of the exercise regimen) and thus could be recommended to these patients.
There is some evidence revealing beneficial effects of intradialytic exercise on HD therapy itself such as improved HD efficiency and increased solute removal [20,32], reduced motor restlessness [33] as well as improved psychological parameters [34], post-dialysis fatigue [25] and sleep quality [35]. Similarly, with acute intradialytic hybrid exercise [26], the application of nine months of the same exercise form did not result in improvements or impairments in left ventricular diastolic function parameters when assessed during the HD therapy. Future studies may explore the effect of even longer and different (i.e., using higher intensities) interventions using hybrid exercise on cardiovascular parameters during the HD therapy.
In the present study, the HF and the LF parameters were found to be significantly increased and decreased, respectively, after nine months of exercise training; they showed a favorable adaptation to exercise of cardiac autonomic nervous system activity. According to the literature, exercise training can reduce emotional distress and concomitantly improve HRV [36], reducing susceptibility to arrhythmias [37]. Our findings bear a high clinical significance as in previous studies; a reduction in the SDNN, LF, and LF/HF parameters that predicted cardiovascular death and, more specifically, sudden death [38].
A large body of evidence shows that HD patients are vulnerable to cardiovascular diseases and have very high mortality. Although many studies reveal that exercise can improve the functionality of various physiological systems and overall health, most HD patients are physically inactive [8] and they do not participate in exercise training programs, despite having a positive perception of exercise [39]. Hybrid exercise is a relatively new form of training that combines aerobic with resistance exercise. The current study's findings support intradialytic exercise training programs as non-pharmacological methods to improve cardiovascular system functionality in HD patients, introducing this form of exercise as an alternative to traditional aerobic exercise. Hemodialysis units can be ideal settings for delivering safe and effective exercise programs for the patients, improving health and quality of life parameters, while the incorporation of exercise professionals into these units could help the patients to engage in exercise interventions.
We must acknowledge that some studies did not report significant improvements in HRV and left ventricular function after aerobic intradialytic exercise training [40]. This can be attributed to both the reduced duration of the intervention compared to the current study and differences in the nature of exercise training (aerobic vs hybrid). The current study has some strengths and weaknesses that we wish to acknowledge. The main limitation of the study is the lack of a control group. This was a single-group design, efficacy-based nine-month intervention, and the participants were recruited from a single HD unit, thus it was difficult to find patients who were willing to undergo the examination without doing exercise. Patients on hemodialysis are unique and require continuous and extensive care to keep active and healthy, so such long-term interventions are not common. On the other hand, the study's strengths were the long duration of the supervised exercise program and the echocardiographic examination that was performed, among other points, during the HD session (a very challenging and demanding procedure). Randomized-controlled trials, with larger sample sizes, need to be conducted in the future to compare the effectiveness of hybrid exercise over other traditional forms of exercise on this specific population.
In conclusion, nine months of supervised hybrid intradialytic exercise training did not negatively impact the ejection fraction or heart rate variability indices. On the contrary, it seems that the combination of aerobic and resistance training in a single bout of exercise has a positive effect on ejection function and heart rate variability in stable hemodialysis patients. Hybrid intradialytic exercise training is well tolerated and could be suggested as a non-pharmacological approach for improving cardiovascular health in hemodialysis patients. | 4,592 | 2023-03-31T00:00:00.000 | [
"Medicine",
"Biology"
] |
Photovoltaic Modules Diagnosis Using Artificial Vision Techniques for Artifact Minimization
: The installed capacity of solar photovoltaics has increased over the past two decades worldwide, evolving from a few small scale applications to a daily power source. Such growth involves a great impact over operating processes and maintenance practices. The RGB (red, green and blue) and infra-red monitoring of photovoltaic modules is a non-invasive inspection method which provides information of possible failures, by relating thermal behaviour of the modules to the operational status of solar panels. An adequate thermal measurement module strongly depends on the proper camera angle selection relative to panel’s surface, since reflections and external radiation sources are common causes of misleading results with the unnecessary maintenance work. In this work, we test a portable ground-based system capable of detecting and classifying hot-spots related to photovoltaic module failures. The system characterizes in 3D thermal information from the panels structure to detect and classify hot-spots. Unlike traditional systems, our proposal detects false hot-spots associated with people or device reflections, and from external radiation sources. Experimental results show that the proposed diagnostic approach can provide of an adequate thermal monitoring of photovoltaic modules and improve existing methods in 12% of effectiveness, with the corresponding financial impact.
Introduction
Accelerated demographic and economic growth in several countries has led to an increase in the electrical energy demand. Currently, Chile promotes the electric power generation with non-conventional renewable energies, mainly solar photovoltaics (PV), because of the country outstanding sunlight conditions, its climate and its geographical location [1]. However, despite favorable characteristics, technological innovations and advances of the electrical sector, there is a concern regarding to the PV-module performance and duration in desert environments. It is estimated that approximately 27% of PV-plant failures occurred as a result of damage to PV modules [1]. In this context, preventive maintenance carried out periodically could extend the PV-plant lifetime, providing trouble-free operation.
The future of the photovoltaic plant inspection is focused on the maintenance robotics with emphasis on robots able to detect and correct damaged electric equipment [2]. Nowadays, both autonomous robotics and tele-operated machines, despite being useful, have found only limited application because of payload capacity of platforms (mainly aerial platforms), limited access and rugged environments [3].
Robotic platforms can be combined with Internet of Things (IoT) and artificial intelligence algorithms, perform a variety of PV-related tasks such as visual inspection [2,4] (modules, wiring and/or other plant equipment), infrared thermography [5] (hot-spots detection) and vegetation Operational and maintenance traditional practice, commonly known as a truck roll, is a method that mainly involves dispatching personnel and mobile vehicles to a PV-plant, to carry out routinized inspections and equipment servicing tasks -preventive maintenance-or to correct equipment failures after their occurrence -corrective maintenance-. These truck rolls typically cost companies between 300 and 600 dollars per visit depending on labor and fuel costs, as well as distance of the dispatch [15]. If using robotized machinery, the costs might vary between 1.000 and 5.000 dollars for climbing robots. In addition, aerial monitoring can costs thousands of dollars, like the Unmanned Aircraft System (UAS) developed by Electric Power Research Institute, which costs between 16.000 dollars and 100.000 dollars per unit. However, these aerial platforms have operated costs relatively small [15].
Truck roll using ground robots is another inspection alternative which promises to be an adequate solution for performing the maintenance work in photovoltaic plants [3]. Nowadays, significant progress has already been made in several fields, becoming in an advanced technology, which is used to develop maintenance task in free access places due to its great autonomy [18]. These systems are ideal technology for heavy duty tasks such as detecting and correcting damages over photovoltaic panels or to perform their cleaning [11]. Nevertheless, the systems have important challenges related to detection system of damages and the navigation in the PV-plant [10].
A variety of different defects stemming from manufacturing, internal module damages, faulty interconnections, defective bypass diodes or hard and soft shadowing affect the PV equipment [19]. Early warning of these failure conditions enables to find appropriate solutions to improve efficiency, while enhancing the PV system economics and reducing the labor cost [20]. Ultrasonic methods [21], infra-red analysis [22,23] and electroluminescence imaging [24] allow to identify and reject defective cells during the early stages of manufacturing process. However new damages are possible to appear either within the assembly stage or during the operating lifetime.
Photovoltaic module maintenance technologies can be classified in two groups: invasive and noninvasive. Invasive technology uses the module proprioceptive information (e.g., voltage and current of the module, internal temperature) to sense a fault condition. With this aim, methods based on AC parameter characterization [25], comparison of the PV model in normal and test condition [26,27], laser beam induced current [28], electron beam induced current [29] and DC electrical parameters [30] have been developed. Unfortunately, these methods often require calibration of models and specific sensors network, restricting their usage. On the other hand, noninvasive methods use exteroceptive sensors [15] (e.g., monocular camera, laser, infra-red camera) to perform the PV-plant inspection. In addition, exteroceptive sensors used can be mounted in a robotic platform. However, the usage of this technology is limited by adverse environmental conditions. Imperfections over PV-modules may be seen as hot-spots in the thermal images as a consequence of the abnormal short-circuit current in at least one PV-cell [15,31]. This unusual behaviour appears when PV-cells are totally or partially shaded, cracked, damaged or electrically mismatched. Under these conditions, defective cells are forced to conduct a current higher than their generation capabilities, becoming reverse biased, entering in a breakdown regime and subsequently sinking power instead of sourcing it. These issues can be detected using a visual and a thermal analysis. However, one important challenge is related to automatic detection of hot-spots in a thermal image. Since the inspection system needs to be able to detect the PV-structure, determining hot-spots in images and localizing the damages in the PV-module, a variety of algorithms based on matching learning and computer vision have been used. For example, Tsanakas et al. proposes a hot-spot detection algorithm based on the image histogram and line profile (see [32]), although this approach strongly depends on the thermal viewing patterns (only used with high contrast mode). In addition, it does not filter false hot-spots, as well as hot-spots in the background of the image. With the same aim, in [33,34] it is proposed an algorithm based on standard thermal image processing and Canny's edge detector, with similar results and without facing the problem of the reflection, artificial light sources and external hot-spots. M. Aghaei et al. shows the defects in PV-modules using a Laplacian model and a Gaussian filter [35]; however the false hot-spots analysis is not addressed. Additionally, photovoltaic modules can be detected using calculation of the co-occurrence matrices for fragments of images, as shown in [36]. On the other hand, image segmentation using k-means was proposed in [37] which needs the previous conversion of each acquired frame into CIELAB color space, as also shown in [38].
In this work, an inspection system of PV-modules is presented with the aim of characterizing the most representative artifacts associated with the PV's functionalities, and hence to improve preventive maintenance. The proposed system detects and classifies hot-spots associated with PV-module failures, by eliminating false positives related to external radiation sources and reflections. Due to the system flexibility, it can be mounted on any manual, tele-operated or robotic platform. The issue addressed in this work can be expressed as follows: given an RGB and infra-red image set, the goal is to identify hot-spots stemming from manufacturing defects, module damage, temporary shadowing, defective bypass diodes, and faulty interconnections. The proposed system is described in two stages: PV-module detection and hot-spots classification.
In the first stage, unlike the works presented in [16,36,[39][40][41][42], we use the same visual information acquired to localize the sensor and to reconstruct the PVs. The latter is based on an artificial vision system strategy called photogrammetry, but instead of using RGB (red, green and blue) information, as used in [43][44][45], we use the same thermal information. The method was empirically and theoretically validated in a previous work of the authors shown in [46]. The second stage is focused on using the 3D reconstruction and visual information processing to extract artifacts from the PVs. This stage includes the artificial intelligence algorithms used for classification and pattern recognition described herein.
This manuscript is organized as follows. Section 2 presents in detail the hardware, software and protocols followed to test our methods for PV characterization; Section 3 shows the results obtained when tested our approach on PVs under different environmental and lightning conditions. Section 4 shows a conscientious pros and cons of our approach as well as a comparative table of our approach with others already published that face similar issues as the ones presented herein. Additionally, we include an analysis of the financial impact of our methodology in preventive maintenance tasks. Finally, Section 5 shows the conclusions and our future work.
Material and Methods
The infra-red inspections performed in this work uses long wave infra-red detection methods, suitable for detecting a host of different risks in PV-modules. We use a monocular camera to acquire the RGB and thermal environmental features. The data are sent to a companion station for its processing and evaluation, where the risky hot-spot cells are identified, and the size of the affected surface is computed. All algorithms presented here were implemented in C/C++ under Windows 10 operating system and in Matlab R2017a programming environment. The system operation is summarized as follows: 1. The device is exclusively used for RGB and thermal inspection of PV-modules. The system can be used under variable lighting conditions, but not under direct sunlight. 2. The device is positioned facing the PV-structure at different locations. The distance from the tripod to the structure base and tilt angle of the camera vary depending on field conditions. 3. The distance of the tripod locations between two consecutive images was empirically computed (about 20 cm), in order to ensure proper performance of the matching algorithm (as stated in [47]). 4. The structure supporting the camera can be moved around a horizontal axis, as shown in Figure 1a.
Roll and yaw rotations are blocked on the tripod, as shown in Figure 1b. 5. The RGB and thermal images, from a single location, are merged to obtain a new 2D representation with visual and thermal information. The used thermal camera has a fusion mode, which allows to directly merge thermal and visual images for each location. The latter is important since photovoltaic modules need to be displayed at the proper angle relative to their surface to obtain accurate thermal measurements and reduce the probability of misinterpretations. 6. The proposed two-step algorithm isolates the PV-structure and detects collapsed cells by analysing the visual and thermal information. 7. The PV-module inspection involves storing a merged image (thermal and visual information) of each tranche of PV-structure (between modules), with reference position. In addition, the system returns an inspection report with the location of each hot-spot and the damaged area. 8. Finally, a 3D view, providing thermal and geometrical information of the PV structure, is made available. Following, each step is explained in detail.
Hardware Design
The thermal camera was mounted on a manually-operated tripod, as shown in Figure 2. The exteroceptive data provided by the sensor are stored in a local memory card and later processed on a ground computer. Briefly, • The device deployed in real applications is a commercial tripod Soligor WT-330A, whose technical characteristics such as the weight (0.73 kg) and height (51.5 cm), allow for its portability in inhospitable and rugged environments. The tripod has a load capacity of 3 kg, that is suitable for this type of applications. • Visual and thermal images are acquired with a Ti25 Fluke thermal camera. This device is mounted on the plate of the tripod, aligned with the vertical axis. Each new frame is stored locally in a 16 GB internal memory card with its respective time stamp. • Both cameras have been previously calibrated to find the focal point and estimate their main parameters.
Figure 2.
Proposed measurement system. The system is composed of an thermal camera Fluke Ti25, which is provided with a monocular RGB and infra-red camera. The device was mounted on a commercial tripod.
The technical specifications of the proposed system are summarized in Table 2.
Photovoltaic Module Segmentation
To filter all hot-spots it becomes mandatory the implementation of an efficient segmentation of the PV-modules, especially when such hot-spots are not associated with PV-structure. In this section, the mathematical formulation and the detailed analysis of the segmentation algorithms are derived.
The architecture of the module extraction algorithm is shown in Figure 3, and summarized in the following sections. Hotspots are detected and classified using probabilistic analysis.
A full 3D view,
is then available. The false hot-spots are providing thermal and geometrical information of the PV-structure filtered.
1.2
Pre-processing PV-module detection Data analysis Hot-spot 2
Pre-Processing Images
Lens distortion and noise are two phenomenons that directly disturb the acquired images (both RGB and thermal). To measure PV-panel distances in world-units and to compute the camera's position on the environment, these phenomenons must be filtered out. With this aim, traditional camera calibration method is used as presented in [48]. In addition, digital processing algorithms -presented herein-help to reduce the noise and correct image defects, guaranteeing the PV-module detection.
Matching Algorithm
Due to the low resolution of the infra-red camera and with the aim of detecting false hot-spots, the PV-module is acquired in different images and in multiple frames. All pre-processed images are then merged. To do this, we implemented the matching algorithms previously published in [47], to get an image with the desired PV-structure. Additionally, the camera position is determined in this step using structure from motion, which is a process that estimates the 3D structure of a scene from a set of 2D views [43]. In addition, a pre-processing stage based on Brightness and Contrast Adjustment, previously published in [49], it is used to improve the photovoltaic module detection.
Background Filtering
The merged image has additional objects, which are not related to the PV-structure. These objects from the scene have to be filtered or eliminated to isolate the PV-modules. With this goal, the merged image is subjected to two thresholding conditions. First, the color constrain is applied to obtain a gray-scale image. Then, since the PV-module brightness is related to the saturation of the image, it is possible to eliminate all secondary objects by manipulating these parameters. The thresholds are manually defined by evaluating the histograms of the gray-scale image and the saturation image in off-line mode. Finally, the system refines the PV-structure estimation using a filter, which aim is to remove all connected components that have fewer than P pixels. Such parameter, P, is determined off-line and it is related to the module size. This step returns a binary mask associated with the surface of PV-modules.
Perspective Correction
The inclination of PV-modules is reflected on the binary mask as a perspective distortion. This phenomena must be considered and corrected to compute the real position of the each detected hot-spot. Homographic mapping method illustrates the relationship between two different views of the same real world scene. Let p and p be the corresponding projected image points on the image plane of two different views of the same point located in the 3D real world coordinates system, where the coordinates of this pair of matching points in homogeneous form can be respectively denoted as (x 1 , y 1 , z 1 ) T and (x 2 , y 2 , z 2 ) T . The homographic mapping is a planar projective transformation, that can be expressed as shown in Equation (1) for an homogeneous form. The main challenge is the selection of vectors [x 1 , y 1 , z 1 ] T to compute the homogeneous transformation matrix H: Due to the fact that the system is positioned facing rectangular PV-modules, the proposed algorithm searches for patterns with similar characteristics to module edges on the binary mask. Hough transform can be successfully used to solve this problem, since this method allows to identify the section of the binarized image where high probability of finding straight lines exits (see [50] for further reading). The Hough transform defines a straight line as a co-linear set of points, mapping R 2 into the function space of sinusoidal functions defined by: where ρ, θ are the perpendicular distance of the line i to the center of the coordinates and the angle between the normal of this line and x-axis, respectively. Figure 4 illustrates the relation between ρ, θ and line i . Hough's parameters (ρ and θ) allow to find the horizontal and vertical metal edges of the PV-module. Since there are several metal edges in the merged image, related to the amount of inspected PV-modules, the algorithm selects those edges that do meet two conditions: (a) two horizontal lines, that are parallel and equidistant, and have the maximum separation among them; (b) two diagonal lines, that have a slope between [−30 • , −30 • ], are complementary and have the maximum separation among them.
Once the four lines have been detected, the system finds the specific cut points, which are then used for solving the homogeneous transformation matrix. This step returns a binary mask correcting the perspective distortion and the homogeneous transformation matrix.
Photovoltaic Structure Refined
Sometimes, the corrected binary mask eliminates a portion of the PV-structure due to the thresholding process. To avoid this, the system uses Normalized Cross Correlation (NCC) to evaluate the similarity between different surfaces of the image with a fixed pattern, which is determined in off-line mode, and it is related to the dimension of the module in an image. The NCC is a cosine-like correlation coefficient, which is defined as: where: f is the corrected binary mask; t is the mean of the fixed pattern selected; and f u,v is the mean of f (x, y) in the region under the fixed pattern selected.
If the value of NCC is closer to 1, then it represents that two images are more similar. Finally, the algorithm returns a refined binary mask with the surface of PV-modules and the number of the analysed PV-modules.
Inspection Algorithm
Once the surface of PV-modules is segmented from the rest of the environment with the previous method, the algorithm detects probable hot-spots using infra-red information provided by the 2D cloud.
Temperature Scale Adjustment
The correlation between the PV-module operating temperature T c and the three basic environmental variables (the ambient air temperature T a , the air velocity V f and the incident irradiance flux G) is computed by the following semi-empirical equation [51]: With this information, the algorithm finds a simple and fast correlation between the expected temperature of each PV-cell, and the measured ones obtained from the field thermographic inspection. The estimated temperature of PV-cell is subtracted from each acquired image, in order to obtain an image with the temperature variation.
The temperature scale adjustment procedure of a thermal image I t (x, y) (gray-scale mode) can be formulated as follows: where (x, y) is the image coordinates, T max and T min are the maximum and minimum temperature of the thermal image respectively and T c is the estimated temperature of the PV-cell.
Hot-Spot Detection
Ideally, the estimated temperature of PV-cell is similar to the temperature provided by the infra-red sensor. In this work, it was empirically determined that a variation of 5 • C induces an abnormal overall temperature pattern, witnessing a potential hot-spot. In addition, this temperature was selected based on the thermal camera accuracy. The algorithm filters all temperatures that are not more than 5 • C, and puts a one in those temperatures that fulfil this condition.
The algorithm clusters the measurements on the raw binary image. To reduce computational time, the algorithm uses an edge detector. Edge detector stage extracts the edges of tentative hot-spots from the rest of the image, and it is based on a combination of contrast adjust and morphological operations.
Then, all measurements related to same hot-spot are merged using Fuzzy C-Means algorithm [52] and each characteristic is stored in a matrix of measurements M t for each time t.
where x t i and y t i are the coordinates of the probable hot-spot in pixels.
Since several hot-spots associated with the same characteristic are detected in different frames, it is necessary to relate the frame t + 1 to frame t. Therefore, the corresponding points between two sequential images are initially found in order to compute the displacement and the rotation between both images. These points allow for the system to deliver a transformed version refereed to an initial image, which is then analysed in order to detect the probable hot-spots and to create the matrix M t+1 . The merging between matrices M t and M t+1 at time t and t + 1 can be performed using Mahalannobis distance, as shown in Equation (8).
where M t i are the coordinates of i-th detected characteristic by the sensor, M j t+1 are the coordinates of j-th previously detected characteristic and Σ is the co-variance matrix of the hot-spot, associated with the j-th previously detected characteristic. The algorithm begins with an empty matrix M 0 .
Then, if such distance is greater than an established threshold, the detected characteristic is a new hot-spot. Otherwise, the system merges the detected characteristic with the associated hot-spot. The new mean is defined as: where µ n and µ n−1 are the coordinates of the center of the hot-spot for n and n − 1 detection respectively. Due to the system mainly searches damaged PV-cells, the threshold is directly determined with the PV-cell width measured in pixels. This parameter is determined in off-line mode.
The new covariance matrix associated with the hot-spot is computed as: where Σ i M k,m is the element (k row, m column) of the covariance matrix of the detected characteristic.
False Hot-Spot Extraction
In this work, our system detects false hot-spots from reflections by analysing the statistical behaviour of the hot-spots.
Once there are not new measurements related to a previously detected hot-spot, it is necessary to verify if this hot-spot is a consequence of module defects. An internal hot-spot always appears in the same placement in an image, regardless of the camera position. In this work, since the camera moves around the y-axis, false hot-spots will be located in different position on each image. To differentiate between a false hot-spot and a true hot-spot, the system computes the difference between the hot-spot covariance matrix Σ and the covariance matrix of the last hot-spot detection Σ last M . Frobenius distance is used, as shown in Equation (9).
If this distance d mat is less than a threshold, which is determined as 10% of the value of the PV-cell width squared, the hot-spot is a consequence of external radiation sources.
3D Reconstruction
The set of images (both RGB and infra-red) must be attached to a common global reference frame, to achieve a complete view of the PV-structure. To fully characterization the PV-structure, an algorithm based on image matching, perspective correction, norm cross-correlation and Hough transform was implemented. Briefly: 1. The matching algorithm previously detailed in Section 2 returns a panoramic view of all PV-structure and the camera's locations for each image respect to the first image. 2. The perspective distortion of the panoramic view is corrected with the homogeneous transformation matrix computed in Section 2. The binary corrected mask is applied to this corrected panoramic view, obtaining the photovoltaic surface viewed from a parallel plane. 3. The normalized cross-correlation computed in Section 2 provides the number of PV-panels in the analysed images and the location of each PV-module. The algorithm takes advantage of the PV-module shape and determines the width and height of each panel by comparing the distance between boxes determined in the images and the real distances of the PV-module. 4. The system eliminates false hot-spots detected, and replaces their area with the estimated temperature of the PV-cell T c . 5. The system returns the 3D fully characterization of the PV-structure and the location of the camera.
Results
The experimental part consisted on the evaluation of an array with eight PV-modules. The thermographic measurements took place in the city of Valparaíso, Chile, by three daily sets, i.e., January 14, 15 and 16 of 2018, under variable sky conditions. Each set included three instant measurements, according to the time: (i) 08:00 hs, power on of modules; (ii) 12:00 hs, steady-state conditions; and (iii) 18:00 hs, power off of modules. To compute the algorithm robustness regarding variations of camera position and lightning conditions, the images were acquired at two distances from the PV-structure: at 3 m (on January 14th) and 4 m (on January 15th and 16th). An overview of the recorded environmental conditions is shown in Table 3. Ambient air temperature, wind velocity and humidity were obtained by a local weather station and a portable temperature sensor. We measured the solar irradiance flux using a pyranometer. The illuminance values were measured using a conventional luxometer.
The inspection system was mounted on a commercial tripod. The complete system was positioned in front an array of monocrystalline silicon photovoltaic modules, whose main technical characteristics are summarized in Table 4. To obtain the entire PV-structure, the system was displaced in a straight line path, equidistantly to PV-structure, maintaining the camera plane fixed. About 40 images (both thermal and RGB) were acquired in each test.
The status of the tested photovoltaic modules is in proper operating condition. Under this state, we generated four hot-spots in the structure by shading two PV-cells. Figure 5 shows a picture of the tripod facing the PV-structure tested here. Figure 6 presents the resultant images in all four stages of the PV-module detection approach, with regards to: (a) Photovoltaic array analysed at 8:00, the system was positioned at 3 m with respect to PV-structure; (b) Photovoltaic array analysed at 18:00, the system was positioned at 4 m with respect to PV-structure; and (c) Photovoltaic array analysed at 12:00, the system was positioned at 4 m with respect to PV-structure. Figure 6, in all cases, depicts the merged image. The raw merged image is subjected to thresholding condition, which is based on the image saturation and a color filter. The thresholds are determined by analysing the histograms of the gray scale image and the saturation image in off-line mode, as shown in Figure 7. For the first case, the usual range for module detection factor is [30,85], as shown in Figure 7a. The usual range of the saturation in this experiments for each analysed image is [0.6, 0.8], as shown in Figure 7b. The usage of Hough transform allows to correct the perspective distortion, as shown in third image row of each case. In addition, this step returns the homogeneous transformation matrix. The segmentation is refined by finding the maximum values in the Normalized 2D cross-correlation, as shown in Figure 8. This step provides us the location of each panel in the general image, as shown in Figure 9. The resultant images from this stage provide a valuable sum of binary data, overly clear from possible erroneous variation. These data, in the form of the images with the location of each PV-module, constitute the mask of all images. The PV-surface detection is an important action in order to guarantee the true detection of hot-spots. The algorithm in its first part applies image processing tools and develops a cropped module image with only cell regions, isolating the PV-module from the rest of the environment, as shown in Figure 10.
Hot-spot 1 True Hot-spot 3 True
Hot-spot 2 True Hot-spot 4 True Figure 10. Results of the hot-spot detection: The system searches for two hot-spots over the analysed photovoltaic module. The camera is positioned facing the PV-structure at 3 m.
Four measures are used to evaluate the obtained results quantitatively. On the one hand, the numbers of correctly detected pixels, either belonging to the object or to the background, are respectively called true positives (TP) and true negatives (TN). On the other hand, the numbers of incorrect detection are, respectively, called false positives (FP) for background pixels included into the object or false negatives (FN) for object pixels included into the background. These different measures are used in the computation of six parameters, two special parameters: Dice coefficient (DIC) and Jaccard index (JCD), as well as the four traditional parameters: precision, sensitivity, specificity, and accuracy. Table 5 shows the statistical analysis of the photovoltaic module detection algorithm. It is worth noticing that our proposal's accuracy rises up to 96.33% in the best case and 94.05% in the worst case, whereas its precision is 95.93% and 92.86% for the best and worst case respectively.
Hot-Spots Detection
Once the system isolates the PV-modules from the rest of the environment, the algorithm identifies the position of hot-spots on each PV-module. Four hot-spots were generated in the PV-structure by shading two cells, which generated a temperature change in the cell surface and two punctual hot-spot above of each one. The results of hot-spot detection are shown in Figure 10. It is possible to observe that the covariance matrix -coloured circles-is maintained constant when the system detected a true hot-spot, and it wraps the area of the hot-spot. In addition, the mean value of the estimated hot-spot -red circle-was approximately located in the center of the real hot-spot.
As previously mentioned, the performance of the algorithm is analysed through four measures: the number of correctly detected pixels belonging to TP, TN, FP and FN. These different measures are used in the computation of three parameters: precision, sensitivity and accuracy. The statistical analysis for each experiment is shown in Table 6. It is possible to note that the algorithm is able to detect surfaces affected by hot-spots with a precision of 98.0% and accuracy of 97.1% in the worse cases respectively. Table 6. Statistical analysis of hot-spot detection algorithm.
Day
January 14th January 15th January 16th
False Hot-Spots Detection
To test our system, we simulated three hot-spots associated with the linear edge shunt (see [53]) in a monocrystalline photovoltaic module, and using an external radiation source, we generate a false hot-spot in the PV-module surface, as shown in Figure 11. Data acquisition consisted on taking thirty visual and thermal images in order to completely scan the PV-module. The complete system was located facing the PV-structure at a distance of approximately one meter to ensure that the cameras acquired all PV-module in each frame. This distance was determined by performing a first scan and then manually verifying that the entire PV-module was acquired on each frame. The angle between to ground and PV-module planes is approximately 80 • , in order to generate the false hot-spot associated with the external source radiation used. Initially, each acquired image was pre-processed using the photovoltaic module detection algorithm. Figure 12 shows the results of the fusion algorithm and the results of the hot-spot detection algorithm. It is possible to observe that the covariance matrix -color circles-is maintained constant when the system detected a true hot-spot, and it wraps the area of the hot-spot. In addition, the mean value of the estimated hot-spot -red circle-was approximately located in the center of the real hot-spot.
On the other hand, it is possible to note that the covariance matrix increased for false hot-spots, wrapping a greater area than the area of detected hot-spot. In this case, the mean value of estimated hot-spot was located outside of the real hot-spot. The fault diagnostic of each detected hot-spot is shown in Table 7. The system returns the location of each hot-spot respect to upper border of PV-module, the damaged surface and the reason the possible failure.
3D Reconstruction
The system provides the 3D visual and thermal reconstruction of PV-modules, as shown in Figure 13, which offers a complete morphological characterization of each PV-module. If the system detects false hot-spots, the surfaces associated with false hot-spots are replaced with average temperature from PV-module in good condition. Figure 14a,b respectively show the results of 3D-thermal reconstruction with and without false hot-spots.
Discussion
In this work, a fault diagnostic algorithm based on thermographic analysis was proposed. The experimental results showed that the system was capable of detecting PV-modules with approximately 94.05% accuracy for each studied case. The system used the geometric patterns of photovoltaic panels. For this reason, the previous training is not necessary. The computational burden was less than 0.1 s. for each acquired frame. An important disadvantage was related to data acquisition, since it was completely manual in this work. This drawback can be overcome by mounting the inspection system in an autonomous or tele-operated robotic platform.
Once that PV-modules were segmented, the system was able to detect hot-spots associated with photovoltaic failures on PV-modules. In contrast with traditional and commercial systems, our proposal was capable of determining if a detected hot-spot was related to panel failures (true hot-spot) or external agents such as speculator objects present in the background (false hot-spot), which could cause false alarms. Unfortunately, there are further limitations referring to the lack of damage classification ability, which is under research by the authors.
Comparison with Existing Approaches
To test the effectiveness of our approach against others previously published, we repeated the experimentation shown in Section 3 with the following approaches [32,33,37]. We chose such works since they better resemble the goals of our research. However, the three methods has two limitations: (i) the analysed methods extract manually the regions of interest (PV-modules). In this context, we test the PV-module segmentation algorithm with these techniques; (ii) the three methods cannot detect false hot-spots, they are limited to true hot-spots, unlike our work.
ROI, Line Profile Analysis and Image Histogram Analysis
Tsanakas et al. proposed in [32], an algorithm for detection of damaged PV-cells, which is based on image histogram analysis and line profile analysis. The main aim of the algorithm is to find regions on the PV-module with elevated risk of failures. In contrast with Tsanakas' algorithm, in our work, we automatically find the regions of interest (ROI) by applying the proposed PV-module detection algorithm. Since our procedure finds all PV-module in the merged image, the histogram analysis is simplified. Figure 15 shows, the histograms of ROI 1, 2 and 3, for the measurement of January 14th at 8:00, with regards to detected modules (5 and 8). Although in the three histograms, there is a main pixel (Y-axis) distribution in the temperature (X-axis) range from 24.48 • C to 24.61 • C that practically corresponds to estimated cell temperature, the histograms of ROI 2 and 3 are characterized by a second distribution data, approximately between 28.1 • C and 30 • C, that corresponds to damaged cells in each ROI. In addition, Tsanakas et al. locates hot-spots using line profile analysis of the regions of interest previously analysed. The temperature is relatively fixed in the the line profile analysis of the ROI 1. It implies that the ROI 1 is not affected by a hot-spot. On the other hand, the temperature increased between pixels 225 and 255 for the ROI 2, indicating that the region is affected by a hot-spot between these pixels. Similarly, the ROI 3 is affected by a hot-spot between pixels 800 and 900.
Color Segmentation with k-Means
Salazar et al. proposed a hot-spot detection algorithm, which segments colors in an automated fashion using the CIELAB color space and K-means clustering. This technique has two important disadvantages: (i) the algorithm does not extract the PV-module from the rest of the environment; (ii) The algorithm can be only used with images acquired with high contrast mode. Figure 16 shows the results of the algorithm. Sometimes, the algorithm performs a bad segmentation because of the confusion of the colors. In addition, if the hot-spots are small, the system eliminates these failures.
Panel 5
Panel 8 Panel detected Hot-spot detection
Bad segmentation
Hot-spot Hot-spot Figure 17 shows the resulting images in all four stages of the intended diagnostic approach with regards to: module 5 and module 8. The first column depicts the raw thermal images of each case. The binary mask is presented in the second column. The last two columns show the results from the applied edge detector and the output of the algorithm. Depending of threshold selected in the stage of binarization, the small hot-spot can or cannot be detected.
Raw thermal image
Thresholding Canny edge detector Output edge map/count Figure 17. Results of Canny edge detection and output edge map/count applied to: module 5 and 6.
Statistical Analysis
The three methods are evaluated with the metrics described previously in Section 2. The comparison is shown in Table 8. It is possible to note a 12% increase in the algorithm precision with respect to other existing methods, improving the thermal monitoring of the photovoltaic structure. An attractive and novel advantage of our system is the capacity of filtering the false hot-spots automatically. In addition, system has a particular characteristic that makes it more attractive for industrial applications: its detection time is less than 2.7 s, which compared to other methods has a significant improvements (e.g., the algorithm based on color segmentation takes 25 s).
Potential Benefits
Currently, the infra-red inspections of PV-modules utilizes Long Wave Infrared detection methods, suitable for detecting several damages in PV-cell. Such defects can be stemmed from a host of different errors (e.g., manufacturing defects, module damage, partial shadowing, PV-cell delamination). The traditional systems carry out a zone-inspection to reduce the maintenance time. This inspection allows to identify defective modules or rows of defective modules and, sometimes, damaged cells or rows of damaged cells . However, several defects are significantly small, and due to the low resolution of thermal cameras, they cannot identified. Unlike traditional systems, our proposed system performs a detailed inspection that allows to detect a single cell, partial cell and regional cell heating.
Methods based on short wave infra-red technology are carried out in field to detect snail trail, dead spots, voids or micro-cracks, but they requires of controlled conditions (e.g., Electroluminecence technology detects PV-cell defects in a dark environment).
In addition, our system is able to automatically isolate PV-structure from the environment background. This characteristic will allow to mount the system in a robotic platform to optimize the inspection time. Table 9 summarizes the main contributions of the proposed system, comparing with other methods analyzed in this work.
Assessing tHe Costs and Benefits of System
Determining the cost-benefit of system is a difficult proposition given the early stage of the system and a general lack of available data. Generally, most crystalline silicon solar cells decline in efficiency by 0.50%/ • C and more amorphous cells decline by 0.15-0.25%/ • C. In Chile, it is estimated that approximately 27% of PV-plants failures occurred as a result of damages in PV-modules. It is known that the performance of PV-modules reduces with the temperature increase and, sometimes, this increment can be elevated. The monthly losses can be analyzed using the following factor L t , which is computed as follows: L t = g × (T c + 25) where g is a temperature factor provided by manufacturer, that indicates the power decrease when the cell temperature increases 1 • C, and T c is the PV-cell temperature. Considering a period time of one year, it is obtained a mean value for L t of 9.5%. On the other hand, two phenomena that affect the PV-module are dust, particles and dirt, and the shadowing. Different researches show that the losses associated with dust, particles and dirt must be less than 3%, and losses related to shadows must be less than 2%. Our approach can detect these failures using the visual inspection and thermography monitoring in a simple and easy way, which helps to increase the PV-panel efficiency.
Conclusions
In this work, a PV-module fault diagnosis algorithm based on infra-red thermography was proposed and experimentally assessed. The system was able to detect and classify hot-spots stemming from manufacturing defects, module damage, temporary shadowing, defective bypass diodes, and faulty interconnections. The two-stages algorithm allowed us to isolate the PV-modules from the rest of the environment and to detect real hot-spots while filtering false hot-spots. Concerning our PV-module detection algorithm, we found the PV-module with a precision of 92.86% and an accuracy of 94.05% in the worst case. On the other hand, the system had a precision of 95.12% and an accuracy of 94.19% in the hot-spots detection. In addition, our system was capable of filtering false hot-spots due to the analysis of hot-spot position in each frame and the proper segmentation provided by the detection algorithm. Experimental results showed that the quality of the output depends on the accuracy in the classification and segmentation of the module in the RGB camera readings and the thermal image, respectively. Misclassification produced due to the mixed pixels problem could lead to incorrect conclusions about the thermal status. This work pushed forward some artificial vision methods applied to the exploitation of information provided by different sensors. The absence of such technology commercially available will lead the authors future work in order to design, develop and test more efficient hardware and accurate processing algorithms. | 9,674 | 2018-06-28T00:00:00.000 | [
"Engineering",
"Environmental Science"
] |
Commissioning and initial experience with the first clinical gantry‐mounted proton therapy system
The purpose of this study is to describe the comprehensive commissioning process and initial clinical experience of the Mevion S250 proton therapy system, a gantry‐mounted, single‐room proton therapy platform clinically implemented in the S. Lee Kling Proton Therapy Center at Barnes‐Jewish Hospital in St. Louis, MO, USA. The Mevion S250 system integrates a compact synchrocyclotron with a C‐inner gantry, an image guidance system and a 6D robotic couch into a beam delivery platform. We present our commissioning process and initial clinical experience, including i) CT calibration; ii) beam data acquisition and machine characteristics; iii) dosimetric commissioning of the treatment planning system; iv) validation through the Imaging and Radiation Oncology Core credentialing process, including irradiations on the spine, prostate, brain, and lung phantoms; v) evaluation of localization accuracy of the image guidance system; and vi) initial clinical experience. Clinically, the system operates well and has provided an excellent platform for the treatment of diseases with protons. PACS number(s): 87.55.ne, 87.56.bd
I. INTRODUCTION
Proton therapy has been known for the capability of delivering highly precise radiation doses to tumor volumes while protecting healthy tissue from radiation side effects for better outcomes. (1,2) However, this advanced technology has not been widely accepted because of: i) cost, exceeding $150 million for multiroom systems, ii) the space required to host accelerator, beamline, transport systems, and treatment rooms, and iii) complex delivery systems that require engineers specially trained and certified to run and maintain.
A compact proton therapy machine, specifically a single-room proton therapy unit, is appealing to hospitals with a population of regional patients who benefit from this technology, based on current clinical evidence. The benefits of a single-room system include reduced cost for i) the machine, ii) space, due to smaller footprint, iii) the construction and installation, and iv) maintenance, due to lower system complexity and power consumption. A compact system is much easier to integrate with the rest of the hospital geometrically and administratively, instead of being detached at remote location. It improves the flexibility of moving patients between proton rooms or from proton service to adjacent IMRT service in the event of a system breakdown. A compact system operates similar to a photon system as the beam is no longer orchestrated among multiple rooms by a team of engineers With a lower financial barrier, the demand for single-room systems is expected to grow rapidly as more institutions can practically support such technology.
The world's first single-room proton therapy system, the Mevion S250 (Mevion Medical Systems, Littleton, MA) was installed and commissioned in the S. Lee Kling Proton Therapy Center at Barnes-Jewish Hospital in St. Louis, MO, USA. The system has been in clinical operation since the December of 2013. In this study, we present the comprehensive commissioning process and initial clinical experience with the Mevion S250.
II. MATERIALS AND METHODS
The Mevion S250 system includes a synchrocyclotron (1.8 m in diameter and 22 tons in weight) mounted on a gantry that rotates from -5° to 185° around isocenter. A pair of annular superconducting coils made of Nb-Sn superconductor and a pair of shaped ferromagnetic poles are used to generate a solenoid magnetic field that peaks at 8.7 Tesla at the center of the synchrocyclotron to produce a bundle of protons with nominal energy of 250 MeV. The coils, hosted by a stainless steel bobbin inside a cryostat, are cooled to 4K by cyrocoolers connected to liquid helium. A magnetic regenerator is placed close to the extraction point to produce a bump in the magnetic field that disrupts the vertical focusing of protons. Angle and pitch of proton orbits are altered toward the extraction channel.
The unique design in mounting the synchrocyclotron on the gantry eliminates the need for a transportation beamline and further reduces the requirement on space. However, the output and energy are impacted. As the gantry rotates, the gravity on the superconducting coils can shift the magnetic field by tenths of a millimeter with respect to the magnetic field regenerator designed to deviate protons into the extraction channel. To compensate for the gantry-angle-dependent energy fluctuations of protons into the extraction channel, a variable-thickness wedge is introduced at the entrance to the extraction channel to fine-tune the proton energy. Although this mechanism produces consistent mean energy output at all gantry angles, the energy spectrum differs slightly as protons go through various thickness of scattering material. As no energy selection system is present downstream in the beamline to filter out undesired energies, the variations in energy spectrum could have a direct impact on the monitor unit (MU) chamber and cause moderate output dependence on gantry angle. This effect has to be evaluated in commissioning, and accounted for in determination of output for each treatment field.
The beam extracted at 250 MeV is adjusted to the energy required for the treatment by absorber wheels that are made of graphite, and a pair of coarse energy absorber and a fine absorber that are made of Lexan. However, a magnetic analyzer is not present in the beamline to maintain a tight energy width after being degraded. This design is different from commercially available models from other vendors. Its dosimetric impact needs to be evaluated at the entry region and distal falloff.
Twenty-four options are available for treating targets with range up to 32 cm with maximum field size of 14 cm, and up to 25 cm in depth with maximum field size increased to 25 cm. The beam-shaping system includes a first scatter with 21 steps, a pair of alternate coarse range absorbers, a fine range absorber, three secondary scatters and 14 range modulation wheels.
Steps in the modulation wheels are compensated for scattering with an appropriate ratio of lead and Lexan. All options are categorized into three groups, "small," "deep," and "large," based on maximum field size and range. "Small" options treat targets up to 20 cm in depth with a maximum field size of 14 cm and modulation width from 2 cm to the range. "Deep" options treat targets with depth less than 32 cm but larger than 20 cm. The maximum field size of deep options is 14 cm and the maximum modulation width is limited to 10 cm. The deep options are mainly designed for prostate treatment. "Large" options treat targets up to 25 cm in depth with maximum field size of 25 cm and modulation width up to 20 cm. Options in the same group share the same secondary scatter. The nominal SAD of the machine is 200 cm.
A. CT Calibration
Conversion from Hounsfield units (HU) to relative proton stopping power ratio (PSPR) plays a vital role in proton therapy. General perception of uncertainty of 3.5% on range is accepted and applied widely in proton therapy due to the uncertainty associated with CT imaging and conversion from HU to PSPR. It has been found that stoichiometric CT calibration is more precise than the tissue substitute calibration. (3) Following Schneider's original implementation, (4) an electron density phantom CIRS model 062 (5) (Computerized Imaging Reference Systems Inc., Norfolk, VA) was selected for evaluation of HU vs. PSPR due to its popular availability and easy access to the published data on substitutes' composition. The phantom was made of an inner cylindrical part (head) and an outer ring (body). The base material of the phantom was water-equivalent plastic with holes to accommodate 17 inserts. The dimensions of the phantom were 33 cm in width and 27 cm in height. All tissue substitutes were built with physical and electron densities similar to the recommendations of ICRU Report 44.
To evaluate the uncertainty in our CT calibration approximately, we did an inter-institute comparison with the institutional calibration curves employed in three other proton institutions: University of Pennsylvania, MD Anderson Cancer Center, and ProCure Oklahoma. The same phantom with inserts in exactly the same arrangement was circulated through all participating institutions. After the phantom was scanned with available CT protocols for proton therapy in participating proton institutions, the acquired CT images and CT calibration curves were collected through a secure FTP server. However, CT calibration curves from all participating institutions could not be compared directly because the CT scanners involved in building the calibration curves were different in terms of manufacture, model, and energy. The PSPRs of the inserts were instead determined by applying the corresponding institutional CT calibration curves on the collected CT images, and plotted with HUs from our institution. This process rebuilt CT calibration curves from all participating institutions on the same CT scanner for direct comparison. Assuming the average of all four calibration curves the ground truth, range uncertainties from the variations of CT calibration were evaluated on our clinical plans. This process transferred variations in CT calibration into range variations. It is estimated that the range uncertainty is ± 0.5% from imaging and calibration, and ± 0.5% from CT conversion to tissue if ionization energy is excluded. (6) We expect our range uncertainty from the CT calibration alone is close to that estimation. Since we were only interested in comparing PSPRs obtained with the active CT protocols used in the participating institutions, where the selection of kVp, FOV, slice thickness, and filter matched the parameters used for commissioning their CT calibration curves, the impact of the variations in institutional CT protocols was not studied and reported in this manuscript.
The CT calibration was also tracked on historic data from 2008 to 2014 using measurements from annual QA tests. The purpose of this assessment was to evaluate the long-term reliability of CT calibration for proton therapy, which, if understood, would help to decide the frequency of quality assurance for the CT scanner. A replacement of X-ray tube was performed in 2011, which offered the best opportunity to check the stability of the CT scanner.
B. Beam Data
The general guideline for acquiring beam data for photon beam commissioning has been described in the report of AAPM Task Group 106. (7) Many aspects apply to proton beam commissioning. Two special considerations apply to the Mevion S250.
First, the acquisition time for each measurement point has to be integrated over 2.08 s in order to allow the beam spot to be distributed equally over a rotating range modulation wheel (RMW). This period, coming from the interplay between the pulse frequency and modulation wheel rotation, allows the beam spot to be distributed evenly on the RMW. Otherwise, significant measurement uncertainties would be observed. However, this increases time for data acquisition.
Second, the output dependency on gantry angle has to be measured due to the presence of a fine wedge that compensates for the shifting of magnets during gantry rotation.
The beam model was commissioned in an Eclipse V11.0.30 (Varian Medical Systems, Palo Alto, CA), which employs a pencil-beam algorithm. (8) As all TPS commercially available use similar algorithms with slight differences in the modeling of Bragg peak and the calculation of spread-out Bragg peak (SOBP), the results and discussion in this study apply generically.
For commissioning the pencil-beam algorithm for passive scattering, four sets of measurements are required. They include (i) percent depth dose in water, (ii) longitudinal profile in air under a open block, (iii) lateral profiles in air under a half-block, and (iv) lateral profiles in air without blocking. Percent depth-dose curves were acquired using a 3D scanning tank (Blue phantom, IBA Dosimetry America, Bartlett, TN) at nominal source-to-surface distance (SSD) of 200 cm with water surface leveled at radiation isocenter, using a parallel-plate chamber (PPC05, IBA Dosimetry). The PPC05 chamber has a sensitive volume of 0.046 cc with 9.9 mm in diameter and 0.6 mm in electrode spacing. An open ring aperture was used for all depth-dose measurements to minimize collimator scatter. The entrance window of the PPC05 is made of air-equivalent plastic C-552 with physical thickness of 1 mm. A shift of 2.2 mm away from the source was applied on the acquired depth-dose curve to account for i) 1.55 mm downstream shift of the effective measurement point at the inner surface of the entrance window, and ii) 0.65 mm rise of water surface after the moving parts holding the chamber holder submerged under the water surface. The dimensions of the moving parts are illustrated in Fig. 1.
The width of a pristine Bragg peak is defined at the 90% of the peak dose. Distal penumbra is defined as the distance from the 80% to the 20% in the distal falloff region. Both properties were measured and reported as they provided important information on range straggling and energy spread of protons.
Source size was determined by acquiring profiles in air at various distances from the nominal source position under half-beam block using edge detector (Sun Nuclear Corporation, Melbourne, FL). The diode in the edge detector had a dimension of 0.8 mm in both width and length, making it ideal for measuring the sharp dose gradient. The snout position for all lateral profiles was fixed to 40 cm. The source size was modeled as a function of residual range of proton, which was achieved with various nozzle-equivalent thicknesses (NET) by setting the modulation wheel at various steps. Virtual source-axis-distance (VSAD) was determined by acquired profiles in air at transverse planes, 20 cm upstream from the machine isocenter, along isocenter, and 20 cm downstream from the machine isocenter under a square block using the edge detector.
Effective source-axis-distance (ESAD) was determined by acquired longitudinal profiles along the central axis using a cylindrical type ionization chamber (FC65, IBA Dosimetry). Measurements were taken at 11 points equally distributed around the machine isocenter on a span of 40 cm. The ESAD was calculated by fitting measurements according to the inverse square law. The ESAD was a function of residual range of proton, as well.
C. Dosimetric commissioning
Validation measurements were taken for all 24 options using open fields. They included percent depth dose along the central axis and transverse profiles at various depths (middle of SOBP, one-third of modulation width upstream, and downstream from the middle of SOBP). Measured doses were compared to predictions from the TPS. We performed 1D gamma analysis using 3%/1 mm criteria and deemed pass with results larger than 90%. The use of 1D gamma analysis was mainly used as a metric for evaluating the accuracy of beam modeling on SOBP except the distal falloff region. Any discrepancy larger than 3% on the proximal shoulder or larger than 5% in the entrance region was tuned by adjusting the partial shinning correction (9,10) and entrance region correction.
As historically noted, prediction of output and monitor unit (MU) is not supported in current treatment planning systems. Although our treatment planning system was fully commissioned to ensure accurate calculation of dose distribution, MUs are assigned to each field by explicit measurement. A verification plan was generated for each beam by duplicating the beamline onto a digital water phantom that mimicked the measurement phantom. Apertures were maintained and the range compensator was removed in the verification plans to eliminate the perturbation in dose measurement from compensator scattering. The treatment isocenter was set in the middle of SOBP where the dose output (cGy / MU) was measured. Measurements were performed with same geometry and beam parameters. A standard 10 cm × 10 cm square aperture and a 0.6 cc cylindrical ionization chamber were used for all measurements, except fields with radius less than 2 cm. Special measurements for small fields were taken with the corresponding apertures in place and a small cylindrical ionization chamber due to its small sensitive volume. In addition, a MU prediction model based on the work by Kooy et al. (11) was developed as a secondary check for measurements. (12) The input parameters of the MU prediction model were range and modulation width. The model predicted the output by fitting all measured data. The accuracy of the prediction is expected to increase with the accumulation of additional measurements.
An inhomogeneous phantom with half-bone-half-water interface was used to evaluate the TPS regarding heterogeneous media. The physical thickness of the bone slab was 2 cm and the relative stopping power ratio was 1.63. Profile was acquired on the transverse plane at the middle of SOBP in water with a pinpoint chamber (CC04, IBA Dosimetry). A proton field with range 32 cm and modulation width 10 cm was used for the test. It presented the worst-case scenario as the measurement plane was 27 cm from the bone-water interface, maximizing the width and amplitude of the heterogeneity in dose distribution.
The Imaging and Radiation Oncology Core (IROC) performed further validation via an onsite audit as well as off-site dosimetry verification by anthropomorphic phantoms irradiation. The on-site visit reviewed all the aspects of our practice, including CT calibration, treatment planning, delivery, QA, dosimetry, and workflow. In addition, dosimetric validation was performed on anthropomorphic phantoms representing four clinical sites including craniospinal, prostate, brain, and lung. (13,14) Irradiation of the IROC phantoms served both as an admission to NCI-funded cooperative group clinical trials and a stringent test of our institute's capability of planning and delivering treatments with heterogeneity correctly accounted for.
D. Imaging guidance and 6D robotic couch
The Mevion S250 proton therapy unit is equipped with a 6D robotic couch and image guidance system (Verity). Setup images were provided by a pair of orthogonal planar X-ray imagers with sources embedded in lateral wall and floor. The patient alignment process allows corrections in six degrees of freedom: translation {x,y,z}, pitch, roll, and yaw {θ, ϕ, ψ}. Geometric accuracy of couch corrections and imaging vs. radiation isocenter coincidence were quantified before clinical implementation. In addition, a gantry star shot and a couch star shot were performed to evaluate the isocentricity of gantry and couch rotation around radiation isocenter.
A commercial phantom with 16 2 mm tungsten BBs was mounted rigidly on the couch and imaged with CT. Seventeen rigid translations/rotations of known magnitude were digitally applied to the original CT image using commercial software, initially validated with Varian's OBI system. For each altered image, phantom was mounted on robotic couch in original position, then Verity 2D:2D match -posterior-anterior (PA), and left lateral (LLAT) -was performed using DRRs from the altered images. Corrections were recorded and applied. The phantom was imaged a second time and residual corrections recorded. Physical measurements verified that applied couch corrections coincided with both physical couch shifts/rotations and known CT image translations/rotations. Additionally, imaging vs. radiation isocenter coincidence was quantified over couch treatment angles (± 90° from the setup position) using radiochromic film and an image-guided couch star shot. The PA and LLAT kV radiographs were taken before each beam was delivered to verify imaging/radiation isocentricity.
A. CT calibration
Four operating proton therapy facilities participated in this study. As demonstrated in Fig. 2(a), reproduced CT calibration curves agreed well in general in the soft-tissue region with maximum deviation of 1.1% from the average, but deviated more significantly in the bone region with variation up to 2.8%. Range uncertainty from the deviation was determined to be 0.7% ± 0.2% in our lung cases, and 0.9% ± 1.2% in brain cases.
Our CT calibration curve is plotted against the one predicted by IROC from their site visit six months after the first patient in Fig. 2(b). Maximum deviation was measured to be only1.2%.
Calibration curves generated with annual QA data demonstrated tight variations from 2008 to 2013, as shown in Fig. 2(c). The absolute variation (maximum -minimum) in relative stopping power ratios was measured to be 0.019 in the span of six years in a hard bone insert with physical density of 1.25 g / cm 3 , or 1.27% with respect to the mean value. The measured pristine Bragg peaks of all 24 options are plotted in Fig. 4. The curves were corrected for the beam divergence and independent of any specific beamline parameters. All overlapped for comparison. Widths of the pristine Bragg peaks varied between 3.9 mm and 8.0 mm, and distal penumbra varied between 5.9 mm and 6.7 mm as plotted for all options in Fig. 4(b) and 4(c). Fitting curves to demonstrate the trend with options in large, deep, and small bands were plotted as well.
C. Dosimetric commissioning
Examples of SOBP measurements are plotted against TPS modeling for option 1, 13, and 18 in Fig. 5, each of which possesses the largest range for the large, deep, and small bands. Prediction from modeling agreed very well with average 1D gamma rate 95.7%, ranging between 91.8% and 100%. Slight discrepancies on the order of 2% were observed near the distal falloff due to the soft distal shoulder systematically presented for all options. Discrepancies in SOBP between measurements and TPS modeling are summarized in Table 1.
Examples of profile measurements are shown in Fig. 6. All measurements were taken at the nominal SAD. Red lines were crossline profiles for a proton beam with range 15 cm and modulation width 10 cm under a nondivergent block. Collimate scatter was observed on both shoulders at shallow depths. The magnitude of the collimator scatter tapered off with depth. After changing to divergent apertures with inner surface in perfect alignment with the beam divergence, the measured profiles (green lines) agreed very well with TPS modeling (black dots). The passing rate of Gamma analysis increased as well, as demonstrated in Fig. 6(d). Divergent apertures are now used routinely in our clinic.
Measurements on the dependence of output on gantry angles are plotted in Fig. 7. The maximum variation was measured 0.7% in large options, 1.1% in deep options, and 2.0% in small options. As the maximum variation was less than 1% in large options, MU was corrected for gantry angle only for fields involving small and deep options in our clinic. Discrepancies of the output predicted by our MU prediction model and measured with the FC65 chamber in a water tank for the first 400 fields are plotted in Fig. 8. The maximum discrepancy was measured -2.60%. The mean discrepancy was 0.53%. Figure 9(a) shows the phantom used to evaluate dose distribution under heterogeneous conditions. The measurement was taken at the nominal SAD with a depth of 27 cm in the water tank. Measured crossline profile under the bone-air interface is plotted against prediction from TPS in Fig. 9(b). Maximum discrepancy was measured to be 4.7% right under the bone-air interface. 1D gamma rate (3%/1 mm) for this measurement was 94.6%.
Once we hit the six-month milestone for treating patients and had treated at least three different disease sites, the IROC Houston group performed a two-day review of our system including independent measurement of absolute dose, profiles and percent depth dose, and CT calibration curves along with imaging verification accuracy. The output measured by TLDs was within 1% of our institution's designated output. The beam parameters including range, modulation width, flatness, and symmetry were all within the tolerance. The site visit revealed no issues.
In addition, four IROC phantoms have been irradiated and deemed to pass for spine, brain, prostate, and lung. The phantom end-to-end testing utilized the same personnel (physicists, dosimetrists, and therapists) as for any patient to conduct simulation, contouring, treatment planning, plan review, MU measurement, QA, and delivery. The results and criteria are summarized in Table 2; all deemed pass as assessed by TLD and film measurement.
The directionality of all translations and rotations were qualitatively verified. Figure 11 shows the KV images taken for the couch star shot. The image vs. radiation isocenter coincidence was < 1 mm and radiation isocenter precision was < 1 mm over the 180° of couch motion, as indicated by film analysis. These tests are conducted monthly.
E. Initial clinical experience
The majority of treatments (54/100) in our proton center involved brain and central nervous system (CNS) out of the first 100 patients who had completed their treatments by December 2014. Among the 54 patients, 22 were pediatric patients, of whom 8 underwent craniospinal irradiation (CSI) under anesthesia. Other sites that had been treated were lung (26), prostate (6), pelvis (4), head and neck (4), esophagus (4), and GI/liver (2). We currently treat 20-24 patients per day and up-time for most months has been better than 95%. Fig. 11. Dedicated star shot phantom kV radiographs (left) and resulting radiochromic couch star shot used for calculating radiation isocenter precision (< 1 mm) and distance between imaging and radiation isocenters (< 1 mm).
IV. DISCUSSION
The commissioning process and initial clinical experience were summarized in this study. With the lack of an energy selection system, the Mevion S250 system utilizes a coarse absorber in 5 mm step and a fine absorber in 1 mm step to degrade protons to designed energy all the way down from 250 MeV. This design simplified the system and reduced the cost while offered some unique features: (i) all proton fields, regardless of range, modulation width and option, were measured with similar distal penumbra of 6.3 mm ± 0.3 mm, and (ii) the ratio of peak to entrance dose is higher in Mevion S250 than other systems due to the inelastic secondary particles generated in the beamline. An outlier was observed in the derived VSAD of option 24 which didn't follow the fitted trend. Option 24 has the least range for all small options. We believe that this outlying data point was caused by the lack of lead for scatter compensation in the range modulation wheel exclusively used by this option. This range modulation wheel is the only one without lead in all 14 wheels.
Profiles were taken for a beam with range 6.9 cm and full modulation width at various depths to verify our modeling of VSAD for option 24. The discrepancy in field size between measurements and TPS prediction was less than 0.5 mm, predominantly from measurement noise. No systematic deviation was observed.
V. CONCLUSIONS
The Mevion S250 has been fully commissioned and is in clinical operation in the S. Lee Kling Proton Therapy Center. Its characteristics as a compact single-room unit are well suited for our requirements on space and budget. The KV imaging is tightly integrated with a 6D robotic couch. Some unique features that come with the design of the system, such as the output dependency on gantry angle and lack of energy selection system, have been investigated and incorporated into our MU model. A variety of sites have been treated, including brain and spine tumors, lung, and other tumor sites. We passed the four IROC credentialing phantoms, complementing our phantom based end-to-end testing. Clinically, the system operates well and has provided an excellent system for the treatment of diseases with protons. | 6,186.6 | 2016-03-01T00:00:00.000 | [
"Engineering",
"Medicine",
"Physics"
] |
Stable spike clusters for the one-dimensional Gierer–Meinhardt system
We consider the Gierer–Meinhardt system with precursor inhomogeneity and two small diffusivities in an interval $$\begin{equation*}
\left\{
\begin{array}{ll}
A_t=\epsilon^2 A''- \mu(x) A+\frac{A^2}{H}, &x\in(-1, 1),\,t>0,\\[3mm]
\tau H_t=D H'' -H+ A^2, & x\in (-1, 1),\,t>0,\\[3mm]
A' (-1)= A' (1)= H' (-1) = H' (1) =0,
\end{array}
\right.
\end{equation*}$$ $$\begin{equation*}\mbox{where } \quad 0<\epsilon \ll\sqrt{D}\ll 1, \quad
\end{equation*}$$ $$\begin{equation*}
\tau\geq 0
\mbox{ and
$\tau$ is independent of $\epsilon$.
}
\end{equation*}$$ A spike cluster is the combination of several spikes which all approach the same point in the singular limit. We rigorously prove the existence of a steady-state spike cluster consisting of N spikes near a non-degenerate local minimum point t 0 of the smooth positive inhomogeneity μ(x), i.e. we assume that μ′(t 0) = 0, μ″(t 0) > 0 and we have μ(t 0) > 0. Here, N is an arbitrary positive integer. Further, we show that this solution is linearly stable. We explicitly compute all eigenvalues, both large (of order O(1)) and small (of order o(1)). The main features of studying the Gierer–Meinhardt system in this setting are as follows: (i) it is biologically relevant since it models a hierarchical process (pattern formation of small-scale structures induced by a pre-existing large-scale inhomogeneity); (ii) it contains three different spatial scales two of which are small: the O(1) scale of the precursor inhomogeneity μ(x), the $O(\sqrt{D})$ scale of the inhibitor diffusivity and the O(ε) scale of the activator diffusivity; (iii) the expressions can be made explicit and often have a particularly simple form.
Introduction
In his pioneering work [31] in 1952, Turing studied how pattern formation could start from an unpatterned state. He explained the onset of pattern formation by the presence τ > 0 and τ is independent of ε.
In the standard Gierer-Meinhardt system without precursor, it is assumed that μ(x) ≡ 1.
Precursor gradients in reaction-diffusion systems have been investigated in earlier work. The original Gierer-Meinhardt system [11,16,18] has been introduced with precursor gradients. These precursors were proposed to model the localization of the head structure in the coelenterate Hydra. Gradients have also been used in the Brusselator model to restrict pattern formation to some fraction of the spatial domain [14]. In that example, the gradient carries the system in and out of the pattern-forming part of the parameter range (across the Turing bifurcation), thus effectively confining the domain where peak formation can occur.
In this paper, we study the Gierer-Meinhardt system with precursor and prove the existence and stability of a cluster, which consists of N spikes approaching the same limiting point.
More precisely, we prove the existence of a steady-state spike cluster consisting of N spikes near a non-degenerate local minimum point t 0 of the positive inhomogeneity μ(x) ∈ C 3 (Ω), i.e. we assume that μ (t 0 ) = 0, μ (t 0 ) > 0 and we have μ(t 0 ) > 0. Further, we show that this solution is linearly stable.
We explicitly compute all eigenvalues, both large (of order O (1)) and small (of order o (1)). The main features of studying the Gierer-Meinhardt system in this setting are as follows: (i) It is biologically relevant since it models a hierarchical process (pattern formation of small-scale structures induced by a pre-existing inhomogeneity).
(ii) It is important to note that this system contains three different spatial scales two of which are small (i.e. o (1) To have a non-trivial spike cluster, we assume throughout the paper that Before stating our main results, let us review some previous results on pattern formation for the Gierer-Meinhardt system (1.1) , in particular concerning spiky patterns.
(1) I. Takagi [30] proved the existence of N-spike steady-state solutions of (1.1) in an interval for homogeneous coefficients (i.e. μ(x) = 1) in the regime ε 1 and D 1, where N is an arbitrary positive integer. For these solutions, the spikes are identical copies of each other and their maxima are located at The proof in [30] is based on symmetry and the implicit function theorem.
(2) In [15] (using matched asymptotic expansions) and [43] (based on rigorous proofs), the following stability result has been shown: For the N-spike steady-state solution derived in item 1 and 0 6 τ < τ 0 (N), where τ 0 (N) > 0 is independent of ε, there are numbers D 1 > D 2 > · · · > D N > · · · (which have been computed explicitly) such that the N-spike steady state is stable for for D < D N and unstable for D > D N .
(3) In [15] (using matched asymptotic expansions) and [33] (based on rigorous analysis), the following existence and stability results have been shown: For a certain parameter range of D, the Gierer-Meinhardt system (1.1) with μ(x) = 1 has asymmetric N-spike steady-state solutions, which consist of exact copies of precisely two different spikes with distinct amplitudes. They can be considered as bifurcating solutions from those in item 1 such that the amplitudes start to differ at the bifurcation point (saddle-node bifuraction). The stability of these asymmetric N-peaked solutions has been studied in [33].
(4) In [45], the existence and stability of N-peaked steady states for the Gierer-Meinhardt system with precursor inhomogeneity has been shown. These spikes have different amplitudes. In particular, the results imply that precursor inhomogeneities can induce instabilities. Single-spike solutions for the Gierer-Meinhardt system with precursor including spike motion have been studied in [32].
(5) In [42], the existence of symmetric and asymmetric multiple spike clusters in an interval has been shown.
Compared to each of the items listed above, the setting and results in our paper have marked differences. We now consider two small parameters, D and ε √ D which results in new types of behaviour. The leading-order asymptotic expression of the large and small eigenvalues depend on the index of the eigenvalue quadratically, whereas in items 1 and 2, this relation is oscillatory (involving trigonometric functions). In our study, the spikes in leading order have equal amplitudes and uniform spacing, although there is precursor inhomogeneity in the system, in contrast to item 3. The amplitudes, positions and eigenvalues in our study can be characterized explicitly and have a simpler form than in item 4. We can also prove the stability of clusters not merely their existence as in item 5. In particular, we show here that the clusters may be stable, whereas in item 5 they are expected to be unstable. In the shadow system case (D = ∞), the existence of single-or N-peaked solutions has been established in [12,13,21,22] and other papers. It is interesting to remark that symmetric and asymmetric patterns can also be obtained for the Gierer-Meinhardt system on the real line, see [7,8]. We refer to [23] for the singular limit eigenvalue problem (SLEP) approch for the existence and stability of multi-layered solutions for reaction-diffusion systems. For two-dimensional domains, the existence and stability of multi-peaked steady states has been proved in [38][39][40] and results similar to items 1 and 2 have been derived. Hopf bifurcation has been established in [6,34,35,40]. The repulsive dynamics of multiple spikes has been studied in [9].
Another study with three different spatial scales, two of which are small, considers a consumer chain model allowing for a novel type of spiky clustered pattern which is stable for certain parameters [46].
The model in our paper shows some similarity to variational models for material microstructure [1,20,48]. In both models, the solutions have two small scales. However, in our case, we have two parameters to control each of them independently, whereas in the microstructure case they are expressions of different orders depending on the same small parameter and so they are related to one another.
Results on the existence and stability of multi-spike steady states have been reviewed and put in a general context in [47].
We plan to continue the investigation of stable clusters in future research. In particular, we are interested in two-dimensional patterns. Whereas in one space dimension the spikes in the cluster are aligned, in two dimensions we expect a rather rich geometric picture of possible spike locations. This paper has the following structure: In Section 2, we state our main results on existence and stability. Then, we show some numerical simulations to illustrate the main results. We also study the dynamics of pattern formation, even outside the regime covered by the results. Next, we present four highlights of the results and their proofs. Finally, we sketch the main steps of the proofs. In Section 3, we introduce some preliminaries. In Sections 4-7, we prove the existence of steady-state spike clusters: In Section 4, we introduce suitable approximate solutions, in Section 5, we compute their error, in Section 6, we use the Liapunov-Schmidt method to reduce the existence of solutions of (1.2) to a finite-dimensional problem, in Section 7, we solve this finite-dimensional reduced problem. In Sections 8, 9 and Appendix B, we prove the stability of these steady-state spike clusters: In Section 8, we study the large eigenvalues of the linearized operator and show that it has diagonal form. We give a complete description of their asymptotic behaviour which is stated in Lemma 15. In Section 9, we characterize the small eigenvalues of the linearized operator and show that they all have negative real part. This includes deriving the eigenvalues of a matrix which is needed to compute the small eigenvalues explicitly. We give a complete description of their asymptotic behaviour in leading order which can be found in Lemma 16. Our approach here is to interpret the main matrix as the finitedifference approximation of a suitable ordinary differential equation, compute the solution of this approximation explicitly and get the eigenvectors by taking its values at uniformly spaced points. In Section 10, we conclude with a discussion of our results with respect to the bridging of length scales and the hierarchy of multi-stage biological processes. In Appendix A, we state a few results on NLEPs which are needed throughout the paper. In Appendix B, we perform the technical analysis needed to derive the small eigenvalues.
Main results on existence and stability
In this section, we state our main results on existence and stability of solutions and present four highlights of our approach and sketch the proofs of the main results.
Let t 0 ∈ (−1, 1) and set compare (4.30). We set Our first result is about the existence of an N-spike cluster solution near a non-degenerate local minimum point of the precursor.
Next, we state our second result which concerns the stability of the N-spike cluster steady states given in Theorem 1.
Theorem 2 (Stability of an N-spike cluster) For ε √ D 1, let (A ε , H ε ) be an N-spike cluster steady state given in Theorem 1. Then, there exists τ 0 > 0 independent of ε and √ D such that the N-spike cluster steady state (A ε , H ε ) is linearly stable for all 0 6 τ < τ 0 . Figure 1. Clustered spiky steady states of (1.1) for ε 2 = 0.00001, D = 0.001, μ(x) = 6 + 400x 2 and μ(x) = 6 + 200x 2 , respectively. Shown are a six-spike cluster and an eight-spike cluster. In both cases, the activator a is displayed in the left graph and the inhibitor h in the right graph. Now by performing some numerical simulations, we study patterns for the Gierer-Meinhardt system with precursor gradient given in (1.1) systematically in various situations.
Throughout all these simulations, we choose τ = 0.1, ε 2 = 0.00001 and take varying values for D ranging between a few times ε 2 and 1.
Further, we vary the strength of the precursor gradient and observe two distinct types of behaviour: For strong precursor gradient, the spikes assemble as a cluster near the global minimum point of the precursor, for weak precursor gradient, the spikes are distributed over the whole interval.
We will observe a rather rich dynamical behaviour which by far exceeds the immediate vicinity of the spike cluster which will be analysed in detail in this paper.
First, we show the results of computations of spike cluster steady states of (1.1). These have been obtained as long-time limits of (1.1) and are numerically stable.
In Figure 1, we note that the amplitudes of the different spikes of a cluster in the activator component are very close to each other.
On the other hand, the inhibitor values differ substantially at different spike locations and are highest near the centre of the cluster. This stands in contrast to the precursor μ(x) which attains its global minimum at the centre of the cluster. The combination of these two effects leads to almost equal spike amplitudes.
Both the activator and inhibitor peaks have almost equal distance. Now, we show the initial condition which has been used in all simulations of clusters and multiple spikes.
Next, we display the dynamics of getting a cluster from the initial condition given in Figure 2.
In Figure 3, for t = 0, we start with very fine oscillations of the activator a. Then, for t = 0.70, we reach a pattern which is very close to zero except in the centre of the interval where fine oscillations still prevail. Starting from t = 0.80, we see eight spikes which generally increase in amplitude and whose positions are mainly fixed. Looking more closely, we can also see that the amplitudes show some oscillatory behaviour, first overshooting the final amplitude of around 0.57 and then oscillating around the final amplitude and approaching it.
Next, we consider a different regime of a weaker precursor gradient (all other parameters remain unchanged from before). We observe spikes which are distributed over the whole interval. Now, we display the dynamics of getting a multiple spike pattern from the initial condition given in Figure 2.
In Figure 5, we start with very fine oscillations of the activator a. Immediately, the amplitudes of the oscillations change driven by the precursor gradient, but their period remains mainly unchanged. Then, spikes start to form, first at the boundary, then further and further inside the interval. Finally, the spikes in the centre of the domain form almost simultaneously and there is also some oscillatory behaviour between their amplitudes. It takes longest for the amplitudes of the two spikes in the centre to increase to their steady-state value. The positions of the spikes are mainly unchanged but the increase in amplitude for the two central spikes is coupled with them slowly moving apart and pushing the remaining spikes away from the centre.
In Figure 6, we show how the number of multiple spikes depends on the diffusion constants.
Finally, in Figure 7, we show the behaviour as the two small diffusion constants ε 2 and D come close to each other.
Remark 3
For the stability, we assume that 0 6 τ < τ 0 for some τ 0 > 0. Stability in the case where τ is large has been investigated in [35] for single spikes and those results on Hopf bifurcation are expected to carry over to the case of an N-spike cluster considered here. We remark that stability in the case of large τ for the shadow system has been studied in [6,34]. It turns out that this Hopf bifurcation leads to oscillations of the amplitudes. The Hopf bifurcation at τ = τ 0 , where τ 0 is of order 1, still arises even in the regime 0 ε √ D 1 considered in this paper. For the spike cluster, τ 0 is independent of N which can be shown by the analysis in Sections 8 and 9.
Remark 4
It is an interesting question to consider the maximum number of stable spikes in the regime 0 ε √ D 1 studied in this paper. We expect that there are stable multispike solutions if N < c √ D (in leading order of D). In the regime N ∼ c √ D , the spikes will be distributed over the whole interval and at N ∼ c √ D , we expect an overcrowding instability. This threshold would be an extension of the corresponding result for D = 1 (see [43]). The cluster solutions studied in this paper are only possible if N < c √ D log 1 D (in leading order of D) due to the distance between spikes. We have presented some numerical simulations of
Remark 5 If ε
√ D ∼ 1, the spike solutions will change into other types of patterns, e.g. spatial oscillations, which could again be stable. It is also possible that the patterns will vanish. We have presented some numerical examples to illustrate this behaviour (see Figure 7).
Remark 6
Previous studies of the precursor case can be found amongst others in [2,27,28]. We also refer to results for the dynamics of pulses in heterogeneous media [24,49]. This clustered spike pattern and multiple spikes distributed over the whole interval are more regular than multiple spike patterns observed for the Gierer-Meinhardt system with precursor and order 1 inhibitor diffusivity studied in [45]. In that case, multiple spike patterns have irregular distances and amplitudes since the precursor interacts with the geometry of the domain (represented by the Green's function) globally. On the other hand, for the regime covered in this paper, the precursor acts globally but the Green's function acts only locally between neighbouring spikes.
The proofs of both Theorems 1 and 2 will follow the approach in [47], where we reviewed and discussed many results on the existence and stability of multi-spike steady states. Before providing a sketch of the proofs of Theorems 1 and 2, we first state four highlights of the results and proofs in an informal manner. For each of these, we also indicate the novelty in comparison to previous work.
Highlight 1: For the proof in Theorem 1, we use Liapunov-Schmidt reduction to derive a reduced problem which will determine the positions of the spikes. This reduced problem in leading order is given by where c 1 , c 2 > 0 are constants which are independent of the small parameters and which implies (compare (7.6)). The distance between neighbouring spikes in the cluster is small (converging to zero) and in leading order, it is the same between any pair of neighbours. Further note that the distance is determined by the large diffusion constant D. In previous work for D = O(1), the spikes either have distance of order 1 or, in the case of clusters, they have small distance which is determined by the small diffusion constant ε. Thus, the spacing between neighbouring spikes follows a new asymptotic rule not encountered before.
The deeper reason for this behaviour is that the spike cluster we consider in this paper is formed by balancing the interactions of the inhibitor between neighbouring spikes and the inhomogeneity. On the other hand, in [42] and other previous work on spike clusters, they are established by balancing the interactions of the activator between neighbouring spikes and the inhomogeneity.
Highlight 2: The large eigenvalues with λ ε → λ 0 = 0 and their corresponding eigenfunctions where φ ε,i (y) is the restriction of the re-scaled eigenfunction of the activator A ε near t i , in (see (8.6)). This NLEP has diagonal form. Thus, with respect to large eigenvalues each spike only interacts with itself and not with the other spikes. It follows that the spike cluster is stable with the respect to large eigenvalues. In previous work for the case D = O(1), the stability problem of large eigenvalues for multiple spikes leads to a vectorial NLEP. It has to be studied by considering the spectrum of the matrix involved. Depending on the parameters, the multiple spikes can be stable or unstable. In the case of clusters for D = O(1), the stability has not been considered rigorously but we expect that the solution is unstable. Highlight 3: The small eigenvalues λ ε → 0 in leading order are given by the eigenvalues of the matrix where c 3 > 0 is independent of the small parameters, t 0 = (t 0 , . . . , t 0 ) and with δ N,0 = δ 1,N+1 = 0 (compare 9.13). The tridiagonal matrix M(t 0 ) derived here indicates that with respect to small eigenvalues each spike only interacts with its nearest neighbour and not with the other spikes. This is different from the case D = O(1): For multiple spikes, the matrix typically has strictly positive entries only (although it could have zero eigenvalues in the presence of symmetries), for clusters, a similar tridiagonal matrix which depends on ε has been studied to show the existence of spike clusters, see [42].
Highlight 4:
We determine all the eigenvalues of the matrix M(t 0 ) (see Highlight 3) explicitly by a method based on exactly finding a finite-difference approximation to a suitable ordinary differential equation.
These eigenvalues are given by Further, there is an eigenvalue of smaller size given by Lemma 16). This implies that the spike cluster is stable with respect to small eigenvalues. This result seems to be new in the literature. Therefore, we provided this proof.
Finally, we give a sketch of the proofs of Theorem 1 (existence) and Theorem 2 (stability).
We begin by stating the main steps in the proof of Theorem 1: (1) The existence problem (4.20) is reduced to a non-local one-dimensional problem is an integral operator solving (4.21).
(2) The ansatz for a spike cluster is (3) The amplitude identity is crucial to show that S ε [Â] is small in an appropriate norm. Therefore, is intuitively almost a solution.
(4) Using the estimate on S ε [Â], one can perform Liapunov-Schmidt reduction resulting in a small such that ⊥ C amounts to equating N L 2 -inner products to zero. This leads to a system of equations to leading order given by (2.11) in the spike points (t 1 , . . . , t N ), which can be shown to have a solution.
The main steps in the proof of Theorem 2 can be stated as follows: The eigenvalue problem is derived by linearizing the Gierer-Meinhardt system (1.1) around the clustered steady state A ε derived in Theorem 1. It is stated in (8.1) and for τ = 0, we havẽ where λ ε is some complex number and φ ε ∈ H 2 (Ω) satisfying Neumann boundary conditions. Further, for φ ∈ L 2 (Ω), is an integral operator solving (8.3). Then, we consider the eigenvalues in three cases separately as follows: (1) We first study large eigenvalues λ ε = O(1).
(i) Using (8.1) and the decay of the spikes, it is shown that in leading order an eigenfunction satisfies This means that the eigenfunction can be decomposed into parts which are located near each of the spikes.
This means that we get an NLEP which has diagonal form.
(iii) By a result in [43], it follows that the spike cluster is stable with respect to large eigenvalues.
(i) The ansatz for an eigenfunction is (ii) Then, we show that φ ⊥ = o (1). Note that εw ε,j is of exact order 1 and so it dominates the eigenfunction.
(iii) Taking the spatial derivative of the steady-state problem (4.20), we get an identity forw ε,j (up to small error terms) which we subtract from the eigenvalue problem.
(iv) We expand the terms in (8.1) around the spikes using the expansion of the Green's function G D and the inhomogeneity μ(x), both around the spike points, collecting the leading terms and giving rigorous estimates for the remainder.
Using the results in (iv), we derive the matrix M(t 0 ) stated in (9.13) which determines the stability properties caused by small eigenvalues.
(vi) We determine the eigenvalues and eigenvectors of the matrix M(t 0 ) explicitly by considering it as the finite-difference approximation of a suitable ordinary differential equation, compute the solution of this approximation explicitly and get the eigenvectors by taking its values at uniformly spaced points. See Lemma 13 and its proof.
It follows that the eigenvalue problem is stable with respect to small eigenvalues.
(3) Finally, we show that there are no eigenvalues λ ε with |λ ε | → ∞, if we assume that Re(λ ε ) > −c for some fixed c > 0. (If the latest condition is not satisfied the eigenfunction will be uniformly bounded in time and so does not cause any instability.) (i) We multiply (8.1) on both sides by φ ε . This leads to a quadratic form in φ ε .
Preliminaries: scaling property, Green's function and eigenvalue problems
In this section, we will provide some preliminaries which will be needed later for the existence and stability proofs. Let w be the ground state solution given in (2.1). By a simple scaling argument, the function is the unique solution of the problem z) (and the left-hand and right-hand limits are considered for x = z). We calculate to be the singular part of G D (x, z). Let the regular part H D of G D be defined by where d 0 = 1 − |t 0 | and η 0 > 0 is an arbitrary but fixed constant. Forξ 0 , we estimatê Let us denote ∂ ∂ti as ∇ ti . When i = j, we can define ∇ ti G(t i , t j ) in the classical way because the function is smooth.
Similarly, we define (3.9) For convenience and clarity, we introduce a re-scaled version of the Green's function which has a finite limit as D → 0. Thus, we set Throughout the paper, let C, c denote generic constants which may change from line to line.
Existence proof I: approximate solutions
Let t 0 ∈ (−1, 1) be a non-degenerate local minimum point of the precursor inhomogeneity, i.e. we assume that (2.7) is satisfied. In this section, we construct an approximation to a spike cluster solution to (1.2) which concentrates at t 0 .
The approximate cluster consists of the spikes μ i w √ μ i x−ti ε which are centred at the points t i and have the scaling factors and η > 0 is a constant which is small enough and will be chosen in Section 7 (see equation (7.9)). The reason for assuming (4.2) and (4.3) will become clear in Section 7 when we solve the reduced problem. We further denote and set To simplify our notation, for t ∈ Ω η and k = 1, . . . , N, we set where χ is a smooth cut-off function which satisfies the conditions and ε δ ε 20 Further, we compute, using (3.5), where d 0 = min(1 − t 0 , t 0 + 1), η 0 > 0 is an arbitrary but fixed constant (compare (3.7)). We haveĜ Generally, we havê For the derivatives, we estimate and analogous results hold for the mixed derivatives. By rescaling = ξ ε A,Ĥ = ξ ε H with ξ ε defined in (2.6), it follows that (1.2) is equivalent to the following system for the re-scaled functionsÂ,Ĥ: (4.20) From now on, we shall work with (4.20) and drop the hats. Next, we rewrite (4.20) as a single equation with a non-local term.
For a function A ∈ H 2 (−1, 1), we define T [A] to be the solution of For t ∈ Ω η , we define an approximate solution to (4.22) by the ansatz where t ∈ Ω η ,w k has been defined in (4.7) andξ k satisfies the amplitude identitŷ Intuitively, this ansatz is close to a solution of (4.22) since by the choice ofw k , the first equation is approximately satisfied and due to (4.24) the second equation holds approximately.
Next, we are now going to determine the amplitudesξ k to leading order so that (4.24) will be satisfied. From (4.21), we have We have O(ε 10 )).
In the section, we will compute its error.
Existence proof II -error of approximate solution
In this section, we compute the error terms caused by the approximate solutions in Section 4. We begin by considering the spatial dependence of the inhibitor near the spikes which is given by . . . , x N ) ∈ Ω η and t ∈ Ω η , where the non-local operator T [A] has been defined in (4.21) and the approximate solution has been introduced in (4.23).
To simplify our notation, we let where ξ ε has been introduced in (2.6) and J 1 is defined by the latest equality. For J 1 , we have by (4.7). We further compute using (4.14). Let x = t k + εz. For k = s, we have where w μs has been defined in (3.1) and is an even function, using (2.6) and (4.10). For k = s, we have using (4.10) and (2.6). Combining (5.5) and (5.7), we have
Remark 7
(i) The second line in (5.8) is an even function in the inner variable y which will drop out in many subsequent computations due to symmetry.
(ii) The third line in (5.8) is an odd function in the inner variable y. For t ∈ Ω η , we have Thus, the third line in (5.8) is of exact order O ε √ D log 1 D y .
Next, we compute and estimate the error terms of the Gierer-Meinhardt system (4.20) for the approximate solution w ε,t . We recall that a steady state for (4.20) is given by and T [A] is defined by (4.21), combined with Neumann boundary conditions A (−1) = A (1) = 0. We now compute the error term Using the amplitude identityξ k = T [A](t k ) and the equation (3.2) of the spike profile for a = μ s , we have Using μ (t 0 ) = 0, we get Now, we readily have the estimate (5.11)
Remark 8
The estimates derived in this section will be needed to conclude the existence proof using Liapunov-Schmidt reduction in Section 6. In particular, they will imply an explicit formula for the positions of the spikes in Section 7.
Existence proof III -Liapunov-Schmidt reduction
In this section, we study the linear operator defined bỹ where A = w ε,t and T [A] has been defined in (8.3).
We will prove results on its invertibility after suitable projections. This will have important implications on the existence of solutions of the non-linear problem including bounds in suitable norms. The proof uses the method of Liapunov-Schmidt reduction which was also considered in [10,12,13,25,26,37] and other works.
We define the approximate kernel and co-kernel of the operatorL ε,t , respectively, as follows: Recall that the vectorial linear operator L has been introduced in (A 2) as follows: By Lemma 21, we know that with X 0 = span{ dw dy } is invertible and possesses a bounded inverse. We also introduce the orthogonal projection π ⊥ ε,t : L 2 (Ω) → C ⊥ ε,t and study the operator L ε,t := π ⊥ ε,t •L ε,t , where orthogonality has been defined in L 2 (Ω) sense. We will show that L ε,t : K ⊥ ε,t → C ⊥ ε,t is invertible with a bounded inverse provided max( ε √ D , D) is small enough. In proving this, we will use the fact that this system is the limit of the operator L ε,t as max( ε √ D , D) → 0. This statement is contained in the following proposition.
Proposition 9
There exist positive constantsδ, λ such that for max ε √ D , D ∈ (0,δ) and all t ∈ Ω η , we have Further, the map We define φ ε,i , i = 1, 2, . . . , N and φ ε,N+1 as follows: At first (after rescaling), the functions φ ε,i are only defined on Ω ε . However, by a standard result, they can be extended to R such that their norm in H 2 (R) is bounded by a constant independent of ε, D and t for max( ε √ D , D) small enough. In the following, we will study this extension. For simplicity of notation, we keep the same notation for the extension. Since for i = 1, 2, . . . , N, each sequence {φ k i } := {φ ε k ,i } (k = 1, 2, . . .) is bounded in H 2 loc (R) it has a weak limit in H 2 loc (R), and therefore also a strong limit in L 2 loc (R) and L ∞ loc (R). Call these limits φ i . Then, passing to the limit in the equation (6.5) in each of the sets Ω j = {x ∈ Ω : x = t j + εy, |y| 6 δε 2ε } (we refer to Appendix A of [40] for further details), Therefore, we conclude φ N+1 = 0 and φ k N+1 H 2 (R) → 0 as k → ∞. This contradicts φ k H 2 (Ωε k ) = 1. To complete the proof of Proposition 9, we just need to show that the conjugate operator to L ε,t (denoted by L * ε,t ) is injective from K ⊥ ε,t to C ⊥ ε,t . The proof for L * ε,t follows along the same lines as for L ε,t and is omitted. Now, we are in the position to solve the equation , we can rewrite this as where where λ > 0 is independent of δ > 0, ε > 0, D > 0 and c(δ) → 0 as δ → 0. Similarly, we show that where c(δ) → 0 as δ → 0. If we choose then for max √ D , D small enough, the operator M ,t is a contraction on B ε,δ . The existence of a fixed point φ ,t now follows from the standard contraction mapping principle and φ ,t is a solution of (6.9).
We have thus proved Lemma 10 There exists δ > 0 such that for every pair of , t with 0 < < δ and t ∈ Ω η , there exists a unique φ ,t ∈ K ⊥ ,t satisfying S [w ε,t + φ ε,t ] ∈ C ε,t . Furthermore, we have the estimate Following the same decomposition into leading even and odd terms as discussed in Remark 7 (see also (5.10)) and applying the linear operator L ε,t to both of them, we get where φ ε,t,1 is an even function in the inner variable y which can be estimated as and φ ε,t,2 is an odd function in the inner variable y which can be estimated as Note that the even term is bigger than the odd term but it will drop in many subsequent calculations.
Existence proof IV: reduced problem
In this section, we solve the reduced problem. This completes the proof of our main existence result given by Theorem 1. By Lemma 10, for every t ∈ Ω η , there exists a unique solution φ ,t ∈ K ⊥ ,t such that To this end, let Then, the map W (t) is continuous in t ∈ Ω η and it remains to find a zero of the vector field W ε (t).
We compute We first compute the main term given by Let x = t s + εy. By (5.10), we can decompose S ε [w ε,t ] into odd and even functions. In leading order, only the odd components of S ε [w ε,t ] matter and we have In summary, we have Next, we estimate Integration by parts and and using the derivative of (3.2) gives This implies which follows from (6.11) and (6.12). Now, for given small ε > 0, we have to determine t ε ∈ Ω η such that W ε,s (t ε ) = 0 for s = 1, . . . , N.
We first consider the limiting case which only takes into account the leading terms and set By (7.6) and (7.7), we have t * ∈ Ω η if D is small enough. We need to find t ε ∈ Ω η such that W ε (t ε ) = 0. Setting e = (1, 1 . . . , 1) T , we have For t ∈ Ω ε , we expand Noting that for t * + τ ∈ Ω η , we have |v| 6 η √ D and α 6 η √ D log 1 D , we get This implies Setting τ = t − t * , we have to determine τ such that and so τ must be a fixed point of the mapping Using projections, we have We now determine when the mapping M ε,D maps from B 1 into B 1 for max( ε √ D , D) small enough. We need to have and we assume By Brouwer's fixed point theorem, the mapping M ε,D possesses a fixed point τ ε ∈ B 1 . Then, t ε = t * + τ ε ∈ Ω η is the desired solution which satisfies W ε (t ε ) = 0.
Thus, we have proved the following proposition.
Proposition 11
For max √ D , D small enough, there exist points t ∈ Ω η with t → t 0 such that W (t ε ) = 0.
Finally, we complete the proof of Theorem 1.
Stability proof I: large eigenvalues
In this section, we study the large eigenvalues which satisfy λ ε → λ 0 = 0 as max( ε Then, we need to analyse the eigenvalue problem where λ ε is some complex number, A ε = w ε,t ε +φ ε,t ε with t ε ∈ Ω η determined in the previous section, and for φ ∈ L 2 (Ω), the function T [A]φ is defined as the unique solution of First, we consider the special case τ = 0. Because we study the large eigenvalues, there exists some small c > 0 such that |λ ε | > c > 0 for max( √ D , D) small enough. We are looking for a condition under which Re(λ ε ) 6 c < 0 for all eigenvalues λ ε of (8.1), (8.2) if max( ε √ D , D) is small enough, where c is independent of ε and D. If Re(λ ε ) 6 −c, then λ ε is a stable large eigenvalue. Therefore, for the rest of this section, we assume that Re(λ ε ) > −c and study the stability properties of such eigenvalues.
Proof (1) of Theorem 12 follows by asymptotic analysis similar to Section 6.
To prove (2) of Theorem 12, we follow a compactness argument of Dancer [6]. The main idea of his approach is as follows: Let λ 0 = 0 be an eigenvalue of problem (8.6) with Re(λ 0 ) > 0.
Then, we can rewrite (8.1) as follows: where R ε (λ ε ) is the inverse of −Δ + (μ(x) + λ ε ) in H 2 (R) (which exists if Re(λ ε ) > − min x∈R μ(x) or Im(λ ε ) = 0) and the non-local operators have been defined in (4.21) and (8.3), respectively. The main property is that R ε (λ ε ) is a compact operator if max( ε √ D , D) is small enough. The rest of the argument follows in the same way as in [6].
We now study the stability of (8.1), (8.2) for large eigenvalues explicitly and prove Theorem 2.
When studying the case τ > 0, we have to deal with NLEPs as in (A 1), for which the coefficient γ of the non-local term is a function of τα. Let γ = γ(τα) be a complex function of τα. Let us suppose that γ(0) ∈ R, |γ(τα)| 6 C for Re(α) = α R > 0, τ > 0, (8.8) where C is a generic constant which is independent of τ and α. In our case, the following simple example of a function γ(τα) satisfying (8.8) is relevant: where √ 1 + τα denotes the principal branch of the square root function, compare [35]. Now, we have where γ(τα) satisfies (8.8). Then, there is a small number τ 0 > 0 such that for τ < τ 0 , (1) if γ(0) < 1, then there is a positive eigenvalue to (A 1); (2) if γ(0) > 1, then for any non-zero eigenvalue α of (8.9), we have Proof Lemma 13 follows from Theorem 19 by a regular perturbation argument. To make sure that the perturbation argument works, we have to show that if α R > −c (for some c > 0) and 0 6 τ < τ 0 (for some τ 0 > 0), where α = α R + √ −1α I , then |α| 6 C, where C is a generic constant which is independent of τ. In fact, multiplying (8.9) by the conjugatē φ of φ and integration by parts, we obtain that From the imaginary part of (8.10), we obtain that where C 1 is a positive constant (independent of τ). By assumption (8.8), |γ(τα)| 6 C and so |α I | 6 C. Taking the real part of (8.10) and noting that l.h.s. of (8.10) > C R |φ| 2 for some C ∈ R, we obtain that α R 6 C 2 , where C 2 is a positive constant (independent of τ > 0). Therefore, |α| is uniformly bounded and hence a regular perturbation argument gives the desired conclusion.
In conclusion, we have finished the study of the large eigenvalues (of order O(1)) and derived results on their stability properties.
It remains to study the small eigenvalues (of order o(1)) which will be done in the next section.
Stability proof II: characterization of small eigenvalues
Now, we study the eigenvalue problem (8.1), (8.2) with respect to small eigenvalues. Namely, we assume that λ ε → 0 as max ε √ D , D → 0. We will show that that the small eigenvalues are given by The matrix M(t 0 ) will be defined in (9.8) and given to leading order in (9.12). Before defining and computing the matrix, we have to make a few preparations.
where t ε = (t ε 1 , . . . , t ε N ) ∈ Ω η . After re-scaling, the eigenvalue problem (8.1), (8.2) becomes Here and in the rest of the proof for small eigenvalues, we set τ = 0. Since for small eigenvalues, we have τλ ε → 0 the proof and results extended to the case of a fixed constant τ > 0. Throughout this section, we denote By the implicit function theorem, there exists a (locally) unique solutionξ(t) = (ξ 1 (t), . . . ,ξ N (t)) of the equation Moreover,ξ(t) is C 1 for t ∈ Ω η . Note that we do not want to consider the solutionξ(t) = 0 since it does not correspond to a strictly positive solution.
We have the estimateŝ As a preparation, we first compute the derivatives ofξ(t). Now from (9.3), we calculate Here, F(t) is the vector field We compute Comparing with (7.5) and Proposition 11, we have . In addition, if M(t ε )) is positive definite, then we will show that all small eigenvalues have negative real part when 0 6 τ < τ 0 for some τ 0 > 0.
Next, we compute M(t) using (9.4). For |i − j| = 1, we compute in case and a similar result holds for This implies Therefore, using (7.6), (7.7) and the estimate (9.7), we have The matrix M(t ε ) will be the leading-order contribution to the small eigenvalues (compare Lemma 23 and the comments following it). Thus, we study the spectrum of the symmetric The corresponding eigenvectors are computed recursively from (9.17).
The matrix A has eigenvalue λ 1 = 0 with eigenvector v 1 = e. To compute the other eigenvalues and eigenvectors of A, we remark that this problem is equivalent to finding a suitable finite-difference approximationũ of the differential equation in the interval (0, 1) for uniform step-size h = 1 N . More precisely, we identify To determine the eigenvectors v i , we have to solve this finite-difference problem exactly. We assume that the solutions are given by polynomials of degree n (which will be shown later and n will be specified). Using Taylor expansion around x = x k−1/2 and the identities the finite-difference problem is equivalent to Substituting the ansatzũ a k x k into this equation, considering the coefficient of the power x k , k = 0, . . . , n, implies that (λ n − k(k + 1))a k + (k + 1) 2 a k+1 where for k = 0, we put (0 − 1)! = 1 in the second line of (9.17). For k = n, n = 0, 1, . . . , N − 1, this gives (λ n − n(n + 1))a n + (n + 1) 2 a n+1 = 0.
Case 2. n > N: Then, v n = 0 althoughũ ) 0. The resulting eigenfunctions for A are trivial and so in this case there are no new eigenpairs. Thus, we have found N eigenpairs with linearly independent eigenvectors.
Remark 17
The eigenvector v 0 with eigenvalue λ 0 = 0 corresponds to a rigid translation of all N spikes.
The leading eigenpair for mutual movement of spikes is (λ 1 , v 1 ).
The eigenvector for λ 1 = 2 can be computed as follows: The components of v 1,k are linearly increasing and have odd symmetry around the centre of the spike cluster which corresponds to k = N+1 2 or x = 1 2 .
Remark 18
The stability of the small eigenvalues follows from the results in [29] but the eigenvalues have not been determined explicitly.
The technical analysis for the small eigenvalues has been postponed to Appendix B.
Conclusion
We end this paper with a discussion of our results. We have considered a particular biological reaction-diffusion system with two small diffusivities, the Gierer-Meinhardt system with precursor. We have proved the existence and stability of cluster solutions which have three different length scales: a scale of order O(1) coming from the precursor inhomogeneity and two small scales which are of the same size as the square roots of the small diffusivities. In particular, the cluster solution can be stable for a suitable choice of parameter values. Such systems and their solutions play an important role in biological modelling to account for the bridging of length scales, e.g. between genetic, nuclear, intra-cellular, cellular and tissue levels. Our solutions incorporate and combine multiple scales in a robust and stable manner. A particular example of biological multi-scale patterns concerns the pattern formation of head (more precisely, hypostome), tentacles and foot in hydra. Meinhardt's model [17] correctly describes the following experimental observation: With tentacle-specific antibodies, Bode et al. [3] have shown that after head removal tentacle activation first reappears at the very tip of the gastric column. Then, this activation becomes shifted away from the tip to a new location, where the tentacles eventually appear. There are different lengthscales involved for this tentacle pattern: diameter of the gastric column, distance between tentacles and diameter of tentacles.
Let us describe the relation of this paper to [17] in more detail. The model in [17] can be explained in simplified form as follows: It consists of three activator-inhibitor systems, accounting for the formation of head, foot and tentacles, respectively. These subsystems are coupled by a joint source density. Further, there is direct interaction between the tentacle and head components to account for suppression of tentacle peaks at the site of head peaks. Altogether, the model is a seven-component reaction-diffusion system.
The main link to the results in this paper is to understand tentacle activation near a maximum of the source density. For this effect, the foot components can be neglected and so we are dealing with a five component system only. It is observed experimentally in [3] and computed numerically in [17] that near a sufficiently high local maximum of the source density tentacle peaks appear. Two different cases are studied: (i) if there is already a peak of head activator at this position, the tentacle peaks will appear at ring-shaped positions with the head activator peak in the centre of the ring, or (ii) if there is no previous head activator peak at this position, a tentacle activator peak will form, followed by a head activator peak which causes the tentacle peak to split into multiple peaks which are finally displaced to positions in ring-shaped positions with the activator peak in its centre.
For this effect to happen, it is assumed that the source density changes very slowly in time and acts on a rather long length scale. This corresponds to the precursor inhomogeneity in our model which is independent of time and has an O(1) length scale. The way the source density enters into the model is set up differently than in our paper resulting in a local maximum in [17] having a similar effect to a local minimum in our paper. We try to model some of these phenomena in a "minimal model" which consists of only two components corresponding to the two tentacle components in [17] coupled to a time-independent source density acting on an O(1) length scale.
The cluster pattern of spikes located in a sub-interval studied in our paper resembles the ring of tentacle peaks reduced to one dimension. (Work on the two-dimensional case is currently in progress.) It is interesting to note that our paper is successful in modelling isolated tentacles (without head formation) observed in some experimental situations as discussed in [17].
Comparing the two models in this paper and [17] leads to immediate possible extensions of the spike cluster analysis to models which are biologically more realistic by taking into account the following phenomena: (i) the effect of the head activator-inhibitor system could be added to show that the head activator peak pushes out the tentacle activator peaks; (ii) the tentacle activator peaks split easily due to saturation non-linearities in the tentacle subsystem (whereas in this paper, we do not consider the effect of saturation); (iii) add the foot activator-inhibitor system; (iv) replace the precursor inhomogeneity by a time-dependent source density which interacts with the other subsystems dynamically, e.g. it is enhanced by head activator, suppressed by foot activator, diffuses and possibly has its own predetermined inhomogeneity.
There are links of the model in [17] to other fields in biology such as the periodic spacing of secondary structures around a primary organizing region which is observed in the arrangement of leaves and flower elements in plants around the primary meristem [5].
The molecular basis underlying the model in [17] has recently been confirmed experimentally: After treatment of hydra with Alsterpaullone (which stabilizes β-catenin and thus increases the source density), it has been found that tentacle formation occurs over the whole body column [4]. Numerical computations have confirmed this behaviour [19]. This is in agreement with the pattern of multiple spikes covering the whole interval computed in our paper (see .
Systems of the type considered in this paper are a key to understanding the hierarchy of multi-stage biological processes such as in signalling pathways, where typically first largescale structures appear which induce patterns on successively smaller scales. The precursor can represent previous information from an earlier stage of development leading to the formation of fine structure at the present time. The multi-spike cluster in this paper is a typical small-scale pattern which is established near a pre-existing large-scale precursor inhomogeneity. where where u ∈ H 2 (R). Then, the conjugate operator of L under the scalar product in L 2 (R) is given by Then, we have the following result. To prove (A 6), we proceed in a similar way for L * . The lth equation of (A 4) is given as follows: Multiplying (A 9) by w and integrating, we obtain R w 2 Ψ l dy = 0.
Proof This result follows from the Fredholm Alternative and Lemma 20.
Finally, we study the eigenvalue problem for L: We have | 12,301.8 | 2016-11-08T00:00:00.000 | [
"Mathematics",
"Physics"
] |
Covert Channels in One-Time Passwords Based on Hash Chains
We present a covert channel between two network devices where one authenticates itself with Lamport's one-time passwords based on a cryptographic hash function. Our channel enables plausible deniability. We also present countermeasures to detect the presence of such a covert channel, which are non-trivial because hash values are randomly looking binary strings, so that deviations are not likely to be detected.
INTRODUCTION
Covert channels (CCs) are unforeseen communication channels in a system design. While first CCs for local computers were described in the 1970's (cf. [7]), research of recent decades discovered a plethora of new and sophisticated CCs that aid the secret exchange of information between hosts, databases, network hosts, and IoT devices. Due to their stealthy and policy-breaking nature, CCs enable several actions related to cybercrime, such as the secret extraction of confidential information, barely detectable botnet command & control channels, and unobservable communication for cybercriminals. While legitimate use cases are imaginable as well, e.g. journalists using CCs for secure exchange of dissident-related information, criminal use seems foremost, so that presentation of new CCs also serves presentation of countermeasures.
We present the first CC that exploits cryptographic hash chains, which have become popular because some form (block chain) is Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s). EICC 2020, November 18, 2020, Rennes, France © 2020 Copyright held by the owner/author(s). ACM ISBN 978-1-4503-7599-3/20/11. . . $15.00 https://doi.org/10.1145/3424954.3424966 used in crypto currencies. Such CCs can be considered an example of plausible deniability: Alice communicates with Bob over the CC. Both can state that that every possible hash value is equally likely to occur. By modifying (alternating) bits of hash values, Alice and Bob can thus plausibly deny the existence of the CC. We also sketch possible countermeasures, which are non-trivial because hash values are randomly looking bit strings.
BACKGROUND
A hash function is a function ℎ : → that maps a possibly infinite universe, e.g. = {0, 1} ★ , onto a finite set of hash values, e.g. = {0, 1} [8]. A cryptographic hash function should have preimage resistance, i.e. from a hash value ∈ it is not possible (with reasonable effort) to compute a string ∈ such that ℎ( ) = , 2nd pre-image resistance, i.e. from a given with ℎ( ) = it is not possible to compute ′ ∈ , ′ ≠ such that ℎ( ′ ) = , and collision resistance, i.e. it is not possible to compute two strings , ′ ∈ , ′ ≠ such that ℎ( ) = ℎ( ′ ). As ⊂ , a hash function can be applied repeatedly, i.e. for a seed 0 = , we can compute +1 = ℎ( ) for = 0, 1, 2, . . . , − 1. This sequence is called a hash chain. While it is easy to compute a hash chain in forward direction, it cannot be computed in backward direction, starting from , because of the pre-image resistance property.
A one-time password (OTP) is a password only used once to authenticate an entity. As such, it is secure as long as it is secret and cannot be guessed prior to its use. Lamport [6] presented a scheme to generate OTPs from a hash function with pre-image resistance. Entity uses a random and secret seed to compute a hash chain 0 , . . . , , and transfers to entity via a secure channel. To authenticate, sends = −1 over an insecure channel to . checks if ℎ( ) = . If yes, the authentication is approved, and replaces by = −1 . If no, authentication has failed. For the next authentication, sends = −2 , and so on. Each password is only used once, and by the pre-image resistance, the next password cannot be guessed from the previous password. Also need not keep secret, as nothing can be inferred from this value. Haller's S/Key [5] and the TESLA protocol [9] use similar ideas.
A CC defines a parasitic communication channel nested inside a computing environment that allows at least two different legitimate processes (or nodes) to access a shared resource. The CC exploits the legitimate processes in a way that it allows the signaling of hidden information via the shared resource. The sender and receiver of a CC are called covert sender (CS) and covert receiver (CR), respectively. CS and CR are not necessarily identical with the overt sender and overt receiver (OS and OR) as CS and CR might act in an indirect manner or as man-in-the-middle. CCs inside computer networks follow a set of known hiding patterns [10] to transfer information, e.g. by placing secret data inside unused or reserved header bits of network packets, by modifying header fields with random values, or by modulating the timings between succeeding network packets.
Related Work: Plausible deniability for steganography was proposed by several authors, especially for filesystems (e.g. [2]). PRNGoutput was also applied for this purpose, e.g. in the context of video streams [4]. Abad [1] presents a CC in the check sum of IP packets, which however does not use a cryptographic hash function. Calhoun et al. [3] describe a CC in which OTPs are used as an element for generating random-data for a CC that exploits rate switching in 802.11b WiFi networks. In comparison, our approach allows an Internet-wide application that is not limited to wireless environments. Moreover does our approach represent an indirect CC based on hash chain exploitation. To the best of our knowledge, no other CC was published that exploits cryptographic hash chains.
COVERT CHANNELS IN HASH CHAINS
We present two variants of a CC that can be used in OTPs based on hash chains. We assume that each hash value is transmitted as part of network packets between and , so that a modification of the hash value can be hidden by re-computing the packet's check sum. In our scenario, CS is located close to (or on OS) and CR close to , i.e. both have at least indirect access to the communication between and .
Variant 1: The message that CS wants to send is broken into pieces of log 2 bits each (we assume to be a power of 2), i.e. represented as symbols over alphabet {0, 1, . . . , − 1}. To send a symbol , CS flips bit of the password that is going to be transmitted. CR, who knows the previous password +1 , intercepts the modified password ′ upon arrival and tests ′ with bit flipped, for = 0, 1, . . ., until a hash results in +1 . Then CR stores symbol and forwards the corrected password to . This is repeated until the complete message is transmitted (assuming that the number of pieces is less than , the length of the chain).
Variant 2: Here, CS only sends one bit with each transmitted password . To send a 1, bit 0 of the password is flipped. CR intercepts the possibly modified password ′ and checks if ℎ( ′ ) = +1 . If yes, then CR stores a 0, and forwards the password to . If no, then CR stores a 1, corrects the password by flipping bit 0, and forwards the corrected password to . This is repeated until the complete message is transmitted. Obviously, one is not restricted to always use bit 0. The bit that is possibly flipped can be decided depending on , synchronized timing events or other methods.
Both variants perform a manipulation of (pseudo-)random packet fields as described by the random value pattern [10]. Random value techniques have been implemented by several authors, rendering our proposed CC variants applicable in realistic environments.
Note that in both variants, CR needs no knowledge of stored by . In the first transmitted password −1 , no symbol is transmitted. Then CR knows −1 when intercepting and forwarding that password, and can refer to it later on, duplicating 's work.
Variant 1 can be detected more easily than variant 2 on the receiver side, as variant 1 on average encompasses /2 evaluations of ℎ, which is a notable amount of computation. This gives rise to intermediate forms where only ′ < bits can be used, i.e. the message is represented in symbols over alphabet {0, . . . , ′ − 1}. Parameter ′ will be chosen as large as possible, and as small as necessary to avoid detectability. In contrast, variant 1 provides higher steganographic bandwidth than variant 2.
On the sender side, both variants have very low effort and are thus hardly detectable. During transmission, an isolated packet will look innocent, as the one-time passwords look like random bitstrings, so that a possible bit flip is not likely to leave any trace or signature. Hence, detection during transmission seems non-trivial.
COUNTERMEASURES
Possible countermeasures against variant 1 on the receiver side have been described in the previous section. Countermeasures during transmission require that a warden has knowledge of two successive packets with passwords ′ and ′ −1 . In this case, the warden can check if ℎ( ′ −1 ) = ′ . If either password has been modified by CS, this equality will not hold because of the hash function's properties.
Another possible countermeasure would be that and modify the passwords themselves, in a way that is not foreseeable by CS or CR. When is about to send , it already has knowledge about the following password −1 . Let us assume that both and have knowledge of a hash function : {0, 1} → {0, 1}. If ( −1 ) = 0, then sends . If ( −1 ) = 1, then sends , i.e. with all bits inverted. If is a good hash function, then the second possibility will occur half of the time. If receives a password ′ , then it checks if either ℎ( ′ ) or ℎ( ′ ) equals +1 . In the latter case, stores ′ .
As only two instead of one password value will be accepted, this scheme is still secure. But life is now more complicated for CR. In variant 2, it cannot simply compare hash values, but must use all possible combinations of inverted and non-inverted passwords. More complicated schemes are possible. Even if they are not preventing this CC, they can restrict bandwidth and/or can increase detectability by forcing CR to perform more computation.
CONCLUSIONS
We have presented the first CC in hash chains, detailed different variants, and sketched non-trivial countermeasures against them. Instead of using network packets, the CCs could also use a local system to break a security policy if authenticates against over a shared medium that both, CS and CR, have access to under some timing constraints. Future work will comprise implementation and validation of the CC and the countermeasures. | 2,612.4 | 2020-11-18T00:00:00.000 | [
"Computer Science"
] |
MOLECULAR VARIABILITY OF OAT BASED ON GENE SPECIFIC MARKERS
Oat (Avena sativa L.) is a grass planted as a cereal crop. Cultivation of oat is increasing in the recent years because of its good nutrition value. The aim of our study was to analyze genetic variability of oat accessions based on SCoT markers. Eighteen primers were used to study polymorfism of 8 oat genotypes. All 18 primers produced polymorphic and reproducible data. Altogether 153 different fragments were amplified of which 67 were polymorphic with an average number of 3.72 polymorphic fragments per genotype. The number of polymorphic fragments ranged from one (SCoT9, SCoT62) to nine (SCoT40). The percentage of polymorphic bands ranged from 14.29% (SCoT9) to 60% (SCoT59) with an average of 41.62%. Genetic polymorphism was characterized based on diversity index (DI), probability of identity (PI) and polymorphic information content (PIC). The diversity index of the tested SCoT markers ranged from 0 (SCoT9, SCoT62) to 0.878 (SCoT40) with an average of 0.574. The polymorphic information content ranged from 0 (SCoT9, SCoT62) to 0.876 (SCoT40) with an average of 0.524. Dendrogram based on hierarchical cluster analysis using UPGMA algorithm grouped genotypes into two main clusters. Two genotypes, Taiko and Vok were genetically the closest. Results showed the utility of SCoT markers for estimation of genetic diversity of oat genotypes leading to genotype identification.
INTRODUCTION
Cereals belong to a group of key foods of plant production.Oat together with corn and barley is the most used for feed but for human nutrition is used only a little.Cultivated oats are hexaploid cereals belonging to the genus Avena L., which is found worldwide in almost all agricultural environments.Recently, oats have been receiving increasing interest as human food, mainly because the cereal could be suitable for consumptions by celiac patients (Gálová et.al., 2012).In the Nordic countries and Northern Europe it became a wellestablished crop both for food and feed.Oat belongs to alternative cereals which are used mainly as a supplement to traditional species of cereals (Daou and Zhang, 2012).
Recently, the studies of genetic diversity based mainly on the molecular analysis.Worldwide collections of oats were described by several types of dominant molecular markers, for example AFLP (Fu et al., 2003), RAPD (Baohong et al., 2003) and ISSR (Boczkowska and Tarczyk, 2013).With initiating a trend away from random DNA markers towards gene-targeted markers, a novel marker system called SCOT (Collard and Mackill, 2009) was developed based on the short conserved region flanking the ATG start codon in plant genes.SCOT markers are generally reproducible, and it is suggested that primer length and annealing temperature are not the sole factors determining reproducibility.They are dominant markers like RAPDs and could be used for genetic analysis, quantitative trait loci (QTL) mapping and bulk segregation analysis (Collard and Mackill, 2009).In principle, SCOT is similar to RAPD and ISSR because the same single primer is used as the forward and reverse primer (Collard and Mackill, 2009;Gupta et al. 1994).SCoT marker system has gained popularity for its superiority over other dominant DNA marker systems like RAPD and ISSR for higher polymorphism and better marker resolvability The aim of our study was to detect genetic variability among the set of 8 oat genotypes using 18 SCoT markers and to testify the usefulness of a used set of SCoT primers for the identification and differentiation of oat genotypes.
MATERIAL AND METHODOLOGY
Eight oat (Avena sativa L.) genotypes were used in the present study.Seeds of oat were obtained from the Gene Bank of the Slovak Republic of the Plant Production Research Center in Piešťany.Genomic DNA of rye cultivars was isolated from 100 mg freshly-collected leaf tissue according to GeneJETTM protocol (Fermentas, USA).The concentration and quality of DNA was checked up on 1.0% agarose gel coloured by ethidium bromide and detecting by comparing to λ-DNA with known concentration.SCoT analysis: For analysis 18 SCoT primers were chosen (Table 2) according to the literature (Collard a Mackill, 2009).Amplification of SCoT fragments was performed according to (Collard a Mackill, 2009) (Table 2.).Polymerase chain reaction (PCR) was performed in 15 μL mixture in a programmed thermocycler (Biometra, Germany).Amplified products were separated in 1% agarose gels in 1 × TBE buffer.The gels were stained with ethidium bromide and documented using gel documentation system UVP PhotoDoc-t®.Size of amplified fragments was determined by comparing with 100 bp standard lenght marker (Promega).
Data analysis: For the assessment of the polymorphism between castor genotypes and usability of SSR markers in their differentiation diversity index (DI) (Weir, 1990), the probability of identity (PI) (Paetkau et al., 1995) and polymorphic information content (PIC) (Weber, 1990) were used.The SCoT bands were scored as present (1) or absent (0), each of which was treated as an independent character regardless of its intensity.The binary data generated were used to estimate levels of polymorphism by dividing the polymorphic bands by the total number of scored bands and to prepare a dendrogram.A dendrogram based on hierarchical cluster analysis using the unweighted pair group method with arithmetic average (UPGMA) with the SPSS professional statistics version 17 software package was constructed.
RESULTS AND DISCUSSION
The development of molecular markers has opened up numerous possibilities for their application in plant breeding.For detecting polymorphisms new molecular marker system called SCoT (Collard a Mackill, 2009) was developed which tag coding sequences of the genome.SCoT marker system had initially been validated in the model species rice (Oryza sativa) (Collard and Mackill 2009).
For the molecular analysis of 8 oat genotypes 18 SCoT primers were used.PCR amplifications using 18 SCoT primers produced total 153 DNA fragments that could be scored in all genotypes.The selected primers amplified DNA fragments across the 8 genotypes studied with the number of amplified fragments varying from 4 (SCoT62) 3).The percentage of polymorphic bands ranged from 14.29% (SCoT9) to 60% (SCoT59) with an average of 41.62%.The polymorphic information content (PIC) values varied from 0 (SCoT9, SCoT62) to 0.876 (SCoT40) with an average of 0.524 and index diversity (DI) value ranged from 0 (SCoT9, SCoT62) to 0.878 (SCoT40) with an average of 0.574 (Tab.3).The most polymorphic SCoT40 marker is showed on Figure 2. A dendrogram was constructed from a genetic distance matrix based on profiles of the 18 SCoT primers using the unweighted pair-group method with the arithmetic average (UPGMA).According to analysis, the collection of 8 diverse accessions of oat was clustered into two main clusters (Figure 1).The first cluster contained unique genotype Azur coming from the Czech Republik.Second cluster contained 7 genotypes of oat which were further subdivided into two subclusters (2a, 2b).Subcluster 2a contained unique Austrian genotype Euro and rest of genotypes (6) were included in the subcluster 2b.Genetically the closest were two genotypes, Vok (coming from the Czech Republik) and Taiko (coming from Netherland).
Lower average polymorphism (21%) obtained by SCoT technique was detected by Kallamadi et al. (2015) who analysed molecular diversity of castor (Ricinus communis L.).Out of 36 SCoT primers tested, all primers produced amplification products but only 10 primers resulted in polymorphic fingerprint patterns.Out of a total of 108 bands, 23 (21%) were polymorphic with an average of 2.1 polymorphic bands per primer.The total number of bands per primer varied from 5 and 20 in the molecular size range of 100 -3000 bp.The PIC/DI varied from 0.06 for SCoT28 to 0.45 for SCoT12 with an average of 0.24.On the other side, higher polymorphism with SCoT primers has been reported in crops like peanut (Xiong et --------+---------+---------+---------+--------- Level of polymorphism in analysed oat genotypes was also determined by calculated polymorphic information content (PIC) (Table 3).Lower PIC values compare to our analysis (0. 2009), which is based on the short conserved region flanking the ATG translation start codon in plant genes.The technique is similar to RAPD or ISSR in that a single primer acts as the forward and the reverse primer, amplicons can be visualized by standard agarose gel electrophoresis, without the need for costly automated electrophoresis systems (Collard and Mackill, 2009).The higher primer lengths and subsequently higher annealing temperatures ensure higher reproducibility of SCoT markers, compared to RAPD markers (Rajesh et al., 2015).Gorji et al. (2011) presented that SCoTs markers were more informative and effective, followed by ISSRs and AFLP marker system in in fingerprinting of potato varieties.
CONCLUSION
The present work reported utilization of SCoT markers for the detection of genetic variability of oat genotypes.In summary, SCoT marker analysis was successfully developed to evaluate the genetic relationships among the genus of oat accessions originated from various regions.The hierarchical cluster analysis divided oat genotypes into 2 main clusters.SCoT markers are generated from the functional region of the genome; the genetic analyses using these markers would be more useful for crop improvement programs.Polymorphism revealed by SCoT technique was abundant and could be used for molecular genetics study of the oat accessions, providing high-valued information for the management of germplasm, improvement of the current breeding strategies, construction of linkage maps, conservation of the genetic resources of oat species and QTL mapping.
Table 1
List of analyzed genotypes of oat.
Table 2
List of used SCoT markers.
Table 3
Statistical characteristics of the SCoT markers used in oat.Dendrogram of 8 oat genotypes prepared based on 18 SCoT markers.
Poczai et al., 2013).
Functional markers developed from the transcribed region of the genome have the ability to reveal polymorphism, which might be directly related to gene function (Start codon targeted polymorphism (SCoT) is a simple and novel marker system first described by Collard and Mackill ( | 2,182.6 | 2017-05-16T00:00:00.000 | [
"Agricultural and Food Sciences",
"Biology"
] |
Fibrinogen-elongated γ Chain Inhibits Thrombin-induced Platelet Response, Hindering the Interaction with Different Receptors*
The expression of the elongated fibrinogen γ chain, termed γ′, derives from alternative splicing of mRNA and causes an insertion sequence of 20 amino acids. This insertion domain interacts with the anion-binding exosite (ABE)-II of thrombin. This study investigated whether and how γ′ chain binding to ABE-II affects thrombin interaction with its platelet receptors, i.e. glycoprotein Ibα (GpIbα), protease-activated receptor (PAR) 1, and PAR4. Both synthetic γ′ peptide and fibrinogen fragment D*, containing the elongated γ′ chain, inhibited thrombin-induced platelet aggregation up to 70%, with IC50 values of 42 ± 3.5 and 0.47 ± 0.03 μm, respectively. Solid-phase binding and spectrofluorimetric assays showed that both fragment D* and the synthetic γ′ peptide specifically bind to thrombin ABE-II and competitively inhibit the thrombin binding to GpIbα with a mean Ki ≈ 0.5 and ≈35 μm, respectively. Both these γ′ chain-containing ligands allosterically inhibited thrombin cleavage of a synthetic PAR1 peptide, of native PAR1 molecules on intact platelets, and of the synthetic chromogenic peptide d-Phe-pipecolyl-Arg-p-nitroanilide. PAR4 cleavage was unaffected. In summary, fibrinogen γ′ chain binds with high affinity to thrombin and inhibits with combined mechanisms the platelet response to thrombin. Thus, its variations in vivo may affect the hemostatic balance in arterial circulation.
that fibrin fibers containing ␥Ј chains are more resistant than ␥ chains to proteolysis by fibrinolytic enzymes (11), so that fibrin clots containing a more abundant amount of ␥Ј chains could be associated with higher thrombotic risk. Notwithstanding the decreased sensitivity to fibrinolytic enzymes, the influence of a reduced expression of ␥Ј/␥A on enhanced risk for venous thromboembolism was prevalently demonstrated in clinical studies, although the detailed mechanism was only partially unraveled.
At variance with venous thromboembolism, the significance of altered expression of ␥Ј chain on arterial thrombosis remains largely elusive (6,7,12,13). Platelets are major players of arterial thrombus formation, as also demonstrated by the clinical efficacy of anti-platelet agents in cardiovascular prevention. The fibrinogen ␥Ј chain, through its ability to bind to thrombin, might enhance the amount of clot-bound thrombin, known to be active in the presence of the heparin-antithrombin complex, and thus scarcely inactivated by traditional anticoagulants (heparins, indirect Factor Xa inhibitors) (3). Thus, clot-bound, active thrombin may represent a storage pool of the enzyme, facilitating arterial thrombus formation and growth.
In this study, we investigated the effect of the fibrinogen ␥Ј and also of its 20-amino acid-insertion peptide on the thrombin interaction with the platelet receptors glycoprotein (Gp) Ib␣ and protease-activated receptors 1 and 4 (PAR1 and PAR4), responsible for the thrombin-induced platelet activation. Fragment D was used as the best surrogate to selectively study the high affinity binding site for thrombin in ␥ chain in a conformation similar to that present in the native fibrinogen molecule and suitable for thrombin binding studies. This experimental approach was aimed at assessing whether ␥Ј chain can affect platelet activation by inhibiting competitively the interaction between the enzyme and GpIb␣ and by acting as an allosteric effector on PAR hydrolysis by thrombin. The obtained results may shed light on the possible role of fibrinogen ␥Ј chain on the thrombin-induced platelet activation and thus on possible implications on both anti-thrombotic and pro-thrombotic properties of fibrinogen in arterial circulation, where platelets play a central role in thrombo-hemorrhagic syndromes.
MATERIALS AND METHODS
Synthesis of Fibrinogen ␥Ј Peptide-The fibrinogen ␥Ј 408 -427 peptide ( 408 VRPEHPAETEYDSLYPEDDL 427 ) and its scrambled sequence peptide (PTAHDYVDEERPYLPEELSD) as a control were synthesized by the peptide synthesis facility of the Brain Research Center at the University of British Columbia (Vancouver, Canada). The tyrosine residues 418 and 422 were phosphorylated, as these residues were sulfated in natural ␥ chains (5). The RP-HPLC analysis showed that these peptides were 95% pure, with a molecular mass of 2580.3 Ϯ 0.2 atomic mass units, as determined by mass spectrometry.
Purification of Fibrinogen ␥A/␥A and ␥A/␥Ј D Fragments-Both ␥A/␥A (D) and ␥A/␥Ј (D*) fragments of fibrinogen were purified by a modified procedure, as reported previously (14). Human fibrinogen, free of plasminogen was purchased from Calbiochem. This preparation was chromatographed on a DEAE-Sepharose fast flow XK column connected to a fast protein liquid chromatography apparatus (GE Healthcare) to sep-arate the fibrinogen fraction rich in ␥Ј chains. The column was equilibrated with 5 mM sodium phosphate, 40 mM Tris, pH 8.50, at a flow rate of 1 ml/min. One gram of fibrinogen was adsorbed on the column. After the elimination of nonadsorbed proteins, fibrinogen fractions were eluted using a stepwise gradient and three different eluting buffer solutions as follows: 1) 30 mM sodium phosphate, 60 mM Tris, pH 7.60; 2) 50 mM sodium phosphate, 80 mM Tris, pH 6.80; 3) 500 mM sodium phosphate, 0.5 M Tris, pH 4.40. All these buffers contained 1 mg/ml aprotinin as protease inhibitor. Three major peaks were obtained, and the fibrinogen fraction containing one ␥A and one ␥Ј chain was eluted with the third buffer solution, whereas the fraction containing two ␥A chains was obtained with the first buffer. D fragments were prepared from plasmin digests of the first and the third peak obtained by DEAE chromatography and gel-filtered on DG-10 columns (Bio-Rad) equilibrated with 50 mM Tris-HCl, 0.15 M NaCl, 10 mM CaCl 2 , pH 8.50. Human plasmin (specific activity, 5 units/mg; Calbiochem) was added at a final concentration of 0.05 unit/ml (0.01 mg/mg fibrinogen) and incubated with the pooled and concentrated fibrinogen peaks 1 and 3 of the DEAE chromatography in the above buffer at 25°C for 120 min. The fibrinogen was pretreated for 15 min with 5 mM iodoacetamide to inhibit any minimal trace of contaminating Factor XIII before the addition of plasmin. The reaction was stopped by the addition of aprotinin (10 mg/ml final concentration). Fragment D (containing ␥A chains only) and D* (containing ␥Ј chain only) were purified from peaks 1 and 3, respectively, using a second DEAE column (Supelco, Sigma), 4.6 ϫ 25 mm, and a two-pump HPLC apparatus (Jasco Easton, MD), equipped with a spectrophotometric device (model 2075), and a spectrofluorometric detector (FP-2020, Jasco). The spectrophotometric detection of the eluted peaks was accomplished at 280 nm, whereas the fluorescence of the proteins was monitored by using ex ϭ 280 nm and em ϭ 340 nm. The developed gradient was 0 -0.5 M NaCl in 20 mM Tris-HCl, pH 8.0, in 60 min. The flow rate was 1 ml/min. Fragment D was eluted at about 0.2 M NaCl, whereas fragment D* was obtained at ϳ0.45 M NaCl. The concentration of fragment D and D* was calculated spectrophotometrically at 280 nm using an extinction coefficient of E 0.1% ϭ 2.0 cm 2 ⅐mg Ϫ1 , using the primary sequence of fragment D and the spectrophotometric method by Pace et al. (15). The fractions containing the fragment D and D* were pooled and concentrated, and their purity was checked by SDS-PAGE using 4 -12% gradient gels under both not reducing and reducing conditions. The identity of the ␥Ј chain was checked by immunoblotting of the bands obtained in SDS-PAGE of reduced fragment D*, using a mouse anti-human monoclonal antibody (clone 2.G2.H9) from Millipore S.p.A. (Milano, Italy), a secondary anti-mouse horseradish peroxidase-conjugated antibody, and an ECL TM Western blotting detection system (GE Healthcare). The ␥A chains obtained from reduced fragment D did not react with the monoclonal antibody 2.G2.H9 (data not shown).
Thrombin-Fragment D* Interaction-Human ␣-thrombin was purified and characterized as reported previously (16). Binding of thrombin to purified fibrinogen fragment D* was studied by a solid-phase binding assay, immobilizing fragment D* (5 g/ml) on microtiter plates (96-well; Nunc-Immuno
Fibrinogen ␥ Chain and Thrombin-Platelet Interaction
Maxisorp Nunc), overnight at 4°C in 50 mM bicarbonate buffer, pH 9.60. The plate surface was blocked at 37°C for 4 h with 250 l/well of a buffer solution containing 1 mg/ml bovine serum albumin, 50 mM Tris-HCl, pH 7.5. After aspiration of the blocking solution, plates were dried at room temperature and stored over desiccant at 4°C. Use of the anti ␥Ј chain monoclonal antibody 2.G2.H9 conjugated to Alexa Fluor 488 (Invitrogen) allowed us to obtain a quantitative estimate of the amount of immobilized fragment D*. Under the above conditions, the amount of immobilized fragment D* was equal to about 10 ng/well (about 0.12 pmol/well). This estimate was based on the use of serial dilutions of a reference solution of the 2.G2.H9 monoclonal antibody, whose Alexa 488 fluorescence was measured using ex ϭ 494 nm and em ϭ 520 nm. Thus, at maximum saturation using 100 l of the buffer solution, about 1 nM thrombin could be bound by immobilized fragment D*. Control experiments were also performed using fragment D instead of fragment D* at the same concentration.
Thrombin (78 nM to 5 M) was incubated for 30 min in the absence and presence of the C-terminal domain 45-57 of hemadin, a specific ligand for ABE-II of thrombin (17). The C-terminal peptide of hemadin had the sequence NH 2 -SEFEEFEIDEEEK-OH and was synthesized by the solid-phase Fmoc (N-(9-fluorenyl)methoxycarbonyl) method (18) on a p-alkoxybenzyl ester polystyrene resin, using a method detailed previously (19) The chemical identity of the purified material was established by high resolution mass spectrometry in positive ion mode on a Mariner electrospray ionization-time-offlight instrument from PerSeptive Biosystems (Foster City, CA), which gave mass values in agreement with the expected amino acid composition within 10 ppm mass accuracy. The concentration of the peptide was determined by UV absorption at 257 nm on either a double-beam l-2 (PerkinElmer Life Sciences) or a Varian Cary 2200 spectrophotometer (Assoc. Inc., Sunnyvale, CA), using a molar absorption coefficient of 400 M Ϫ1 ⅐cm Ϫ1 . The hemadin peptide was used at a fixed concentration spanning from 2 to 16 M. The binding buffer was 10 mM Tris-HCl, 0.15 M NaCl, 0.1% PEG 6000, pH 7.50, at 25°C (TBSP). In a different experimental setup, the C-terminal hemadin peptide was substituted by the ␥Ј peptide, used over a 22.5-180 M concentration range, to test whether or not the latter behaves as a pure competitive inhibitor. After incubation at 25°C for 30 min, and aspiration with three washing cycles with TBSP, a sheep anti-thrombin polyclonal antibody (ϳ10 mg/ml, from US Biological, Milan, Italy) was added at an optimal dilution of 1:500 in TBSP and incubated for 120 min. After aspiration of the solutions and three washing cycles, 100 l of rabbit anti-sheep horseradish peroxidase-conjugated polyclonal antibody (ϳ2 mg/ml, dilution 1:250) from US Biological (Milan, Italy) was added and incubated for 60 min at 25°C. After aspiration and three washing cycles, 100 l of 5 mM 3,5,3Ј,5Ј-tetramethylbenzidine in the presence of 5 mM H 2 O 2 was added, and the reaction was stopped after 15 min using 1 M H 2 SO 4 . This end point was chosen based on preliminary experiments showing a linear increase of the absorbance (15 points, R Ͼ 0.95) over that time interval even at the highest concentration of thrombin. This finding ruled out that the absorbance measured after 15 min of incubation did not reflect the real amount of thrombin bound to fragment D* and was not because of substrate depletion. An entire data set of thrombin binding to fragment D* (35 points) was simultaneously fitted to the Equation 1, where A is the value of the absorbance measured at 450 nm; A max is the asymptotic value of the absorbance; T is the thrombin concentration; and K d * is the apparent equilibrium dissociation constant of thrombin binding to fragment D*, equal to K d 0 (I/K i ), with K d 0 as the real equilibrium binding constant; I is the concentration of either fragment D* or ␥Ј peptide, and K i is the equilibrium dissociation constant of binding of these ligands to thrombin.
Binding of Fibrinogen ␥Ј Peptide to Thrombin Studied by Tryptophan Fluorescence-Binding of ␥Ј peptide to thrombin was studied by recording the increase in tryptophan fluorescence of thrombin at max (i.e. 334 nm) as a function of fibrinogen ␥Ј peptide. The interaction of the latter with thrombin was monitored by adding, under gentle magnetic stirring, to a solution of thrombin (1.4 ml, 50 nM) in 5 mM Tris-HCl buffer, pH 7.5, 0.1% PEG, in the presence of 0.15 M NaCl, aliquots (2-5 l) of ␥Ј peptide (2.33 mM). Fluorescence spectra were recorded on a Jasco (Tokyo, Japan) model FP-6500 spectrofluorometer, equipped with a Peltier model ETC-273T temperature control system from Jasco. Excitation and emission wavelengths were 295 and 334 nm, respectively, using an excitation/emission slit of 10 nm. For all measurements, the long time measurement software (Jasco) was used. Control experiments were also performed to ruled out not specific effects, using the ␥Ј peptide scrambled peptide at a concentration of 100 M. Under these conditions, at the end of the titration, a Trp photobleaching lower than 2% was observed. The absorbance of the solution at both 295 and 334 nm was always lower than 0.05 unit, and therefore no inner filter effect occurred during titration experiments. Fluorescence intensities were corrected for dilution (2-3% at the end of the titration) and subtracted for the contribution of the ligand at the indicated concentration. The fluorescence values, measured in duplicate, were analyzed as a function of the ␥Ј peptide concentration by a hyperbole equation to obtain the value of the F max (corresponding to the fluorescence at ␥Ј peptide concentration ϭ ∞). This parameter was used to calculate ⌬F max ϭ F max Ϫ F o (where F o is the fluorescence value in the absence of the peptide). The fluorescence changes expressed as (F obs Ϫ F o )/⌬F max were analyzed as a function of the total ␥Ј peptide concentration according to a single site binding isotherm. Nonlinear least squares fitting was performed using the program Origin 7.5 (MicroCal Inc.), which allowed us to obtain the best fitting parameter values along with their standard errors.
Effect of ␥Ј Peptide and Fragment D* on Thrombin-GpIb␣ Interaction-Solid phase binding experiments to evaluate the effect of fibrinogen ␥Ј peptide on thrombin-GpIb␣-(1-282) interaction were performed as detailed above by immobilizing purified GpIb␣-(1-282) fragment (10 g/ml) on polystyrene plates. Purification of platelet GpIb␣-(1-282) fragment was performed as detailed previously (20). Thrombin (20 nM to 1.28 M) was incubated in the presence of both 408 -427 ␥Ј peptide and fragment D* at fixed concentrations spanning from 10 to 320 M and from 0.2 and 3.2 M, respectively. The binding buffer was TBSP. Both the experimental procedure of the binding assay and the analysis of the experimental data sets were the same as those used to study the thrombin-fragment D* interaction, detailed above. Control experiments, in which different concentrations of GpIb␣(1-282) fragment from 0.31 to 10 g/ml were immobilized on the microplate wells for binding to 10 nM thrombin, showed that in the time scale of the horseradish peroxidase reaction with 3,5,3Ј,5Ј-tetramethylbenzidine (15 min), the signal at 450 nm was always linear for all tested GpIb fragment concentrations. These results validated the assumption that in this solid-phase binding assay the absorbance measured at 450 nm after 15 min reflected the amount of thrombin bound to GpIb. Additional control experiments were also carried out with the synthetic peptide analog GpIb␣-(268 -282), as a competitive inhibitor of thrombin binding to the immobilized GpIb-(1-282) fragment. This peptide, encompassing the C-terminal tail 268 -282 of GpIb␣, was synthesized and characterized as detailed previously (19). The three sulfated tyrosines, present in the natural peptide sequence (residue 276, 278, and 279), were replaced by phosphotyrosine.
Hydrolysis of Chromogenic Substrate D-Phe-Pip-Arg-pNA by Thrombin in the Presence of ␥Ј Peptide and Fibrinogen Fragment D*-Steady state hydrolysis of the chromogenic substrate S-2238 was studied in the absence and presence of six different ␥Ј peptide concentrations ranging from 2.5 to 320 M and fragment D* concentrations spanning from 0.2 to 3.2 M. Thrombin was used at 1 nM in 10 mM Tris-HCl, 0.15 M NaCl, 0.1% PEG 6000, pH 7.50, at 25°C.
Hydrolysis of PAR1- (38 -60) and PAR4(44 -66) Peptide by Thrombin-PAR1-(38 -60) ( 38 LDPRSFLLRNPNDKYEPF-WEDEE 60 ) and PAR4-(44 -66) ( 44 PAPRGYPGQVCANDS-DTLELPDS 66 ) peptides were synthesized by PRIMM (Milan, Italy). Cleavage of these peptides by 0.1-1 nM thrombin was monitored by RP-HPLC as detailed previously (22). The Michaelis-Menten parameters k cat and K m were calculated in the absence and presence of fixed concentrations of the ␥Ј peptide ranging from about 2.5 to 320 M. The k cat /K m values of PAR1 peptide hydrolysis in the presence of fragment D* (from 0.2 to 6.4 M) was calculated at a peptide concentration of 1 M, which is a concentration lower than the K m value of the thrombin-PAR interaction. Under these conditions, the first order rate constant of the peptide hydrolysis was proportional to the k cat /K m value, as experimentally verified. The hydrolysis reac-tion was performed in 10 mM Tris-HCl, 0.15 M NaCl, 0.1% PEG 6000, pH 7.50, at 25°C. The k cat /K m values were analyzed as a function of both ␥Ј peptide and fibrinogen fragment D* using the following linkage Equation 2 (23), where Z ϭ 1 ϩ I/K i ; K i is the equilibrium dissociation constant of either ␥Ј peptide or fragment D* binding to thrombin; I is the inhibitor concentration; and the superscript 0 and 1 refer to the k cat /K m value pertaining to free and ␥Ј peptide-or D*-bound thrombin form, respectively. Control experiments were also carried out using 320 M scrambled ␥Ј peptide to exclude spurious effects generated by ionic strength phenomena. 54 -65 (PO 3 H 2 ), having the sequence GDFEEIPEEY(PO 3 H 2 )LQ, was synthesized as described previously (24). Binding of this peptide to ABE-I of thrombin was studied by monitoring the decrease of the peptide fluorescence occurring upon interaction with thrombin, as reported previously (25). Fluorescence spectra were recorded on a Jasco (Tokyo, Japan) spectrofluorometer, as detailed above. Excitation and emission wavelengths were 492 and 516 nm, respectively, using an excitation/emission slit of 3/5 nm. During titration experiments, the decrease of fluorescence intensity at 516 nm was recorded as a function of thrombin concentration. For all measurements, the long time measurement software (Jasco) was used. Fluorescence intensities were corrected for dilution (i.e. 8 -10%) at the end of the titration.
Data were analyzed by the following binding isotherm Equation 3 (26), using the program Origin 7.5 (MicroCal Inc.), where ␣ is the maximum fluorescence change; K d is the dissociation constant; L is the total concentration of thrombin; and P 0 is the concentration of [F]-hirudin 54 -65 (PO 3 H 2 ). Thrombin-induced Aggregation of Gel-filtered Platelets-Platelets from healthy volunteers were gel-filtered on Sepharose 2B columns (GE Healthcare) as reported previously (22). Born's aggregation of gel-filtered platelets, performed on a 4-channel PACKS-4 aggregometer (Helena Laboratories, Sunderland, UK) as detailed previously (22), was induced by 1 nM thrombin in the absence or presence of different concentrations of ␥Ј peptide and fibrinogen fragment D*. Control experiments were performed with both 50 M PAR1-and 1 mM PAR4-activating peptides (PAR1-AP (SFLLRN-NH 2 ) and PAR4-AP (AYPGKF-NH 2 ), respectively, from PRIMM), 10 M ADP, and 10 g/ml collagen from Helena Laboratories. The specific effect of fragment D* was also evaluated by using fragment D at the same concentrations.
Fibrinogen ␥ Chain and Thrombin-Platelet Interaction
Monitoring of Full-length PAR1 Hydrolysis by Thrombin on Intact Platelets by Flow Cytometry-Gel-filtered platelets from healthy controls were mixed with 1 nM thrombin at 25°C in the absence and presence of the ␥Ј peptide ranging from 27 to 310 M and of fibrinogen fragment D* from 0.1 to 32 M. After 120 s, the hydrolysis of PAR1 molecules on platelet membrane was stopped with 1 M D-Phe-Pro-Arg-chloromethyl ketone, and the uncleaved PAR1 molecules were detected by flow cytometry, as described previously (22). Briefly, after cleavage reaction was stopped, platelets were labeled for 30 min at 4°C with saturating amounts of phycoerythrin-conjugated antithrombin receptor monoclonal antibodies (SPAN-12 clone; Beckman Coulter, Milan, Italy), as detailed elsewhere (22). Isotype-matched, phycoerythrin-conjugated irrelevant antibodies were used to measure background fluorescence. Samples were run through a FACSCanto flow cytometer (BD Biosciences) with standard equipment. Uncleaved PAR1 expression levels were reported in terms of mean fluorescence intensity (MFI) ratio of the SPAN-12ϩ platelet population.
RESULTS
Purification of Fragment D and D*-The purifications of both fibrinogen fragment D*, containing one ␥A and one ␥Ј chain, and of normal fragment D were successfully accomplished by DEAE chromatography. Fragment D in SDS-PAGE showed a molecular mass of about 85 kDa, whereas fragment D* had a slightly higher molecular mass as compared with fragment D, in agreement with the presence of the elongated ␥Ј chain (Fig. 1A). SDS-PAGE under reducing conditions and immunoblotting of the reduced sample with an anti-␥Ј monoclonal antibody allowed us to identify the genuine presence of fibrinogen fragment D*, as shown in Fig. 1, B and C. Purified fragment D* was then used in the functional and solid-phase binding experiments, where the nominal concentration of the ␥Ј chain was assumed the same as that of the entire fragment D*.
Characterization of the Fibrinogen Fragment D* Interaction with Thrombin-The interaction of purified fragment D* with thrombin was studied by a solid-phase binding assay that showed a specific interaction with a K d value of 0.4 Ϯ 0.03 M (Fig. 2). The sequence of 20 amino acids of the ␥Ј peptide present in the fragment D* drives this interaction, as the purified ␥Ј peptide competitively inhibited with a K i value of about 47 M of the thrombin-fragment D* interaction, as shown by Fig. 2A. This interaction involved the ABE-II of thrombin, as its binding was competitively inhibited by specific ligands of this thrombin exosite. In fact, the C-terminal 45-57 peptide of hemadin, which binds to ABE-II (27), was able to competitively inhibit the thrombin-fragment D* interaction, with a K i value of 4.2 Ϯ 0.4 M, as shown in Fig. 2B. No significant interaction was observed with fragment D (Fig. 2C). The involvement of the ABE-II of thrombin was also confirmed by the inhibition of the binding of 500 nM thrombin to immobilized fragment D* by the ssDNA aptamer HD22 (IC 50 ϭ 81 Ϯ 6 nM; see Fig. 2C), whereas no effect was observed using the ssDNA aptamer HD1, which binds to ABE-I of the enzyme (data not shown).
Binding of ␥Ј Peptide to Thrombin Monitored by Tryptophan Fluorescence-The binding of ␥Ј peptide to thrombin causes a significant increase of tryptophan fluorescence, without appreciable change in the max value. Hence, we exploited this change for estimating the affinity of the ␥Ј peptide for thrombin, as shown in Fig. 3. The corresponding K d value for ␥Ј peptide binding was calculated as 30 Ϯ 5 M, in good agreement with the value determined by the solid-phase binding experiments reported above. The increase in the fluorescence quantum yield suggests that the chemical environment of Trp residues in thrombin becomes, on average, more rigid and apolar than in the ligand-free enzyme (28).
Effect of ␥Ј Peptide and Fibrinogen Fragment D* on Platelet Aggregation-The fibrinogen ␥Ј peptide inhibited dosedependently the thrombin-induced aggregation of gel-filtered platelets, up to about 70%, in a specific manner, as demonstrated by the lack of effect by the scrambled ␥Ј peptide (see Fig. 4, A and B). Likewise, purified fibrinogen fragment D* inhibited platelet aggregation up to about 70%, although it was impossible to reach full inhibition, even at higher fragment concentrations (Fig. 4B). At variance with these findings, no significant effect was observed with fragment D (Fig. 4B). The analysis of these data provided IC 50 Effects of ␥Ј Peptide and Fibrinogen Fragment D* on Thrombin-GpIb␣ Interaction-Both ␥Ј peptide and fibrinogen fragment D* inhibited competitively the binding of thrombin to immobilized GpIb␣-(1-282) with a K i of about 40 and 0.5 M, respectively (Fig. 5, A and B). Instead, no effect was observed with fragment D (data not shown). The competitive nature of the observed inhibition by both ␥Ј peptide and fragment D* was confirmed by control experiments performed with the synthetic peptide analog GpIb␣-(268 -282), which binds to thrombin with a K i value of 9 M (19). These findings can explain in part the inhibitory effect of the ␥Ј peptide and fragment D* on thrombin-induced platelet aggregation, because of the activat-ing role of thrombin-GpIb interaction on platelet aggregation (22,29).
Effect of ␥Ј Peptide and Fibrinogen Fragment D* on Thrombin-catalyzed PAR1 and PAR4 Cleavage-Fibrinogen ␥Ј peptide inhibited the cleavage of the PAR1 substrate, as shown in Fig. 6A. The inhibitory effect was allosteric in nature, as PAR1-(38 -60) substrate interacts with the ABE-I and the active site of thrombin and not with ABE-II (30), where ␥Ј peptide binds (9). Moreover, the inhibitory effect concerned mostly the k cat value, as shown in Table 1. When the k cat /K m values were analyzed by a linkage equation (Equation 2) as a function of ␥Ј peptide concentration, a best-fit K i value of about 40 M was obtained, in good agreement with the value derived from the GpIb solidphase binding and fluorescence titration experiments (Fig. 6B). No significant effect was observed using 320 M scrambled ␥Ј peptide, thus ruling out spurious ionic strength effects. Like-
Fibrinogen ␥ Chain and Thrombin-Platelet Interaction
wise, the k cat /K m values of PAR1 hydrolysis as a function of fragment D* concentration decreased, reaching an asymptotic value (Fig. 6C). In this case, the value of the equilibrium dissociation constant was about 0.5 M, about 80-fold lower than that measured for ␥Ј peptide, in analogy to the results obtained in solid-phase binding experiments with GpIb␣. No significant effect was instead observed using the fragment D (Fig. 6C).
At variance with PAR1, the hydrolysis of PAR4 was not affected by ␥Ј peptide, as the k cat /K m values measured as a function of the ␥Ј peptide concentration were scattered around a mean of about 4 ϫ 10 5 M Ϫ1 s Ϫ1 (data not shown). In addition, experiments were carried out using the synthetic peptide substrate D-Phe-Pip-Arg-pNA (S-2238). Both ␥Ј peptide and fragment D* reduced the catalytic competence of thrombin toward OCTOBER 31, 2008 • VOLUME 283 • NUMBER 44
JOURNAL OF BIOLOGICAL CHEMISTRY 30199
S-2238, with this effect linked mostly to a reduction of the k cat values, as listed in Table 2.
A dose-dependent inhibition of the hydrolysis of full-length PAR1 molecules on intact platelets was also observed as a function of increasing concentrations of ␥Ј peptide, as shown in Fig. 7A. At high peptide concentrations the inhibition reached an asymptotic value, in agreement with the results obtained with the synthetic PAR1 peptide. Similar effects were observed with the fragment D* (Fig. 7B). Thus, ␥Ј peptide can exert its inhibitory effect on platelet activation by inhibiting competitively the interaction between the enzyme and GpIb␣ and by causing an allosteric inhibition of PAR1 hydrolysis.
The effect of ␥Ј peptide on the interaction of thrombin with [F]-hirudin 54 -65 (PO 3 H 2 ) was also investigated to assess whether or not the negative influence of the ␥Ј peptide on PAR1 but not PAR4 hydrolysis arose from a conformational change induced in the ABE-I, where PAR1 but not PAR4 binds. These experiments showed that the K d value of [F]-hirudin 54 -65 (PO 3 H 2 ) was not significantly changed in the presence of a high concentration of the ␥Ј peptide (i.e. 74 M), as shown in Fig. 8. The equilibrium dissociation constants of the hirudin peptide was in fact equal to 19.7 Ϯ 1.2 and 14.3 Ϯ 1.5 nM, in the absence and presence of the ␥Ј peptide, respectively. Altogether, these findings suggest
Fibrinogen ␥ Chain and Thrombin-Platelet Interaction
that the inhibiting effect of the ␥Ј peptide on the cleavage of both PAR1 and the synthetic substrate S-2238 stems mainly from conformational changes induced in the catalytic site of thrombin.
DISCUSSION
This study showed for the first time that the fibrinogen sequence 408 -427 in the elongated ␥Ј chain inhibits the thrombin-induced aggregation of platelets through a combined mechanism, impairing both GpIb␣ and PAR1 interactions. Because it has been shown previously that binding of thrombin to GpIb␣ could enhance the efficiency of thrombin cleavage of PAR1 (22), the double effect of ␥Ј peptide on both GpIb␣ interaction and PAR1 cleavage may cooperatively determine a strong inhibition on platelet activation/aggregation, as indeed observed. These results are unprecedented, as they show how the same ligand may hinder at the same time the thrombin interaction with the two thrombin-elicited receptors involved in platelet activation, i.e. GpIb and PAR1.
The interaction of fragment D* is energetically driven by the insertion of the ␥Ј sequence 408 -427, which specifically binds to ABE-II, as demonstrated by various experimental strategies.
In fact, both ␥Ј peptide and fragment D* are able to displace from ABE-II any ligand, which is known to interact with this site, such as GpIb␣, the C-terminal hemadin peptide and the ssDNA aptamer HD22. However, the synthetic ␥Ј peptide and the fragment D* showed a different affinity for thrombin. The K d value of fragment D* was about 80-fold lower than that of the synthetic peptide. These results parallel previous findings on the binding of fibrinogen ␥ chain to platelet GpIIb-IIIa, where a 70-fold difference in affinity between the synthetic peptide 400 -411 of the fibrinogen ␥ chain and the native fibrinogen fragment D was found (31). Similar results were obtained for the binding of the N-terminal domain 1-282 of GpIb␣ to thrombin, where the affinity of the properly sulfated C-terminal peptide 268 -282 is about 50 times lower than that of the fulllength GpIb␣-(1-282) fragment (19,20). These observations can be reasonably explained by assuming that the protein core may orient the C-terminal tail of the ␥ chain in a conformation productive for binding or that the main body of the protein, beyond the C-terminal extension, enhances affinity by directly interacting with thrombin. The latter situation is documented by the crystal structure of the GpIb␣-thrombin complex (32), where numerous hydrophobic and electrostatic interactions, not involving the C-terminal tail, further stabilize the complex. Although the isolated C-terminal segment of the ␥ chain displays some nascent secondary structure element, NMR data indicate that it is highly flexible and intrinsically disordered in solution (33). This conformational flexibility is also confirmed by the poor electron density observed for the C-terminal ␥-segment in the crystallographic structures of fibrinogen D fragment (PDB codes 1LT9, 1FZC, and 1FIC) (34 -36). On the other hand, the structure of a smaller ␥ chain fragment (PDB code 1FIC) reveals that the segment Leu 392 -Gly 403 extends along the protein surface making numerous hydrogen bonds with the rest of the ␥ chain (36). Hence, these contacts may facilitate inter- OCTOBER 31, 2008 • VOLUME 283 • NUMBER 44 action with thrombin by orienting the elongated ␥Ј-segment in a conformation productive for binding.
Fibrinogen ␥ Chain and Thrombin-Platelet Interaction
It is known that ϳ10% of circulating fibrinogen molecules contain the elongated ␥Ј chain. If we refer to a normal plasma fibrinogen concentration (200 -400 mg/dl corresponding to Ϸ6 -12 M), this would correspond to a concentration of elongated ␥Ј chain of about 0.6 -1.2 M, nicely overlapping the K d value of the fragment D* interaction with the enzyme (K d ϳ 0.5 M). Thus, variations in the ratio between normal and elongated ␥Ј chain can significantly affect thrombinЈs ligation in vivo, in keeping with the notion that the maximum change of the fractional saturation of a macromolecule as a function of its ligand concentration occurs when the latter is present at levels similar to the K d value (37).
Moreover, it has to be outlined that in this study a surrogate for ␥Ј fibrin was used. The latter actually interacts with thrombin engaging both exosite 2 and 1 (3), although exosite 1 binds to fibrinogen fragment E with low affinity (3). Thus, we can speculate that ␥Ј fibrin can inhibit thrombin-induced platelet activation more extensively than fragment D*, as it competitively blocks both PAR1 cleavage, via engagement of exosite 1, and GpIb ligation, via binding to exosite 2, that allosterically down-regulates also PAR1 cleavage, as demonstrated in this study.
The allosteric effect linked to binding of the elongated ␥Ј chain to thrombin ABE-II resulted in a decrease of the catalytic specificity of the enzyme for good substrates such as PAR1 and the synthetic tripeptide S-2238, whereas no significant effect was observed with the PAR4 peptide- (44 -66). Thus, platelet activation induced by PAR4 hydrolysis, which occurs mainly at high thrombin concentrations and signals independently from PAR1 (38), is not affected by either ␥Ј peptide or fragment D*. This may also contribute to explain the lack of complete inhibition of the thrombin-induced platelet aggregation observed even at high ␥Ј peptide and fragment D* concentrations (see Fig. 4B). The extracellular region of PAR1 interacts with thrombin active site through the sequence 38 LDPR 41 and with ABE-I using a hirudin-like sequence (24,30). This is not the case for PAR4, which orients Pro 44 and Pro 46 of the sequence 44 PAPR 47 in the catalytic pocket but does not interact with ABE-I residues, using the C-terminal segment (39,40). Recently, the crystal structure of murine thrombin in complex with the extracellular fragment of murine PAR4 confirmed this mode of binding (41). Perturbation of ABE-I by hirudin or PAR1 exodomain (residues 42-60) allosterically induces significant structural changes in the free catalytic pocket of thrombin, mainly at and around Ser 195 (16,30,42), that result in altered reactivity of the enzyme toward synthetic and natural substrates (24 -26, 43) and for binding of inhibitors (44 -46). Notably, ligand binding to ABE-I can either enhance or inhibit the cleavage of small chromogenic substrates carrying an Arg residue at P1 position, according to their chemical structure at P2 and P3 positions (24,26). Similar conclusions can be drawn for the perturbation of ABE-II, where binding of some ligands such as prothrombin fragment F2 or GpIb␣ causes negligible or even opposite effects on thrombin-mediated cleavage of chromogenic substrates (26,47). For instance, hydrolysis of S-2238 is inhibited in the presence of F2, whereas cleavage of tosyl-Gly-Pro-Arg-pNA is enhanced at a similar extent (26). These findings confirm the extreme molecular plasticity of thrombin. Unfortunately, crystal structures of thrombin bound to several different ligands, including the prothrombin F2 fragment (48) (PDB code 2HPQ), heparin (49) (PDB code 1XMN), and GpIb␣-(1-282) (32, 50) (PDB codes 1P8V and 1OOK), indicate that thrombin accommodates ABE-II ligands with little, if any, change in its folded structure and thus do not explain the observed variations in thrombin function upon ligand binding. These discrepancies likely arise from crystal packing effects (49) or from the presence of the D-Phe-Pro-Arg-chloromethyl ketone inhibitor (32,48), which locks the active site and the specificity exosites of the enzyme into a fixed conformation, thus abrogating the structural changes that may be induced by ligand binding in solution. Very recently, the crystal structure of thrombin complex with the fibrinogen ␥Ј peptide-(408 -427) has been solved at 2.4 Å resolution. No significant change in the structure of the enzyme-peptide complex could be detected when compared with that of free thrombin (9) (PDB code 2HWL). Conversely, solution studies involving hydrogen-deuterium exchange coupled with matrix-assisted laser desorption ionization time-offlight mass spectrometry showed that the gammaЈ peptide interacts at or near the thrombin ABE-II residues Arg 93 , Arg 97 , Arg 173 , and Arg 175 . Moreover, the binding of the ␥Ј peptide induces a conformational perturbation to the enzyme as a whole, by significantly protecting from deuterium exchange other regions of thrombin, such as the autolysis loop, the edge of the active site region, some portion of ABE-I, and the A chain (51). Most of Trp residues in thrombin (i.e. Trp 96 , Trp 141 , Trp 148 , Trp 207 , and Trp 215 ) are embedded in segments whose hydrogen-deuterium exchange efficiency is reduced upon ␥Ј peptide binding, whereas other tryptophans (i.e. Trp 29 and Trp 237 ) are in direct contact with the perturbed segments (51). Hence, it is not surprising that the formation of ␥Ј peptidethrombin complex results in a higher fluorescence intensity of the enzyme, as shown in Fig. 3, compatible with a conformational change of thrombin in which the chemical environment of Trp residues becomes more rigid and hydrophobic (28). Thus, it is conceivable that the structural perturbations caused by ␥Ј peptide binding propagates from the ABE-II residues toward the S2-S4 subsites of the catalytic cleft of the enzyme and that these changes are sensed differently by the various P3 residues present in S-2238, PAR1, and PAR4. In agreement with this allosteric hypothesis, the k cat /K m value of S-2288 (D-Ile-Pro-Arg-pNA) by thrombin was increased by 40% at high concentration of the synthetic ␥Ј peptide (160 M, data not shown), demonstrating that even subtle changes in the side chain volume (Ile ϭ 124 Å 3 ; Phe ϭ 135 Å 3 ) and electronic properties at the P3 site can significantly change the allosteric linkage between binding to ABE-II and hydrolytic activity of thrombin.
In principle, the inhibition of PAR1 cleavage by thrombin might also arise from conformational transitions in ABE-I induced long range by ␥Ј peptide binding to ABE-II, leading to a lower affinity of the C-terminal PAR1 segment for ABE-I. The effect of ␥Ј peptide on PAR4 cleavage would be negligible because this latter substrate does not bind to ABE-I. This working hypothesis is worthy of attention in the light of the proposed, but still debated, allosteric linkage existing between ABE-I and ABE-II (25,26). To test this hypothesis, we investigated the effect of ␥Ј on the binding of the fluorescein-conjugated C-terminal 54 -65 peptide of hirudin, [F]-hirudin 54 -65 (PO 3 H 2 ), a well known ligand for ABE-I (52). The fluorescence experiments showed that the binding of [F]-hirudin 54 -65 (PO 3 H 2 ) was not significantly affected by the ␥Ј peptide, as shown by Fig. 8. Thus, it is likely that binding of the elongated ␥ chain to thrombin induces a conformational change mainly occurring at the catalytic site of the enzyme.
The influence of ␥Ј chain on venous thrombosis has been largely recognized (6). In particular, a decrease of this chain was associated with a net increase of the risk factor for venous thromboembolism (6). In contrast, as anticipated above, the role of this fibrinogen chain for arterial thrombosis is still debated (6,7,13). Clinical studies were conducted in the attempt to demonstrate whether altered levels of ␥Ј chain are inversely or directly correlated with increased risk for arterial thrombosis. These studies showed that the association of ␥Ј chain expression with arterial thrombotic diseases is paradoxically different from that shown in venous thrombosis. In particular, the reduced ␥Ј/␥A ratio occurring in certain fibrinogen polymorphisms, such as FGG-H2, was not associated with either acute myocardial infarction (53) or ischemic stroke. Instead, increased ␥Ј levels were shown to be positively associated with an increased risk for both acute myocardial infection and ischemic stroke (54,55). However, this association was shown to be strengthened by the presence of increased levels of plasma fibrinogen concentration (54,55) and by FGG 9340T and FGA 2224G polymorphisms (54,55). Other factors, such as total fibrinogenemia (56), the ability of the ␥Ј chain to protect thrombin by the heparin-antithrombin inhibition (3), and to confer to fibrin clots resistance to fibrinolytic degradation, may represent confounding factors in these studies. Thus, whether or not the thrombin interaction with the ␥Ј chain of fibrinogen plays any pathophysiological role in particular clinical settings remains controversial. In a recent study on ischemic stroke (55), both ␥Ј chain and total fibrinogen levels were elevated in the acute phase of the disease and subsequently decreased in the convalescent phase. The increased ␥Ј chain in the acute phase stems from the elevation of interleukin-6, which can promote the synthesis of the ␥A and ␥Ј chains (57,58). Thus, further studies aimed at investigating the ␥Ј/␥A ratio rather than the absolute content of ␥Ј are needed, especially in clinical situations like acute thrombosis, where plasma fibrinogen is usually increased, and thus investigating the absolute ␥Ј content alone might be misleading.
Based on our data, indicating a net platelet inhibitory effect of fibrinogen ␥Ј chain upon thrombin stimulation, we can infer that elevation of ␥Ј chain level, as a possible result of acute phase response, might exert a beneficial effect on the acute phase of thrombotic syndromes. A recent study in a baboon thrombosis model showed indeed that the 410 -427 ␥Ј peptide is able not only to inhibit fibrin-rich thrombus formation, because of the inhibition of the intrinsic coagulation pathway (related to inhibition of FVIII activation by thrombin), but also platelet-rich thrombus formation in the arterial circulation (59). These findings may be also relevant for clinical applications of ssDNA aptamers, like HD22, whose specific target is ABE-II of thrombin (60). On the contrary, the anti-thrombin and anti-platelet effect of the ␥Ј chain may be deleterious in hemorrhagic sequelae of vascular accidents, such as hemorrhagic stroke. In the latter condition, the expansion of the volume of the hematoma causes the post-stroke complications often responsible for the high mortality from the disease. The findings reported in this study predict that the presence of enhanced expression of the ␥Ј chain could exert deleterious effects on the thrombin-induced platelet activation and thus on either the arrest or onset of the hemorrhage. In conclusion, the role of different expressions of the ␥Ј chain in circulating fibrinogen may variably influence the thrombotic and hemorrhagic manifestations in different clinical settings or different phases of a vascular disease. | 9,668.4 | 2008-10-31T00:00:00.000 | [
"Biology",
"Chemistry"
] |
Environmental Justice and Sustainability Impact Assessment: in Search of Solutions to Ethnic Conflicts Caused by Coal Mining in Inner Mongolia, China
The Chinese government adopted more specific and stringent environmental impact assessment (EIA) guidelines in 2011, soon after the widespread ethnic protests against coal mining in Inner Mongolia. However, our research suggests that the root of the ethnic tension is a sustainability problem, in addition to environmental issues. In particular, the Mongolians do not feel they have benefited from the mining of their resources. Existing environmental assessment tools are inadequate to address sustainability, which is concerned with environmental protection, social justice and economic equity. Thus, it is necessary to develop a sustainability impact assessment (SIA) to fill in the gap. SIA would be in theory and practice a better tool than EIA for assessing sustainability impact. However, China's political system presents a major challenge to promoting social and economic equity. Another practical challenge for SIA is corruption which has been also responsible for the failing of EIA in assessing environmental impacts of coal mining in Inner Mongolia. Under the current political system, China should adopt the SIA while continuing its fight against corruption.
Introduction
The Inner Mongolia Autonomous Region forms much of China's strategic northern frontier bordering Mongolia and Russia.In a region where protest is rare, a series of Mongolian demonstrations across the region, including these in the capital Hohhot, took the world by surprise in the spring of 2011.Students demonstrated and clashed with police, demanding justice.The events were triggered by an incident near Xilinhot (Figure 1).A Chinese truck driver killed a Mongolian herdsman who was blocking a convoy of coal trucks from driving through his pastureland.Chinese and international media widely reported the protests that underscored simmering discontent over environmental damage from mining in this resource-rich region [1][2][3].To quell the demonstrations, the government declared martial law and cracked down on the activists while pledging to look into the impact of the mining industry on the environment and local culture.
We were curious how mining-related environmental problems led to ethnic conflicts in a region that had been relatively free of ethnic tensions in recent history.Our initial investigation indicated that mining caused serious environmental and economic injustice to the Mongolian herdsmen.We found earlier reports on mining pollution in Inner Mongolia.For example, the Beijing Youth Daily reported that a few rare-earth refineries polluted the grassland and killed 60,000 livestock that belonged to 190 herdsmen from 1996-2003 [4].Another report found that arsenic poisoning was threatening the lives of the nearly 300,000 people in the Ordos Region; 2000 were already sick and many died of cancer, producing cancer villages [5].China has hundreds of cancer villages, places where cancer rates are unexpectedly high and industrial pollution is suspected as the main cause [6].However, most of them are in the more developed regions on the eastern coast.In Inner Mongolia, the main cause is suspected to be water pollution caused by mining.Mongolians have many long-established grievances, such as those reported by Jacobs: the ecological destruction wrought by an unprecedented mining boom, a perception that economic growth disproportionately benefits the Chinese and the rapid disappearance of Inner Mongolia's pastoral tradition [7].
Qian et al. find quantitative evidence to support the conclusion that the expansion of coal mining and associated industry and population increase was the major cause of grassland degradation in the Holingol region of Tongliao City, Inner Mongolia [8].While mines are expanding, underground water is being over-extracted and coal-fired power plants as well as chemical plants are being established [9].Coal mining and associated electricity generation have seriously degraded the water resource and the livelihood of local people in Inner Mongolia [8].
Greenpeace reports that in China, a coal chemical project in the dry Inner Mongolia region, part of a new mega coal power base, had extracted so much water in 8 years of operation that it caused the local water table to drop by up to 100 m, and the local lake to shrink by 62%.Due to lowering of water table, large areas of grassland have subsided (Figure 2).The drastic ecological impacts have forced thousands of local residents to become 'ecological migrants' [10] At the costs of the environment and local residents' livelihood, Inner Mongolia has since 2002 experienced an economic boom based on mining.The wealth from the economic boom has not been fairly distributed.Many Chinese investors have benefited from the mining operations and become billionaires.Ordos became one of the wealthiest cities in China.However, ordinary Mongolian herdsmen are not benefiting from that boom, which is based on exploitation of what they view as their resources.Coal development on the grasslands does not increase the herdsmen's income or materially improve their life but instead has dampened their future by degrading the environment [11], causing injustice and sustainability disparities [12,13].The paper draws from global knowledge of environmental justice and assessment approaches and applies it to Inner Mongolia.It argues for the need of developing a sustainability impact assessment (SIA) and demonstrates that such a need is particularly urgent for subtle ethnic regions such as Inner Mongolia.We explore answers to five related questions: (1) What are the theoretical bases for developing an SIA that emphasizes justice?(2) How have assessment approaches been practiced in China?(3) Why has environmental impact assessment (EIA) not worked for Inner Mongolia in the current EIA system?(4) How and why do we need to explore an SIA that supports environmental justice in order to help with sustainability?(5) What should China do in search of solutions to ethnic conflicts in Inner Mongolia?
The analyses were based on data collected during fieldwork through qualitative research methods including site inspections and semi-structured interviews and discussions with local officials and scholars concerning environmental and economic issues.The study covers seven major coal-mining areas: Dongsheng, Shenshang, Suletu, Yuanbaoshan, Wulantuga, Baorixile, and Huanghuashan, in six associated city regions: Ordos, Hohhot, Xilinhot, Chifeng, Tongliao, and Hulunbeir (Figure 1).Primary and secondary data were collected during fieldwork in the summers from 2011-2013.The initial report was presented and discussed at the International Conference on Sustainability Assessment at Dalian Nationalities University.Follow-up fieldwork and research was conducted after the conference to further verify and interpret the research findings.We realize that our study areas were limited to only a few places.Due to lack of time, financial support, and availability of data and information, we were not able to obtain quantitative data or conduct more in-depth investigations.Environmental justice, sustainability impact assessment and ethnic conflicts in China are topics that are contested and require more systematic research.As a result, caution is needed when drawing conclusions from our findings.
In search of solutions to environmental degradation, injustice, and ethnic conflicts in the region, we first examine how project assessment tools could help.For example, environmental impact assessment (EIA) has been regarded as an important measure to control environmental impact in many countries, and some governments, such as those of the United States and Scotland, have attempted to use environmental assessment tools to deliver environmental justice [14].However, environmental assessment tools in theory and practice appear to be inadequate when sustainability, not just the environment, is the subject for assessment.
The Theoretical Basis for Sustainability Impact Assessment that Supports Environmental Justice
This section starts with an overview of the meaning of sustainability and its indicators, and criteria that have been used or proposed for EIA.Following discussions over the use of existing assessment approaches to assess sustainability, the focus is on exploring the possibility of incorporating justice and equity into existing assessments and SIA.
Sustainability: The Three Pillars
Sustainability means meeting the needs of the present without sacrificing the ability of future generations to meet their own needs [15].It has been illustrated as having three overlapping dimensions: the simultaneous pursuit of economic prosperity, environmental quality, and social equity, also known as the "three pillars" of sustainability [16][17][18].In addition, cultural sustainability is widely regarded as an important element for people to achieve a more satisfactory intellectual, emotional, moral and spiritual existence.Recent holistic and inclusive thinking of sustainability emphasizes overlapping dimensions and the interaction among them [18].In addition to environmental and material needs that may be fulfilled through economic development, humans also need social development to improve social justice, equality, and security.While acknowledging the interactions among different dimensions, Gibson et al. caution against a simplistic application of the three pillar model, pointing out that it serves to emphasize tensions among competing interests [19].In contrast, their criteria cross the traditional and limiting divides to provide a more holistic conceptualization [19].They also criticizes approaches to sustainability that over-emphasize local considerations or that focus too strongly on efficiency measures, recalling that the sustainability discourse is essentially global and the West must challenge some fundamental cornerstones of its way of life, and particularly the obsession with economic growth [19].
Sustainability and Justice
Dobson provides a detailed discussion of the relationship between environmental justice and sustainability.He argues that "the discourses of sustainability and justice may be related" but "the question of whether sustainability and justice are compatible objectives can only be resolved empirically, and the range and depth of empirical research required in resolving this question has not been done" [20].We argue that SIA for subtle ethnic regions such as Inner Mongolia should stress justice, including environmental, social, and economic justice, and equity, which has been recognized as a key element of sustainability.Sustainability is about meeting needs.Justice has increasingly been recognized as one of such needs.There is no sustainability without justice.Furthermore, the United Nations resolution 66/197 on sustainable development pays special attention to the welfare of ethnic minorities: recognizing and supporting their identity, culture and interests; avoiding endangering their cultural heritage, practices and traditional knowledge; and preserving and respecting non-market approaches that contribute to the eradication of poverty [21].Iris Marion Young has also questioned the common practice of reducing social justice to distributive justice and argued for groupdifferentiated policies and a principle of group representation [22].Using justice as an overarching element can help develop a more holistic SIA for ethnic regions such as Inner Mongolia.
The concept of environmental justice was first developed in the early 1980s during the social movement in the United States on the fair distribution of environmental benefits and burdens.The United States Environmental Protection Agency defines environmental justice as "the fair treatment and meaningful involvement of all people regardless of race, color, sex, national origin, or income with respect to the development, implementation and enforcement of environmental laws, regulations, and policies" [23].Three different notions of justice have been applied, including distribution, recognition, and procedure (or participation) [24].Procedural justice means that those who are most affected by decisions should have particular rights to be involved and have their voices heard on a fully informed basis [25].Participation has also been demanded as an instrument of EIA.Since ex ante analysis of potential impacts of planned projects on the environment is difficult, participation is intended to reduce uncertainty by intra-subjective judgment; furthermore, participation increases the transparency of the decision-making process [26].From a social science point of view, participation is a central element of sustainability [26].Participation, however, is difficult to translate meaningfully into quantitative terms as a social indicator [26].Direct and open debates among the people who will be affected by the development lay the foundation for conflict resolution in Inner Mongolia, if SIA can be incorporated at the planning level in order to influence decision making and support policies that affect regional sustainability [27].
Sustainability: Indicators and Criteria for Assessments
In practice, economic and social indicators and criteria have been used, in addition to environmental ones, for assessing sustainability [28].For example, Becker presents an overview on sustainability indicators for assessing economic, environmental, and social sustainability which includes "equity coefficients (Gini coefficient, Atkinson's weighted index of income distribution), disposable family income, and social costs, participation, and tenure rights [26].Herder et al. used "production costs" and "local value added" as economic indicators and "employment" as a social indicator [29].However, the incorporation of these sustainability truths into assessment and decision-making processes remains somewhat daunting in practice.Lamorgese and Geneletti developed a framework for evaluating planning against sustainability criteria and found that criteria explicitly linked to intra-and inter-generational equity is rarely addressed [30].Jain and Jain emphasize the need for an alternative index which considers sustainability of human development and formulates an index based on strong sustainability [31].Shah and Gibson have developed a set of 12 core procedural and substantive-level sustainability criteria to be used as a guide for clarifying development purposes, identifying potentially desirable options, comparing alternatives and monitoring implementation for infrastructure at the water-agriculture-energy nexus in India [32].They believe that sustainability-based tools encourage comprehensive attention to issues at the core of sustainability thinking and application: relative to conventional assessment approaches, assessments applying explicit sustainability criteria encourage lasting benefits within complex socio-ecological systems through assessing interdependencies and opportunities, sensitivities and vulnerabilities of regional ecologies, incorporating systems, resiliency and complexity frameworks.SIA for Inner Mongolia should learn from the international experience to develop specific indicators and criteria that help with ethnic equality and harmony.
The Debate over the Use of Existing Assessment Approaches to Assess Sustainability
Researchers have been debating over the use of existing assessment approaches such as environmental impact assessment (EIA) and strategic environmental assessment (SEA) to assess sustainability.For examples, Zhu et al. advocate an impact-centered SEA with institutional components as an alternative to the impact-based approach which seems unable to address institutional weaknesses in most conventional SEA cases in China [33].Lam, Chen, and Wu affirm the potential role of SEA in fostering a sustainable and harmonious society and the need to mainstream sustainability considerations in the formulation of national plans and strategies [34].Hacking and Guthrie identify the features that are typically promoted for improving the sustainable development directedness of assessments and a framework which reconciles the broad range of emerging approaches and tackles the inconsistent use of terminology [35].Morrison-Saunders and Retief assert that internationally there is a growing demand for EIA to move away from its traditional focus towards delivering more sustainable outcomes [36].They argue that it is possible to use EIA to deliver some sustainability objectives in South Africa, if EIA practices strictly follow a strong and explicit sustainability mandate [36].To advance SEA for sustainability, White and Noble examined the incorporation of sustainability in SEA and identified several common themes by which SEA can support sustainability, as well as "many underlying barriers that challenge SEA for sustainability, including the variable interpretations of the scope of sustainability in SEA; the limited use of assessment criteria directly linked to sustainability objectives; and challenges for decision-makers in operationalizing sustainability in SEA and adapting PPP (policy, plan, and program) development decision-making processes to include sustainability issues" [37].
The Possibility of Incorporating Environmental Justice into Environmental Assessments
Jackson and Illsley proposed that SEA could be used to help deliver environmental justice [38].Krieg and Faber suggest that environmental injustices exist on a remarkably consistent continuum for nearly all communities and a cumulative environmental justice impact assessment should take into account the total environmental burden and related health impacts upon residents [39].Connelly and Richardson argue that "we cannot debate SEA procedures in isolation from questions of value, and that these debates should foreground qualities of outcomes rather than become preoccupied with qualities of process" [40].They "explore how theories of environmental justice could provide a useful basis for establishing how to deal with questions of value in SEA, and help in understanding when SEA is successful and when it is not" [40].They assert that "Good SEA must be able to take into account the distributional consequences of policies, plans, or programs, with decisions driven by the recognition that certain groups tend to systematically lose out in the distribution of environmental goods and bads" [40].Walker finds that although practices are evolving there is a little routine assessment of distributional inequalities, which should become part of established practice to ensure that inequalities are revealed and matters of justice are given a higher profile [41].On the other hand, Mclauchlan and Joao oppose the use of strategic environmental assessment (SEA) to deliver environmental justice, partly because "a direct focus on the environment requires that factors associated with environmental justice are not central to SEA" [14].
The literature indicates that it is possible to use environmental assessment to incorporate environmental justice criteria such as public participation.In fact, public participation is considered as an integral part of the EIA procedure [42].A major challenge is that environmental justice is a social factor, which is not central to environmental assessments.In China, EIA is often inadequately implemented and social factors tend to be neglected.For examples, Ren finds that "EIA in China has evolved into a fairly comprehensive and technically adequate system, but the problem lies in its poor enforcement and implementation, due to the political system and incentive mechanisms, institutional arrangements, and regulatory and methodological shortcomings" [43].Yang criticizes that "public participation in the Chinese EIA system has not been effectively carried out" [44].These problems have significant implications to EIA in Inner Mongolia.
The Possibility of Developing Sustainability Impact Assessments with Stress on Justice
The literature on EIA, SEA, and environmental justice may provide a theoretical context for developing SIA.The theory and practice of SIA have been discussed with case studies from different parts of the world.For example, Gibson et al. conceptualize sustainability assessment as a marriage between sustainable development and environmental assessment [19].Huber made the distinction of social justice based on need, on performance, and on property as different dimensions of equity, which are not taken into account in static, target-oriented sustainability policies [26].Bond et al. point out that sustainability assessment is an increasingly important tool for informing planning and development decisions across the globe [45].Required by law in some countries, strongly recommended in others, a comprehensive analysis of why sustainability assessment is needed and clarification of the value-laden and political nature of assessments is long overdue [45].The remaining of the paper will attempt to demonstrate the need to develop an SIA that stresses justice in order to reduce ethnic tensions in Inner Mongolia.
Assessment Practices in China
China faces a daunting task for improving its environmental performance, particularly in the ethnic regions where the environment is fragile, ecological systems are sensitive, the economy is underdeveloped, and ethnic relations are subtle.Different approaches have been proposed to deal with the task.Many believe that economic growth is the key for environmental improvement and social political stability [46].This belief supports China's Go West policy, which covers all provincial level ethnic regions.While that policy has resulted in economic growth in some areas, there are indications that the environmental costs have been enormous and ethnic relations are getting worse.Exploitation of natural resources in the ethnic regions is followed by rapid environmental degradation."Go West" has in some way become "Pollute West" under the "grow first, clean up later" approach to development.Inner Mongolia is a good example.It was once an endless field of grassland, punctuated by mountains and the occasional yurt.Now Inner Mongolia is the country's top coal producer, accounting for about a quarter of all domestic supply-doubling what it was in 2005 [1].
On the other hand, sustainable development has also been the view of some top Chinese officials such as the former premier Wen Jiabao.Sustainability management has shown that environmental problems and social problems are closely related [47], especially in the case of China [48].Among the many possible methods for improving environmental performance, EIA has been used in China, including its ethnic regions.For examples, the Asian Development Bank (ADB) was particularly comprehensive in its assessment of Inner Mongolia Environment Improvement Project (Phase II) [49], following ADB's Environmental Assessment Guidelines [50].The ADB report recommended that Inner Mongolia install "clean coal" technologies now to reduce global warming and reverse the climate change caused by current coal mining [51].Many coal mining companies in Inner Mongolia have drafted EIA and posted notifications for the public to provide feedback.
However, EIAs seem to have not had any significant impact, as coal mining and associated industries continue to expand.Mining pollution causes local herders to lose their sheep and cattle and thousands of pits left behind by the mining companies cause fatalities to the herds [9].Consequently, coal mining has contributed to increased ethnic tension and conflicts in Inner Mongolia.Investigations found that the common people have got poorer in natural-resource rich places such as Inner Mongolia, while government officials and mine bosses got extremely wealthy, increasing social unrest [52].The Inner Mongolia Government issued a document asking local governments and agencies to follow governmental directives to adequately protect the environment and people's livelihood [53].The document clearly stated that promised compensations to herders who lost land due to mining should be honored.Wealth from the mining should be partially used to help improve local infrastructure and living conditions.The document, however, fails to recommend any concrete procedures to insure the local residents receive their share of the mining wealth.The document encourages coal companies to invest in non-coal industries locally.This kind of investment helps to diversify the economy and increase government revenues and GDP.However, further industrialization has been accompanied by worsening environmental degradation and damage to the agricultural environment needed to support the livelihood of the Mongolian herders.
Nevertheless, progress was made.The Inner Mongolia Government claimed to have halted 476 illegal mining projects, ordered 887 mines to suspend operations, permanently shut down 73 mines, intervened in 100 disputes between local herders and mining companies, and established a mechanism involving the government, miners and local residents to resolve disputes through dialogue [54].However, new protests continue to be reported by the international media [55].Tang suggests that a rise in public protests in China signals a failure of environmental governance, where officials use legal threats to extract benefit from polluters, but the power of developers in China remains untouched, despite widespread protests against polluting projects [56].
Why Environmental Impact Assessments Have Failed for Inner Mongolia
The last section elaborates the failure of environmental assessments to do their job for Inner Mongolian mining projects.This section will specifically answer three questions: (1) why did the mining projects fail to conduct environmental assessments when they would be expected?(2) Would environmental assessments have had an impact on the projects if they were conducted?(3) Would environmental impact assessments have addressed the questions of sustainability and environmental justice adequately, even if they were conducted?
Why Did Projects Fail to Conduct Environmental Impact Assessments?
The failure of EIA may be one of the many factors for explaining environmental degradation caused by coal mining in Inner Mongolia.Here are a few scenarios based on our investigation.First, an EIA is not conducted at all.This applies to the many small scale mining operations.Many are "illegal" as they do not have any permit.These operations tend to pay no attention to the environment.They are allowed to be in operation mainly through bribing the government officials who will then turn a blind eye on the environmental destructions.Under the pressure from repeated local protests, there has been a tightening of regulations and cracking down on these operations.However, they continue to be a major threat as corruption will continue to be severe.A second scenario is that an EIA is conducted, but is falsified as the required criteria were not followed.This is concerned with the legal operations.Again, official corruption is involved, which is the main reason EIAs are not conducted or are falsified.
Would Environmental Impact Assessments Have Had an Impact?
We find that large state coal mines often had an EIA conducted.We examined key government directives that provide technical guidelines for EIA for coal mines.China passed its EIA laws in 2002.In 2006, Technical Guidelines for EIA Coal Mine Master Plans was drafted.The guidelines did not include any mandatory requirements in terms of EIA.That left much room for interpretation of activities as to what was appropriate.It stated that the plan should include descriptions concerning water, air pollution, land restoration, and public participation [57].It is unclear how many EIAs were done.However, an online search found four Master Plan EIAs, which were posted for public notification as required by the EIA laws, an indication of implementation of the EIA laws and the 2006 Guidelines.These four EIAs, three from Ordos [58][59][60] and one from Hulunbeir [61] are identical in terms of structure and contents, suggesting that they followed the same standard format and guidelines used in coal mining in Inner Mongolia and possibly nationwide.
The public notifications are very superficial, mainly an overview of the planned project which follows the guidelines but lack any specifics.Accompanying the notifications, a survey form asks questions such as: What do you think of the current environmental conditions in your area?What impact will the project have on the environment?Do you support the implementation of the project or not?The notifications were published in local newspapers or government websites.The public were given 10 business days to respond, which was too short by international standards.A search did not find any cases where public feedback was publicized or had any effect on the plans.That might suggest that public participation did not play any role in the plan and the EIAs were done superficially.Few EIA notifications were found online for the period between 2008 and 2011, only one for 2010 [62] and one for 2011 [63].
The Technical Audit Points for Coal Mine Master Plan EIA Report was published in October 2011 by the Ministry of Environmental Protection [64], after the widespread protests in Inner Mongolia in May.This is a comprehensive directive that provides detailed requirements for coal mine planning and EIA.The Circumstances for Rejecting and Requiring Revisions of the Plan include six items and three of them are: A. The project may cause major impact to the ecology or underground water (quantity or quality) but the plan does not provide mature and practical ecological recovery and protection measures; B. The local resource and environment is unable to provide the capacity for the possible direct and indirect impact of urbanization and industrialization due to coal mining; C. The majority of the public participants do not support the implementation of the project plan.
One of the Circumstances for Requiring Revisions of the Plan for reevaluation includes irregularities of public participation, no explanations for accepting or rejecting public suggestions, or obviously unreasonable rejection of public suggestions.The directive also states that the master plan should ensure that the mining operation will protect the ecological integrity and biodiversity and prevent desertification.Air pollution needs to be controlled during mining, transportation, and storage, consumption, and waste management.However, many of the requirements are still vague due to lack of specifics.
The latest EIA documents we found online include two EIA notifications [65,66] and one EIA report [67].They reflect the more stringent guidelines and contain more specifics than the 2006 and 2010 ones.The posting of an EIA Report provides information to the public.Interestingly, however, the lead author of both the 2006 and 2011 guidelines was Beijing Huayu Engineering Co., Ltd. of the Sino-Coal International Engineering Group, in cooperation with the State Environmental Protection Bureau of China (now Ministry of Environmental Protection).Huayu or some other firms within Sino-Coal have been the sole authors for the EIAs.So the guidelines and EIAs are likely to be on the side of the coal industry, rather than the affected communities.The number of EIAs available online is very small, compared to the number of mines in the region, possibly over 100.According to the Chinese search engine Baidu, there were 82 state-owned mines in Inner Mongolia in 2009, including five state-owned enterprises, 42 state-owned major mines, and 36 state-owned local mines [68].The number should have increased, judging from the increased coal output in the region.More importantly, the new EIA requirements were probably not followed in Inner Mongolia coal mine planning and operations, judging by the high level of environmental degradation due to coal mining, as reported in Chinese official and international media.Consequently, EIAs have had only a limited impact in protecting the environment.
Would Environmental Impact Assessments Have Addressed Sustainability and Environmental Justice Adequately?
Furthermore, even if the more stringent EIA requirements were closely followed, many social problems caused by the exploitation of natural resources in the ethnic regions were not going to go away, as many Mongolians are likely to view the coal and the land as theirs, that they inherited from their ancestors, and that the Chinese are outsiders coming in to take their resources away and destroy their land and lifestyle.The central government may claim that the resources are national property.Many Mongolians may believe that the nation should be the people instead of some state officials.Considering the history of settlement and the Mongolian way of thinking, the mining plans may have to incorporate the concept of environmental justice and respect the view of the local people and culture, in addition to protecting the environment.Improvement in EIAs is needed for the environment, but EIAs are inadequate for dealing with these social problems caused by resource exploitation in the ethnic regions.
The Need to Stress Justice and Equity in Sustainability Impact Assessment
The above discussed problems in EIA practices in China need to be dealt with.For example, public participation needs to be strengthened to allow full involvement from the beginning of the project planning.It would be worthwhile to explore ways to have EIAs conducted by an independent third party rather than by an affiliate of the coal companies, even though that may increase the operational cost of the assessment.EIA has not been taken seriously because it concerns only the environment, which is considered as a public good in China.The government, which is supposed to take care of the environment and the resources, puts economic growth first.Public participation has not been regarded as a key element in resource development as natural resources belong to the government (Officially they belong to the state, but in reality the government, rather than the people, is the state in China).
Wealth from mining is mainly taken by the central government, with the rest of the wealth taken by different levels of the governments.The local government, which usually receives a third of the wealth, is left to take care of the social and economic welfare [52].Corruption and lack of funding have meant little is done for the common people.Such injustice and inequity has led local people to organize to open illegal mines to steal and rob the resources, which they think should belong to them [52].Li Bo, head of Friends of Nature, an environmental non-government organization (NGO) in China, believes that: The environmental assessment of development projects should be much more open.The possible existence of risk for any project-technological and economic, or social and political-should be fully discussed before the project is implemented.Right now, according to the law, there is a process for EIA.But the people who are in charge of executing these are only responsible to their seniors, not to the people under them.So these processes aren't very open, and their discussions aren't transparent.Because of this many projects are approved, and then their problems are only discovered afterwards.An example is the recent PX incident-there's a lot of fear and rage.These things can tear a society apart [69].
Morrison-Saunders and Pope argue that there is inadequate consideration of trade-offs throughout the sustainability assessment process and insufficient considerations of how process decisions and compromises influence substantive outcomes [67].If properly done, sustainability assessment should indicate who gets what, who loses what, how, when and why [70].Current EIAs in China are concerned with trade-offs between the economy and the environment.They are not concerned with trade-offs among different social groups.We argue that SIA should be adopted for subtle ethnic regions in order to adequately evaluate economic, environmental and social impacts to help reduce ethnic tensions.These impacts are interrelated and cannot be mitigated successfully unless they are dealt with together.If fully enforced, SIA will ensure that the public is more involved and their interest is better taken care of when justice and equity are a matter of concern in the assessment.
As sustainability assessment is new in China, we draw below some international experiences to help with the discussion.Gibson et al. present the case of the assessment of the proposed major nickel mining project near Voisey's Bay on the north coast of Labrador (Canada), which is often considered the first attempt to conduct sustainability assessment within a project approval context [19].They challenge the common conceptualization of sustainability of three intersecting pillars representing environmental, social and economic concerns, on which most practice of sustainability assessment is based [19].Gibson reports that an innovative environmental assessment and a set of surrounding and consequential negotiations were conducted between 1997 and 2002 on the proposed project: The proponent and other participants wrestled directly and often openly with the project's potential contribution to local and regional sustainability.The resulting agreements to proceed were heavily influenced by the precedent-setting assessment, which imposed a "contribution to sustainability" test on the proposed undertaking.Given the profound differences in background, culture, priorities and formal power involved, as well as the record of tensions in the history of this case and before, the agreements also represent a considerable achievement in conflict resolution [71].
Faced with growing environmental and social crises, China's new leader, Xi Jinping, has criticized the "grow first, clean up later" approach and given more emphasis on ecological development than his predecessors.Among other things, he recently called for stopping the GDP-based promotion of government leaders [72].Consequently, several provinces have lowered or abandoned using GDP as the only measure of success for city or county leaders, affecting over 70 of China's poorest cities or counties [73].Evaluation will instead be based on poverty reduction and environmental protection [74].It remains to be seen if the new policy will be applied to larger, wealthier cities.Nevertheless, cleanup efforts have been increasing.Many interviewed officials cared about the environment and were sympathetic for the Mongolian herders.There are indications that EIA will be more stringently and widely implemented.Kahya reports that: Concerns over water use from coal mining and gasification projects have led the Chinese government to change the rules for new schemes.Mirroring recent "national plans" to tackle air pollution the Ministry of Water resources has announced a plan to limit coal expansion based on regional water capacity.The rules mean the approval process for large-scale projects must now include an appraisal of the available water [75].
Mclauchlan and Joao oppose the use of strategic environmental assessment (SEA) to deliver environmental justice, partly because "a direct focus on the environment requires that factors associated with environmental justice are not central to SEA [14]."That was exactly the case with some projects that conducted EIAs.China should borrow global knowledge in environmental justice and SIA to help with local practices and leapfrog the EIA stage to start SIA instead.Less-developed ethnic regions should leapfrog the "grow first, clean up later" stage and start practicing sustainability, so that further deepening of injustice and sustainability disparities might be avoided [12,13].EIA has been useful in some countries.An important reason is that these countries tend to follow the rule of law and have an independent media and democratic government.Environmental injustice is partly inherited in the undemocratic system.There are limited options China has as major political reforms are unlikely to happen soon.Within the current political system, SIA certainly seems to be more useful than EIA in dealing with justice and equity problems.
Conclusions
In this paper, we have explored the theoretical basis and possibility of developing SIA with an emphasis on justice and equity to meet an urgent need in subtle ethnic regions such as Inner Mongolia, China.In coal mining practices in Inner Mongolia, an EIA was often not conducted for the large number of small scale so called "illegal" mines, or might have been falsified for many other mines through corrupted officials.Our focus, however, has mainly been on those that have conducted official EIAs following government guidelines but still fail to protect the environment partly because the guidelines are inadequate.The government has tightened control over EIA along with more specific and stringent guidelines.
Our research indicates that even if the new EIA guidelines are closely followed social justice and economic equity problems will continue to exist, as EIAs do not deal with any ethnic social problems.The assessment needs to include guidelines for justice and equity, in addition to protect the physical environment.EIAs appear to be inadequate for that.Even though certain elements of environmental justice such as participation can be incorporated into EIA/SEA, these elements are not central to environmental assessments.Environmental assessments are concerned with the environment while sustainability has addition concerns such as social justice and economic equity.Consequently, EIA/SEA is in theory and practice inadequate as a tool in meeting sustainability challenges.
SIA would be in theory and practice a better tool than EIA/SIA for assessing sustainability impact.The assessment needs to involve the effected ethnic groups at the very beginning and careful negotiations are needed so that agreements can be reached.This would be an appropriate approach to conflict resolution to take care of the profound differences and complex relations in the ethnic regions.Public participation in SIA is an effective measure to ensure social and economic justice and equity.SIA should recognize and respect traditional ethnic way of life, which has often been found to be environmentally sustainable.It is important to let the local people make their own decisions concerning the use of their resources.They tend to be the people who care about the environment the most and have the knowledge for sustainability.Social and economic equity, protecting the environment, and respect for nature, culture, and autonomy of local ethnic groups should all be key elements for SIA in the ethnic regions.
However, one practical challenge for SIA is corruption which has been also responsible for the failing of EIA in Inner Mongolia.China's political system presents another challenge to promoting social and economic equity.Political reforms are necessary to enhance ethnic justice and equity.Under the current political system, China should adopt the SIA for ethnic regions while continuing its fight against corruption.
Many of the concepts discussed in this paper are contested, such as sustainability, justice, and even participation.For example, Cooke and Kothari criticize "participatory development's potential for tyranny" as "it can lead to the unjust and illegitimate exercise of power" [76].On the other hand, Hickey and Mohan argue for transforming problematic traditional practices to citizen participation.The contested nature of the terms shows the complexity of the issues and cautions us to avoid drawing simplistic conclusions [77].We hope that our report will provide the initial information and stimulate future research into developing an SIA with justice and equity emphasized for easing ethnic conflicts in ethnic regions.
Figure 1 .
Figure 1.Location of the studied cities and coal mines in Inner Mongolia. | 8,909.6 | 2014-12-01T00:00:00.000 | [
"Economics"
] |
l-cysteine suppresses ghrelin and reduces appetite in rodents and humans
Background: High-protein diets promote weight loss and subsequent weight maintenance, but are difficult to adhere to. The mechanisms by which protein exerts these effects remain unclear. However, the amino acids produced by protein digestion may have a role in driving protein-induced satiety. Methods: We tested the effects of a range of amino acids on food intake in rodents and identified l-cysteine as the most anorexigenic. Using rodents we further studied the effect of l-cysteine on food intake, behaviour and energy expenditure. We proceeded to investigate its effect on neuronal activation in the hypothalamus and brainstem before investigating its effect on gastric emptying and gut hormone release. The effect of l-cysteine on appetite scores and gut hormone release was then investigated in humans. Results: l-Cysteine dose-dependently decreased food intake in both rats and mice following oral gavage and intraperitoneal administration. This effect did not appear to be secondary to behavioural or aversive side effects. l-Cysteine increased neuronal activation in the area postrema and delayed gastric emptying. It suppressed plasma acyl ghrelin levels and did not reduce food intake in transgenic ghrelin-overexpressing mice. Repeated l-cysteine administration decreased food intake in rats and obese mice. l-Cysteine reduced hunger and plasma acyl ghrelin levels in humans. Conclusions: Further work is required to determine the chronic effect of l-cysteine in rodents and humans on appetite and body weight, and whether l-cysteine contributes towards protein-induced satiety.
INTRODUCTION
High protein diets can drive weight loss and support subsequent weight maintenance. 1,2 Identifying the mechanisms by which protein drives satiety and weight loss may help identify therapeutic options for the treatment of obesity. Recent work has suggested that the amino-acid products of protein digestion may be sensed peripherally and centrally to regulate appetite. 3 Amino acids are critical for normal physiological function and many species are able to adapt their protein intake to ensure an adequate supply of essential amino acids. 4 Different types of protein exert variable effects on appetite, [5][6][7][8] which may reflect their varied amino-acid constituents. The discovery of amino-acidsensing G-protein-coupled receptors, and their expression in regions including the gastrointestinal tract, has led to the suggestion that these receptors may sense amino-acid intake to regulate appetite. 9 Leucine, a branched-chain essential amino acid, reduces food intake by modulating mammalian target of rapamycin activity in the hypothalamus and/or the nucleus tractus solitarius (NTS). 10,11 Yet, the effect of leucine alone does not account for the success of high-protein diets, 12 suggesting additional individual amino acids may also contribute. However, the amino acids with anorectic effects and the mechanisms by which they mediate these effects remain to be fully elucidated.
We therefore investigated the effects of oral and intraperitoneal administration of a range of amino acids on food intake in rodents. These studies identified L-cysteine, a conditionally essential amino acid that acts as a precursor for biologically active molecules such as hydrogen sulphide (H2S), glutathione and taurine, as an anorectic agent. We subsequently further investigated the effects of L-cysteine on appetite in rodents and humans and the mechanisms mediating these effects.
Male Wistar rats used in subdiaphragmatic vagal deafferentation (SDA) experiments were maintained in individual cages under controlled temperature (21-23°C) and light (12:12 light-dark cycle, lights on at 0600 hours) with ad libitum access to food (R70, Lactamin, Sweden) and water unless otherwise stated. Experiments were approved by the Gothenburg Animal Review Board (ethical application number 101505).
Feeding studies Animals were orally gavaged or intraperitoneally (IP) injected with vehicle or L-amino acids during the early light phase following an overnight 16-h fast. Food intake was measured at 1 h post administration with any notable spillage accounted for.
Effect of L-cysteine on behaviour and conditioned taste aversion
Behavioural studies were used to investigate the possibility that the administration of L-cysteine and the associated reduction in food intake was secondary to nonspecific behavioural effects. To confirm that L-cysteine did not result in aversive effects, we also investigated whether oral administration of L-cysteine at 1, 2 or 4 mmol kg − 1 resulted in conditioned taste aversion (CTA) using an established method 14 The role of downstream metabolites and the N-methyl D-aspartate (NMDA) receptor in mediating the anorectic effect of L-cysteine L-cysteine is metabolised via a number of pathways (Supplementary Figure 2A). L-cysteine and some of its metabolites have been reported to act as weak NMDA receptor agonists. 16 The roles of cysteine metabolites and the NMDA receptor in L-cysteine-induced hypophagia and food intake were therefore investigated (see Supplementary Material).
Effect of L-cysteine on cFos immunoreactivity Rats were fasted overnight before receiving an oral gavage of water, 4 mmol kg − 1 L-cysteine or 4 mmol kg − 1 glycine (n = 4-6). Glycine was used as a negative control as it was previously found to have no effect on food intake ( Figure 1a). Two animals were IP injected with hypertonic saline as positive controls for the staining procedure.
Transcardial perfusion, tissue preparation and immunohistochemistry were carried out as previously described, 17 with animals killed 90 min post gavage.
Cell bodies positive for cFos-like immunoreactivity (cFLI) were counted bilaterally from matched sections in hypothalamic and brainstem nuclei by an observer blinded to the treatment. Nuclei were defined in relation to anatomical landmarks according to the rat brain atlas of Paxinos and Watson. 18 Gastric emptying Gastric emptying was measured using an established method. 19,20 Rats were fasted overnight, then received an intraperitoneal injection of saline, 2 mmol kg − 1 L-cysteine, 2 mmol kg − 1 glycine (negative control) or 10 nmol kg − 1 A71623 (cholecystokinin (CCK)-A receptor agonist, positive control) (n = 4-7) followed immediately by an oral gavage of 2 ml of a 1.5% methylcellulose (4000cP), 0.05% phenol red solution. Animals were culled by decapitation 30 min later and the stomach removed for quantification of the remaining phenol red.
Subdiaphragmatic vagal deafferentation surgery
Rats were adapted to a nutritionally complete liquid diet (Nestlé Nutrition, Resource Energy, 1.5 kcal ml − 1 ) for 3 days before undergoing SDA or sham surgery. SDA surgery involved left intracranial rhizotomy and transection of the dorsal subdiaphragmatic trunk of the vagus, resulting in 50% deafferentation and complete subdiaphragmatic vagal deafferentation. 21,22 Post surgery, rats received liquid diet for 2 days and then a semiliquid diet for 4 days, and were given 10 days to fully recover. The effect of oral gavage of 4 mmol kg − 1 L-cysteine on food intake was then measured.
Effect of L-cysteine on gut hormone release Rats were fasted overnight before receiving an oral gavage of water or 4mmol kg − 1 L-cysteine (n = 7-8) or intraperitoneal injection of saline or 2 mmol kg − 1 L-cysteine (n = 6-8). Animals were returned to their home cages and 30 min post administration were culled by decapitation and trunk blood collected in lithium heparin tubes containing 0.6 mg aprotinin. Plasma was separated by centrifugation and then frozen and stored at − 20°C for analysis. After centrifugation an aliquot of plasma was acidified with HCl to a concentration of 1N before freezing.
Repeated administration
Adult male Wistar rats were orally gavaged three times throughout the dark phase (at 1900, 2300 and 0300 hours) with water, 4 mmol kg − 1 L-cysteine or 4 mmol kg − 1 glycine (negative control) (n = 6-9). Body weight and food intake were measured daily at the onset of the dark phase.
Male C57BL/6 mice aged 6 weeks were maintained in group housing with ad libitum access to high-fat diet (D12492, Research Diets, New Brunswick, USA; containing 60% of its energy as fat) and water for 14 weeks, reaching an average weight of 39.1 g. Animals were then individually housed and allowed 1 week to acclimatise before experiments commenced as for rats above (n = 8-10).
Clinical studies
Study participants. Human studies were conducted following ethical approval (West London Research Ethics Committee 1, London, UK) and according to the principles of the Declaration of Helsinki. All participants gave their written informed consent before study enrolment.
Healthy male (n = 2, 1 Caucasian, 1 South Asian) and female (n = 5, all Caucasian) subjects with a mean (± s.d.) age of 35.9 ±10.9 years and body mass index of 23.7 ± 4.3 kg m − 2 who had been weight stable for the 3 months were recruited.
Study design. Participants attended three study visits and reported to the clinical research facility at 0830 hours having fasted from 2100 hours the night before. On each visit, participants were cannulated in the antecubital fossa for serial blood sampling and asked to consume a 200-ml drink containing either vehicle alone, 200 ml containing 0.07 g kg − 1 L-cysteine or 200 ml containing 0.07 g kg − 1 glycine in a single-blind (participant) randomised order. Blood samples were collected at 15-min intervals starting at t = − 15 min for 2.5 h after dosing. Participants were asked to complete visual analogue scales (VAS) at each time point to assess hunger, fullness, nausea, anxiety, irritability and sleepiness. Participants were asked to report any additional side effects during and after the visits. Baseline plasma and serum was assayed for routine clinical chemistry (liver and kidney function, calcium and electrolytes). Plasma acyl ghrelin was measured using a commercially available ELISA (Merck Millipore, MA, USA).
Statistical analysis
Acute food intake and area under the curve (AUC) data is expressed as mean ± s.e.m. and was analysed by one-way analysis of variance (ANOVA) and post hoc Tukey's test. CTA data were analysed using one-way ANOVA with post hoc Dunnett's test. Data from transgenic mice were analysed by paired t-test. CLAMS data, cumulative data and data from GPRC6A knockout mice were analysed by two-way repeated measures ANOVA with a post hoc test with Bonferroni correction. Behavioural data were analysed by Mann-Whitney test and cFos data were analysed by Kruskal Wallis with Dunn's post hoc comparison.
All additional methods are included in Supplementary Material.
RESULTS
The effect of L-amino acids on food intake in rats To investigate the anorectic potential of specific L-amino acids in vivo, the effect on food intake following oral and intraperitoneal administration of a range of amino acids was examined in rats. Of the amino acids investigated, L-cysteine reduced food intake to the greatest extent following both oral and intraperitoneal administration ( Figure 1a). We therefore decided to further investigate the effects of L-cysteine on food intake.
L-cysteine decreases food intake in rodents Oral administration of L-cysteine dose-dependently decreased food intake in rats and mice (Figure 1b and d). Food intake following 4 mmol kg − 1 L-cysteine was significantly reduced, compared with food intake following saline or 4 mmol kg − 1 D-cysteine 0-1 h following administration (P o 0.01) in rats, demonstrating an enantiomer-specific effect (Figure 1b). Intraperitoneal administration of L-cysteine also dose-dependently decreased food intake in rats and mice (Figures 1c and e). Table 1) without altering behaviours indicative of illness or nausea. Intraperitoneal administration of 2 mmol kg − 1 L-cysteine to rats and mice did not cause any behaviour indicative of illness or nausea compared with control (Supplementary Tables 2 and 3).
Oral gavage administration of L-cysteine at doses up to 4 mmol kg − 1 did not cause CTA in rats (Figure 1f). In addition, our data suggested that L-cysteine does not mediate its anorectic effects via the NMDA receptor, GPRC6a or via downstream metabolites (see supplementary results and Supplementary Figures 2 and 3).
L-cysteine increases neuronal activation in the rat brainstem Oral administration of 4 mmol kg − 1 L-cysteine significantly reduced cFLI in the lateral hypothalamic area (LHA) (P o0.05) (Figure 2a). However, there was no significant difference in cFLI in the LHA between animals treated with L-cysteine and glycine (used as a negative control) (Figure 1a), suggesting the reduction in cFLI in the LHA was not specifically related to the anorectic effects of L-cysteine. Oral administration of 4mmol kg − 1 L-cysteine significantly increased cFLI in the area postrema compared with water-treated controls (P o 0.05) (Figure 2a, representative sections Figure 2b). There was a trend for increased cFLI in the NTS (Figure 2a, representative sections Figure 2c). To investigate whether L-cysteine mediated its effect on food intake centrally, we measured food intake following administration of L-cysteine into the lateral ventricle. Central anorectic doses resulted in severe behavioural abnormalities, including seizure-like activity. Such behavioural effects were not previously observed following oral or IP administration of anorectic doses (Supplementary Tables 1-3). We therefore hypothesised that the reduction in food intake following central administration of high doses of L-cysteine was secondary to these behavioural abnormalities and that these effects were caused by high central concentrations of L-cysteine activating central NMDA receptors. Accordingly, NMDA receptor antagonism completely blocked the reduction in food intake and the behavioural abnormalities observed following L-cysteine administration (P o 0.05) (Supplementary Figure 4B). L-cysteine reduces gastric emptying Gastric emptying and gastric distension can affect satiety. Therefore, we investigated the effect of L-cysteine on gastric emptying. Intraperitoneal administration of 2 mmol kg − 1 L-cysteine significantly reduced gastric emptying 30 min after administration to rats (P o 0.001) (Figure 3a). Administration of 2 mmol kg − 1 glycine, used as a negative control, did not affect gastric emptying. To investigate whether CCK, a potent inhibitor of gastric emptying, was responsible for the reduction in food intake and delayed gastric emptying, following administration of L-cysteine, a CCK-1 receptor antagonist was used. The CCK-1 receptor antagonist devazepide (0.5 mg kg − 1 ) inhibited the effect of the CCK-1 receptor agonist, A71623, on food intake and gastric emptying in mice (Figures 3b and d) but did not inhibit the effect of L-cysteine on food intake or gastric emptying (Figures 3c and d). The degree of gastric distension is primarily communicated to the brain through vagal afferents. The role of vagal afferents in mediating the effect of L-cysteine on food intake was investigated in rats that had undergone subdiaphragmatic vagal deafferentation. There was no significant difference in the anorectic response of sham and SDA animals following oral administration of L-cysteine, suggesting vagal afferents are not essential for the anorectic effects of L-cysteine (Figure 3e). L-cysteine suppresses plasma ghrelin Thirty minutes after oral administration of 4 mmol kg − 1 L-cysteine, plasma levels of acyl ghrelin were significantly reduced compared with water-treated rats (P o 0.05) (Figure 4a). IP administration of 2 mmol kg − 1 L-cysteine also significantly reduced plasma acyl ghrelin levels compared with saline-treated animals (P o 0.001) (Figure 4c), but not GLP-1 or PYY levels following oral (Figure 4b) or IP (Figure 4d) administration. L-cysteine did not suppress food intake in transgenic ghrelin-overexpressing mice (Figure 4e). L-cysteine suppresses hunger and plasma ghrelin in humans L-cysteine reduced feelings of hunger compared with glycinetreated controls as measured by visual analogue scales (VAS) (P o 0.05) (Figure 5a). There was a trend for a decrease in VAS scores for 'How pleasant would it be to eat' (Figure 5b) and 'How much could you eat' (Figure 5c). L-cysteine significantly reduced plasma acyl ghrelin levels at 45 min post administration compared with levels following vehicle and glycine treatment (P o0.05) (Figure 5d), the time point at which the largest cysteine-induced change from baseline for 'How pleasant would it be to eat' and 'How much could you eat' occurred. L-cysteine had no effect on plasma GLP-1 and PYY (Supplementary Figure 5). No significant side effects were reported during or after the study (Supplementary Figure 6).
Repeated administration of L-cysteine reduces cumulative food intake Our data demonstrated that L-cysteine could acutely suppress appetite. We subsequently investigated whether the anorectic effect of L-cysteine was sustained following repeated administration in rodents. Repeated administration of L-cysteine over a period of five nights significantly reduced cumulative food intake in lean rats compared with water and glycine-treated controls (P o 0.001) (Figure 6a). However, this change in food intake did not result in a significant difference in body weight gain between the groups (Figure 6b).
To investigate whether L-cysteine could reduce body weight in an obese model, we used the same protocol in diet-induced obese (DIO) mice. L-cysteine-treated animals had lost significantly more weight than water and glycine-treated controls on days 2 and 3 (P o 0.05) (Figure 6d). Cumulative food intake was also significantly lower in the L-cysteine group on days 2 (Po 0.05) and 3 (P o 0.01) (Figure 6c).
DISCUSSION
Our data identify a novel anorectic effect for the amino acid L-cysteine. L-cysteine reduced food intake in rodents and hunger in a small scale study in humans, and reduced plasma levels of the orexigenic gut hormone acyl ghrelin in both rodents and humans.
Jordi et al. 25 recently described the effects of an oral dose of 6.7 mmol kg − 1 of the 20 proteinogenic amino acids on food intake, and identified L-arginine, L-lysine and L-glutamic acid as the most anorectic amino acids. However, our data suggest that at the lower doses we used (oral gavage: 4 mmol kg − 1 , intraperitoneal: 2 mmol kg − 1 ), L-cysteine is more anorectic than L-arginine and L-lysine. We found that L-cysteine reduced food intake in a dosedependant manner. The amounts of L-cysteine administered were higher than a rodent would be expected to consume in a single bout of eating, and thus represent a pharmacological effect. If L-cysteine does have a physiological effect on appetite, then it is likely to act in concert with other products of protein digestion, and thus the effects of L-cysteine per se may be difficult to detect. However, if the precise mechanisms mediating the anorectic effects of L-cysteine are characterised, it would be interesting to investigate whether blocking these mechanisms can inhibit protein-induced satiety and the long-term metabolic effects of a high protein diet.
L-cysteine was effective at doses that did not induce conditioned taste aversion or evoke abnormal behaviour. These data suggest that L-cysteine does not result in unpleasant postingestive consequences that might result in a nonspecific reduction in food intake. However, there may be an effect of L-cysteine at the highest dose tested in the CTA protocol, though it does not achieve statistical significance, and the post-injection increase in locomotor activity observed in saline-treated mice in the CLAMS cages was suppressed in cysteine-treated mice, which may suggest a degree of treatment associated discomfort. Further work would thus be useful to determine whether higher doses of L-cysteine are associated with aversive effects. An isomolar dose of D-cysteine did not reduce food intake following oral or intraperitoneal administration. This L-enantiomer specificity may indicate a potential role for promiscuous amino-acid-sensing receptors such as T1R1/T1R3, CaSR and GPRC6A, which are reported to be activated by L-but not D-amino acids. [26][27][28] However, L-cysteine reduced food intake in GPRC6A knockout mice to the same extent as wild type, suggesting GPRC6A is not necessary for the anorectic effects of L-cysteine. L-cysteine induces a strong T1R1/T1R3mediated cellular response in vitro, but other amino acids that also induce a strong response, such as serine and threonine, 26 did not have significant effects on food intake. L-cysteine can also activate the CaSR. However, histidine is reported to induce the strongest CaSR-mediated cellular response of the proteinogenic amino acids, but did not reduce food intake in our studies. These data suggest that the promiscuous amino-acid receptors T1R1/ T1R3, CaSR or GPRC6A are unlikely to mediate the effects of L-cysteine on appetite, though further studies are needed to conclusively demonstrate that T1R1/T1R3 and CaSR are not involved. L-cysteine increased the number of cFos-positive cells in the AP, suggesting the brainstem may mediate its effects on food intake. L-cysteine reduced gastric emptying via a CCK-1-receptorindependent mechanism and reduced food intake independently of the CCK-1 receptor and vagal afferents. These results accord with those published by Jordi et al. 25 for the amino acids L-arginine and L-glutamic acid, suggesting that there may be similarities between the mechanisms by which these amino acids and L-cysteine mediate their anorectic effect.
L-cysteine may also work by modulating gastrointestinal hormone release. L-cysteine did not alter circulating PYY or GLP-1 concentrations in rodents or humans. The assays used measured total PYY and GLP-1 immunoreactivity, and it is thus possible that specifically measuring the active forms of these hormones might have revealed an effect, though the levels of these active forms generally correlate with the total circulating concentrations. 29,30 L-Cysteine did reduce circulating plasma acyl ghrelin levels in both rodents and humans. This reduction in ghrelin coincided with the greatest decrease in appetite in humans, and the effect of L-cysteine on food intake was attenuated in transgenic ghrelin overexpressing mice. The mechanism regulating ghrelin secretion remains unclear. However, other hormones, including bombesin, somatostatin, CCK, GLP-1 and insulin, have all been linked to the suppression of acyl ghrelin release. The data presented in this paper suggest that the effect of L-cysteine on food intake does not involve GLP-1 or CCK. Interestingly, the CaSR has recently been localised to X/A cells, where it can have both inhibitory and stimulatory actions on ghrelin release. 31 Our data demonstrate that a single dose of L-cysteine can reduce appetite in rodents, and can reduce subjective feelings of appetite in a small scale human study. These effects may, at least in part, be mediated by a reduction in circulating plasma levels of the orexigenic hormone acyl ghrelin. To assess whether this reduction in appetite could be maintained and potentially modulate body weight we investigated the effect of repeated administration in lean rats and obese mice. A previous study has demonstrated that supplementing the diet of rats with 1 or 2% L-cysteine can reduce food intake and body weight, though this study did not investigate whether these effects might be mediated by an aversion to the taste of the supplemented diets. 32 Repeated administration of L-cysteine significantly reduced cumulative food intake in lean animals but had no significant effect on body weight over the duration of the study. To determine whether L-cysteine could modulate body weight under obesogenic conditions we investigated the effect of repeated administration of L-cysteine in DIO mice. L-cysteine caused an initial modest and statistically significant decrease in food intake and body weight in DIO mice. However, the magnitude of this effect appeared to decrease after 3 days of treatment. Weight loss was also observed in control groups, suggesting the administration protocol may have induced undue stress. It is possible that alternative administration protocols might result in more sustained and reliable effects on food intake and body weight. However, higher doses of cysteine were associated with toxicity. As previously mentioned, L-cysteine can have NMDA-receptor-mediated excitatory actions. Dose toxicity studies of L-cysteine have previously been published demonstrating toxic effects after 28 day intravenous administration of 1000 mg kg − 1 day − 1 . 33 Our acute studies used doses that were significantly less than this and our data suggest they were well tolerated.
It has been reported that circulating levels of L-cysteine and its oxidised forms correlate with body mass index and obesity. 34 However, it is unclear whether this is a causal or consequential factor, and whether the observed differences in circulating L-cysteine reflect differences in cysteine intake or metabolism. A previous study has shown that supplementing rats with L-cysteine can prevent the weight loss associated with a methioninedeficient diet, which may reflect an attenuation of the changes to protein metabolism that this diet results in. This study did not, however, find that supplementation with cysteine administration increased the body weight of rats on a nutrient complete diet. 35 Cysteine supplementation has also been reported to lessen the age-related decline in food intake in rats, suggesting an appetite stimulating effect in older animals. 36 Our animals, though adult, were still growing and it is possible that this may also influence their response to L-cysteine. Collectively, these studies suggest that cysteine may have different effects on food intake dependent on the nutritional status and age of the animals. Cysteine administration and dietary supplementation have also been reported to have beneficial effects on glucose levels and insulin sensitivity, 37,38 though, conversely, L-cysteine has also been shown to have inhibitory effects on glucose-stimulated insulin release from pancreatic β-cells in vitro. 39 Our studies found that L-cysteine transiently increased RER, suggesting cysteine stimulates glucose utilisation and reduces fat utilisation. It would be interesting in future studies to further investigate the effects of L-cysteine on glucose homoeostasis in animals and man.
In summary, our studies identify L-cysteine as an amino acid with potent acute anorectic effects. Further work is required to Figure 5. L-cysteine suppresses appetite and ghrelin release in humans. (a-c) Visual analogue scales and area under the curve following ingestion of vehicle, 0.07 g kg − 1 L-cysteine or 0.07 g kg − 1 glycine (n = 7) (d) The change in plasma ghrelin following oral ingestion of vehicle, 0.07 g kg − 1 L-cysteine or 0.07 g kg − 1 glycine (n = 7). Data are expressed as mean ± s.e.m. *Po 0.05.
investigate whether the mechanisms responsible for these effects can be exploited therapeutically. | 5,822.6 | 2014-09-15T00:00:00.000 | [
"Biology",
"Medicine"
] |
Multiparametric Analytical Solution for the Eigenvalue Problem of FGM Porous Circular Plates
Free vibration analysis of the porous functionally graded circular plates has been presented on the basis of classical plate theory. The three defined coupled equations of motion of the porous functionally graded circular/annular plate were decoupled to one differential equation of free transverse vibrations of plate. The one universal general solution was obtained as a linear combination of the multiparametric special functions for the functionally graded circular and annular plates with even and uneven porosity distributions. The multiparametric frequency equations of functionally graded porous circular plate with diverse boundary conditions were obtained in the exact closed-form. The influences of the even and uneven distributions of porosity, power-law index, diverse boundary conditions and the neglected effect of the coupling in-plane and transverse displacements on the dimensionless frequencies of the circular plate were comprehensively studied for the first time. The formulated boundary value problem, the exact method of solution and the numerical results for the perfect and imperfect functionally graded circular plates have not yet been reported.
Introduction
Functionally graded materials (FGMs) are a class of composite materials, which are made of the ceramic and metal mixture such that the material properties vary continuously in appropriate directions of structural components. In the processes of preparing functionally graded material, micro-voids and porosities may appear inside material in view of the technical issues. Zhu et al. [1] reported that many porosities appear in material during the functionally graded material preparation process by the non-pressure sintering technique. Wattanasakulpong et al. [2] reported that many porosities exist in the intermediate area of the functionally graded material fabricated by utilizing a multi-step sequential infiltration technique because of the problem with infiltration of the secondary material into the middle area. In that case, less porosities appear in the top and bottom area of material because infiltration of the material is easier in these zones.
In recent years, a significant number of articles about the free vibrations of porous functionally graded (FGM) plates have appeared in the literature due to their wide applications in many fields of engineering such as aeronautical, civil, mechanical, automotive, and ocean engineering. The gradation of properties in functionally graded materials and the diverse distributions of porosity have a significant effect on distributions of the mass and the stiffness of plates and therefore their natural frequencies. The knowledge about influence of distribution of the material properties on dynamics of plates is very important because it allows us to predict the frequency of plates and find their optimal parameters. Additionally, the comprehensive investigation of the effect of functionally graded material with porosities and diverse boundary conditions on the natural frequencies of plates is the first important step to designing their safe and rational active vibration control system.
We note that, in most engineering applications, the classical plate theory is often used to analyze the dynamic behavior of thin lightweight plates. It is impossible to review all works focused on mechanical behavior of porous FGM structures; then, we limit ourselves to chronological review of some of the works focused on mechanical behavior of porous and porous FGM plates that are closely related to our work.
Jabbari et al. [3] studied the buckling of thin saturated porous circular plate with the layers of piezoelectric actuators. Buckling load was obtained for clamped circular plate under uniform radial compressive loading. The same authors presented the buckling analysis of clamped thin saturated porous circular plate with sensor-actuator layers under uniform radial compression [4,5] investigated thermal and mechanical stability of clamped thin saturated and unsaturated porous circular plates with piezoelectric actuators. Rad and Shariyat [6] solved the three-dimensional magneto-elastic problem for asymmetric variable thickness porous FGM circular supported on the Kerr elastic foundation using the differential quadrature method and the state space vector technique. Barati et al. [7] studied buckling of functionally graded piezoelectric rectangular plates with porosities based on the four-variable plate theory. Mechab et al. [8] studied free vibration of the FGM nanoplate with porosities resting on Winkler and Pasternak elastic foundation based on the two-variable plate theory. Mojahedin et al. [9] analyzed buckling of radially loaded clamped saturated porous circular plates based on higher order shear deformation theory. Wang and Zu [10] analyzed vibration behaviors of thin FGM rectangular plates with porosities and moving in the thermal environment using the method of harmonic balance and the Runge-Kutta technique. Gupta and Talha [11] analyzed flexural and vibration response of porous FGM rectangular plates using nonpolynomial higher-order shear and the normal deformation theory. Wang and Zu [12] analyzed vibration characteristics of longitudinally moving sigmoid porous FGM plates based on the von Kármán nonlinear plate theory. Ebrahimi et al. [13] studied free vibration of smart shear deformable rectangular plates made of porous magneto-electro-elastic functionally graded materials. Feyzi and Khorshidvand [14] studied axisymmetric post-buckling behavior of a saturated porous circular plate with simply supported and clamped boundary conditions. Wang and Zu [15] studied large-amplitude vibration of thin sigmoid functionally graded plates with porosities. Wang et al. [16] studied vibrations of longitudinally travelling FGM porous thin rectangular plates using the Galerkin method and the four-order Runge-Kutta method. Ebrahimi et al. [17] used a four-variable shear deformation refined plate theory for free vibration analysis of embedded smart rectangular plates made of magneto-electro-elastic porous functionally graded materials. Shahverdi and Barati [18] developed nonlocal strain-gradient elasticity model for vibration analysis of porous FGM nano-scale rectangular plates. Shojaeefard et al. [19] studied free vibration and thermal buckling of micro temperature-dependent FGM porous circular plate using the generalized differential quadrature method. Barati and Shahverdi [20] presented a new solution to examine large amplitude vibration of a porous nanoplate resting on a nonlinear elastic foundation modeled based on the four-variable plate theory. Kiran et al. [21] studied free vibration of porous FGM magneto-electro-elastic skew plates using the finite element formulation. Cong et al. [22] presented an analytical approach to buckling and post-buckling behavior analysis of FGM rectangular plates with porosities under thermal and thermomechanical loads based on the Reddy's higher-order shear deformation theory. Kiran and Kattimani [23] studied free vibration and static behavior of porous FGM magneto-electro-elastic rectangular plates using the finite element method. Arshid and Khorshidvand [24] analyzed free vibration of saturated porous FGM circular plates integrated with piezoelectric actuators using the differential quadrature method. Shahsavari et al. [25] used the quasi-3D hyperbolic theory for free vibration of porous FGM rectangular plates resting on Winkler, Pasternak and Kerr foundations.
Contribution of Current Study
The aim of the paper is to formulate and solve the boundary value problem for the free axisymmetric and non-axisymmetric vibrations of FGM circular plate with even and uneven porosity distributions and diverse boundary conditions. The defined coupled equations of motion for the porous FGM circular plate were decoupled based on the properties of physical neutral surface. The general solution of the decoupled equation of motion of a porous FGM circular plate was defined as the linear combination of the Bessel functions functionally dependent on the material parameters. The obtained characteristic equations allow us to comprehensively study the effect of the distribution of material parameters and the formulated boundary conditions on the natural frequencies of axisymmetric and non-axisymmetric vibrations of the circular plates without the necessity to solve a new eigenvalue problem for plates with a steady distribution of parameters.
Authors of many previous papers (e.g., [26][27][28][29][30]) presented the free transverse vibration analysis of the perfect (without porosity) FGM circular plates using the equation of motion including only the coefficient of the pure bending stiffness varying in the thickness direction of the plate. The coefficients of the extensional stiffness and the bending-extensional coupling stiffness were neglected because the effect of the coupled in-plane and transverse displacements was omitted for obtaining simplified solution to the eigenvalue problem.
In the present paper, the obtained equation of motion of the perfect and imperfect FGM circular plates includes the coefficients of extensional stiffness, bending-extensional coupling stiffness and bending stiffness, which appeared by decoupling the in-plane and transverse displacements using the properties of the physical neutral surface. The differences between the values of numerical results for the eigenfrequencies of the perfect FGM circular plate with and without the coupling effect are shown for diverse boundary conditions.
To the best knowledge of authors, there are no studies which focus on the free axisymmetric and non-axisymmetric vibrations of FGM and porous FGM circular plates. In particular, the obtained exact solution, the multiparametric frequency equations and the calculated eigenfrequencies for the free vibrations of perfect and imperfect FGM circular plates with clamped, simply supported, sliding and free edges have not yet been reported. The present paper fills this void in the literature.
FGM Circular Plate with Porosities
Consider a porous FGM thin circular plate with radius R and thickness h presented in the cylindrical coordinate (r, θ, z) with the z-axis along the longitudinal direction. The geometry and the coordinate system of the considered circular plate are shown in Figure 1. The FGM plate contains evenly (e) and unevenly (u) distributed porosities along the plate's thickness direction. The cross-sections of the FGM circular plates with the two various types of distribution of porosities are shown in Figure 2. The volume fraction of the ceramic part changes continually along the thickness and can be defined as [31] ( , ) = + , ≥ 0, where is the power-law index of the material. A change in the power of functionally graded material results in a change in the portion of the ceramic and metal components in the circular plate. We assume that the composition is varied from the bottom surface ( = −ℎ/2) to the top surface ( = ℎ/2) of the circular plate. After substituting the variation of the ceramic part ( , ) from Equation (3) For the functionally graded circular plate with unevenly (u) distributed porosities [16], the material properties in Equations (4) can be replaced by the following forms: The functionally graded material is a mixture of a ceramic (c) and a metal (m). If the volume fraction of the ceramic part is V c and the metallic part is V m , we have the well-known dependence: Based on the modified rule of mixtures [16] with the porosity volume fraction ψ (ψ 1), the Young's modulus, the density and the Poisson's ratio for evenly (e) distributed porosities over the cross-section of the plate have the general forms: The volume fraction of the ceramic part changes continually along the thickness and can be defined as [31] where g is the power-law index of the material. A change in the power g of functionally graded material results in a change in the portion of the ceramic and metal components in the circular plate. We assume that the composition is varied from the bottom surface (z = −h/2) to the top surface (z = h/2) of the circular plate. After substituting the variation of the ceramic part V c (z, g) from Equation (3) into Equation (2), the material properties of the functionally graded circular plate with evenly distributed porosities are defined in the final form: For the functionally graded circular plate with unevenly (u) distributed porosities [16], the material properties in Equations (4) can be replaced by the following forms: In this case, the porosity linearly decreases to zero at the top and the bottom of the cross-section of the plate. The effect of Poisson's ratio is much less on the mechanical behavior of FGM plates than the Young's modulus [32,33], thus the Poisson's ratio will assume to be constant ν e = ν u = ν in the whole volume of the porous FGM circular plate.
Constitutive Relations and Governing Equations
In most practical applications, the ratio of the radius R to the thickness h of the plate is more than 10; then, the assumptions of classical plate theory (CPT) are applicable and rotary inertia and shear deformation can be successfully omitted.
For a thin circular plate, the displacement field has the form: where u, v and w are the radial, circumferential and transverse displacements of the midplane (z = 0) of the plate at time t. Based on the linear strain-displacement relations and Hook's law, the resultant forces and the moments for porous FGM circular plate (i = {e, u}) can be expressed in the following form [34]: where are the in-plane strains and curvatures of midplane, respectively. We assume that the material properties are varied from the bottom surface (z = −h/2) to the top surface (z = h/2) of the plate; then, the coefficients of extensional stiffness A i kl , bending-extensional coupling stiffness B i kl and bending stiffness D i kl can be defined for FGM circular plate with i-th distribution of porosities in the general forms: Additionally, the stiffness coefficients from Equation (9) satisfy the equations The resultant forces and the moments can be also defined by where the stress components and the strain components have the form:
Coupled Equations of Motion
Using the Hamilton's principle [34] and ignoring in-plane inertia forces, the equilibrium equations of motion of the porous FGM thin circular plate have the forms: where the resultants forces and the moments can be obtained using Equations (7) and (8), and can be presented in the following form: In Equation (14c), ρ i is the averaged material density of the FGM circular plate for the i-th distribution of porosities presented in the general form: Substituting Equations (15) and (16) into Equation (14), and using relations given in Equation (10), we get the coupled equilibrium equations of motion of the porous FGM circular plate presented in terms of displacement components: ∂θ 2 is the Laplace operator presented in polar coordinates and ε = ∂u ∂r
Decoupled Equation of Motion
Equation (18) show that the in-plane stretching and bending are coupled because the reference surface is a geometrical midplane. We can eliminate this coupling by introducing the physical neutral surface, where the in-plane displacements will be omitted. The in-plane displacements of the midplane can be expressed in terms of the slopes of deflection in the following form: where z 0 is the distance between the midplane and the physical neutral surface. By substituting Equation (20) into Equations (6) and (15) and introducing z = z 0 , the in-plane displacements u, v and the in-plane forces N i rr , N i θθ , N i rθ must equal zero based on properties of the physical neutral surface. By substituting Equation (20) into Equation (15) and assuming that the Poisson's ratio is constant, distance z 0 can be obtained from relations: where By substituting Equations (20) and (23) into Equations (18c) and (19), we obtain the decoupled equation of transverse vibration of the porous FGM thin circular plate in the form: where
Solution of the Problem
Taking into account a harmonic solution, the small vibration of the porous FGM circular plate may be expressed as follows: where W(r) is the radial mode function as the small deflection compared with the thickness h of the plate, n is the integer number of diagonal nodal lines, θ is the angular coordinate, and ω is the natural frequency. By substituting Equation (26) into Equation (24) using the dimensionless coordinate ξ = r/R (0 < ξ ≤ 1), the general governing differential equation assumes the following form: where L n (·) is the differential operator defined by The calculated general forms of material density ρ i and the coefficients of extensional stiffness A i 11 , extensional-bending coupling stiffness B i 11 and bending stiffness D i 11 for the porous FGM circular plate are presented in the following general forms: where x = y = 1 for the even distribution (i = e) of porosities and x = 2, y = 4 for the uneven (i = u) distribution of porosities. The extensional-bending coupling stiffness B i 11 has the same form for both types of porosities.
By substituting the obtained forms from Equation (29) into Equation (27), the generalized ordinary differential equation with variable coefficients is obtained as: where The boundary conditions on the outer edge (ξ = 1) of the porous FGM circular plate may be one of the following: clamped, simply supported, sliding supported and free. These conditions may be written in terms of the radial mode function W(ξ) in the following form: The static forces M(W) and V(W) are the normalized radial bending moment and the normalized effective shear force, respectively.
The one multiparametric general solution of the defined differential Equation (30) for FGM circular/annular plates with the two various types of distribution of porosities (i = {e, u}) is obtained in the following form: where n (n ∈ N + ) is the number of nodal lines, C 1 , C 2 , C 3 , C 4 are the constants of integration, ξ are the Bessel functions as particular solutions of Equation (30), and M i is the generalized multiparametric function defined as: where The functions J n λ √ M i 1/2 ξ and I n λ (30) where By applying the general solution (44) and the boundary conditions (37-40) as well as assuming the existence of the non-trivial constants C 1 and C 2 , the general nonlinear multiparametric characteristic equations of the FGM circular plate with the two various types of distribution of porosities were obtained in the form: • Clamped (C): • Simply supported (SS): • Sliding supported (S): If x = y = 1 is introduced to Equations (42) and (45), then the obtained characteristic Equation (46) will be valid for the FGM circular plates with even (i = e) distribution of porosities. If x = 2, y = 4 is introduced to Equations (42) and (45), then the obtained characteristic equations (46) will be valid for the FGM circular plates with uneven (i = u) distribution of porosities.
The general solution for the perfect (without porosity) FGM circular plate can be obtained from Equation (44) and presented in the following form: After calculations, the final form of general solution for the perfect FGM circular plate is expressed as where The general solution for the perfect FGM circular plate with negligible effect of the coupling in-plane and transverse displacements ( A i 11 → 0, B i 11 → 0) has the form: where
Parametric Study
The every single fundamental and lower dimensionless frequencies of the free axisymmetric and non-axisymmetric vibrations of porous FGM circular plate were calculated for diverse values of the power-law index g, the porosity volume fraction ψ and different boundary conditions using the Newton method aided by a calculation software.
The Poisson's ratio is taken as ν = 0.3 and its variation is assumed to be negligible. In the present study, aluminum is taken as the metal and alumina is taken as the ceramic material. The values of Young's modulus and densities are taken as follows: E m = 70 GPa, E c = 380 GPa, ρ m = 2702 kg/m 3 , ρ c = 3800 kg/m 3 .
Imperfect FGM Circular Plate
The obtained numerical results for the first three dimensionless frequencies λ = ωR 2 ρ c h/D c of the axisymmetric (n = 0) and non-axisymmetric (n = 1) vibrations of the perfect (ψ → 0) homogeneous (g → 0) circular plate with various boundary conditions are presented in Table 1 and compared with the results obtained by Wu and Liu [35], Yalcin et al. [36], Zhou et al. [37] and Duan et al. [38]. The obtained numerical results for the perfect homogeneous circular plate are in excellent agreement with those available in the literature. The calculated fundamental dimensionless frequencies λ 0 of the axisymmetric (n = 0) and non-axisymmetric (n = 1) vibrations of the FGM circular plate with evenly (i = e) and unevenly (i = u) distributed porosity are presented in Tables The dependences of the fundamental dimensionless frequencies λ 0 of the free axisymmetric (n = 0) and non-axisymmetric (n = 1) vibrations of the circular plate on selected values of the power-law index and the porosities volume fraction are presented in Figures 3-6 as the two-dimensional (2D) and three-dimensional (3D) graphs for the two various types of distribution of porosity and all considered boundary conditions.
Perfect FGM Circular Plate
The obtained general solution (48) and the defined boundary conditions (37 ÷ 40) were used to calculate the first three dimensionless frequencies λ of the axisymmetric (n = 0) and non-axisymmetric (n = 1) vibrations of the perfect (ψ = 0) FGM circular plate with various boundary conditions.
The obtained numerical results are presented in Tables 6-9 for selected values of the power-law index g. Numerical results obtained for the clamped and simply supported plates (Tables 6 and 7) were compared with the results presented in the paper [27], where the effect of the coupling in-plane and transverse displacements was omitted. The fundamental dimensionless frequencies of the perfect FGM circular plates with and without the effect of the coupling in-plane and transverse displacements obtained for selected values of the power-law index and diverse boundary conditions are presented in Table 10. Additionally, the differences (errors) between obtained results were calculated according to the equation: where λ P 0 and λ Y 0 are the fundamental dimensionless frequencies of the perfect FGM circular plate without and with effect of the coupling in-plane and transverse displacements, respectively. Figure 7 presents the dependence of the differences (errors) between obtained results for the power-law index g ≥ 0. presents the dependence of the differences (errors) between obtained results for the power-law index ≥ 0.
Imperfect FGM Circular Plate
The numerical results for the fundamental dimensionless frequencies of the porous FGM Figure 7. The dependence of the differences (errors) between the fundamental dimensionless frequencies of the perfect FGM circular plate without (λ P 0 ) and with (λ Y 0 ) effect of the coupling in-plane and transverse displacements for diverse values of the power-law index.
Imperfect FGM Circular Plate
The numerical results for the fundamental dimensionless frequencies of the porous FGM circular plates presented in Tables 2-5 The observed dependences exist because of the diverse influence of porosity distributions, values of the power-law index and the porosity volume fraction on decreasing (increasing) the ratios of mass to stiffness of the considered circular plates. The all observed dependences are independent of the considered boundary conditions which influence only the values of the dimensionless frequencies of the plate.
Perfect FGM Circular Plate
It can be observed that the values of dimensionless frequencies of the perfect FGM circular plates obtained by omitting the effect of coupling in-plane and transverse displacements are higher than the values of the dimensionless frequencies of the considered plate with the coupling effect. The differences (errors) between the calculated dimensionless frequencies of free axisymmetric and non-axisymmetric vibration of the perfect FGM circular plate with and without the coupling effect are significant for the power-law index g ∈ [0, 20], but, for g ∈ [20, ∞], these differences decrease from 2% to 0%. It can be observed from Table 10 that the differences between the calculated dimensionless frequencies are independent of the modes of vibrations and the boundary conditions of the considered circular plate.
Conclusions
This paper presents the influence of two different types of distribution of porosities on the free vibrations of the thin functionally graded circular plate with clamped, simply supported, sliding supported, and free edges. To this aim, the boundary value problem was formulated and a solution was obtained in the exact form. The universal multiparametric characteristic equations were defined using the properties of the multiparametric general solution obtained for the plate with even and uneven distribution of porosities. The effects of the power-law index, the volume fraction index and diverse boundary conditions on the values of the dimensionless frequencies of the free axisymmetric and non-axisymmetric vibrations of the circular plate were comprehensively studied. Additionally, the influences of the power-law index and different boundary conditions on the values of dimensionless frequencies of the FGM circular plate without porosities were also presented.
The presented multiparametric analytical approach can be effectively applying for free vibration of circular and annular plates with other diverse models of an FGM and FGM porous material. The material parameters can be modeled via the exponential or sigmoid functions, as well as Mori-Tanaka functions or other homogenization techniques [39][40][41][42][43][44]. Diverse applied homogenization techniques only have an influence on the forms of the final replaced plate's stiffnesses and directly on the function M i presented in the obtained general solution in the present paper. It will be the goal of future papers.
The obtained multiparametric general solution will allow for studying the influences of diverse additional complicating effects such as stepped thickness, cracks, additional mounted elements expressed by only additional boundary conditions on the dynamic behavior of the porous functionally graded circular and annular plates. The exact frequencies of vibration presented in non-dimensional form can serve as benchmark values for researchers and engineers to validate their analytical and numerical methods applied in design and analysis of porous functionally graded structural elements. | 5,847.8 | 2019-03-01T00:00:00.000 | [
"Engineering"
] |
A universal mechanism for long-range cross-correlations
Cross-correlations are thought to emerge through interaction between particles. Here we present a universal dynamical mechanism capable of generating power-law cross-correlations between non-interacting particles exposed to an external potential. This phenomenon can occur as an ensemble property when the external potential induces intermittent dynamics of Pomeau-Manneville type, providing laminar and stochastic phases of motion in a system with a large number of particles. In this case, the ensemble of particle-trajectories forms a random fractal in time. The underlying statistical self-similarity is the origin of the observed power-law cross-correlations. Furthermore, we have strong indications that a sufficient condition for the emergence of these long-range cross-correlations is the divergence of the mean residence time in the laminar phase of the single particle motion (sporadic dynamics). We argue that the proposed mechanism may be relevant for the occurrence of collective behaviour in critical systems.
Complex systems usually consist of several dynamical components interacting in a non-linear fashion. Crosscorrelations are then used in order to explore the interdependence in the time evolution of these components measured in terms of specific quantities characterizing each component.
In this context, the existence of cross-correlations has been demonstrated in a wide class of dynamical systems ranging from nano-devices [1] to atmospheric geophysics [2], seismology [3], finance [4][5][6][7][8], physiology and genomics [8]. Of special interest is the case of long-range (power-law) cross-correlations (LRCC) which, being scale free, may be associated with the appearance of characteristics of criticality in the dynamics of the considered complex system. Such a behaviour has been observed, among other examples, in price fluctuations of the New York Stock Exchange during crisis [8], physiological timeseries of the Physiology Sleep Heart Health Study (SHHS) database [8], the spatial sequence describing binding probability of DNA-binding proteins to genes at different locations on mouse chromosome 2 [8] and in flocks of birds [9]. All these findings indicate that the presence of power-law cross-correlations is a quite general property of the dynamics of complex systems. Even more, very recently geometry-induced power-law cross-correlations have been also observed in a coarse-grained description of the dynamics of an ensemble of non-interacting particles propagating in a Lorentz channel [10]. This clearly poses the question of the origin and mechanisms of cross-correlations in particle systems.
Up to now the theoretical treatment of crosscorrelations is based on statistical approaches and their microscopic origin is to a large extent unclear. In the following, we identify the dynamical mechanisms leading to LRCC and show specifically that intermittent dynamics, characterized by long intervals of regular evolution (laminar phases) interrupted by short bursts of abrupt evolution (irregular phases), obeyed by each component separately, generates LRCC between the different components, even if they do not interact with each other. It is argued that the emergence of LRCC is of geometrical origin: in a system with a large number of particles the ensemble of their intermittent trajectories forms a ran-26004-p1 dom fractal set in time. The two-point correlation function of this set can be identified with the cross-correlation function between intermittent trajectories of different particles which appear in the set with probability one. In addition we provide strong evidence that a sufficient condition for the emergence of such scale-free LRCC is the divergence of the mean length of the laminar phase in the intermittent dynamics of each component.
The prototype model we will use to demonstrate our arguments is a system of N non-interacting particles each with an one-dimensional phase space determined by the variable x (i) (i = 1, 2, . . . , N). We do not further specify x (i) : in a real system it can be for example the position or the momentum of particle i or any other property characterizing its state (partially or totally). For the time evolution of x (i) of each particle independently we use a version of the well-known Pomeau-Manneville map of the interval [11], which has been employed in the literature for the description of a wide class of phenomena ranging from anomalous diffusion to turbulence and spiking behaviour of neuro-biological networks, given as for i = 1, 2, . . . , N. In eq. (1) u (i) are positive constants, z (i) are characteristic exponents fulfilling z (i) > 1 and r (i) n are random numbers uniformly distributed in (0, 1]. The quantity x (i), * represents the upper border of the phase space region ((0, x (i), * )) within which the evolution of the particle dynamics is laminar. A typical characteristic of the intermittent dynamics is that for any trajectory and for z 1 the x-values in the laminar region are very close to the diagonal x n there is very slow. Notice that in eq. (1) there is no coupling term between phase space variables of different particles since there is no mutual interaction. This simple model, based on the normal form for the description of intermittent dynamics, is very general and captures all the basic dynamical ingredients necessary for the development of cross-correlations as we will see in the following. To avoid unnecessary complexity we further simplify the model assuming: u (i) = u and z (i) = z for all i. Note that the end of the laminar region x * is not strictly defined. One possible choice, which we use in the following, is to fix x * as the pre-image of 1, i.e. as the solution of the equation x * + u(x * ) z = 1. A second choice is to set it equal tox * = 1/(uz) 1/(z−1) being the x-value for which the non-linear term in eq. (1) becomes equal in magnitude with the linear term. These two values (x * andx * ) are close to each other (with an at most 20% relative deviation) for almost all values of z and our results for the crosscorrelations, shown below, do not depend on this choice.
Using eq. (1) we evolve the considered particle system in discrete time. Different particles correspond to different trajectories, i.e. trajectories starting from a different ini-tial condition. Thus we propagate a set of N trajectories forming the corresponding ensemble. We use the notation x (i) n,A in order to indicate with the index A the possibility for using different representations for an ensemble trajectory for the calculation of the cross-correlation function(s). For example we will use either the original trajectories generated by eq. (1) taking real values in (0, 1) (in this case we use the symbol x for the index A) or a binary representation of these trajectories taking the values 0 in the laminar phase and 1 in the irregular phase (in this case we use the symbol s for the index A). The cross-correlation function is then defined as A , respectively. We calculate the cross-correlation function CC x (m) for various values of z using ensembles of 10 4 trajectories with length 10 5 for each case ensuring convergence of our results. For z > 2 we find an algebraic decay of CC x (m) with increasing m, having an exponent which depends on z. This conclusion is established by fitting the numerical results with a power-law model and then performing a Kolmogorov-Smirnov test for the normality of the residuals. The obtained p-values are all higher than 0.5, indicating that the power-law is indeed a good fit. The behaviour of CC x (m) for z < 2 is more complicated. For 3/2 < z < 2 long-range cross-correlations exist, however they do not possess a power-law form. For 1 < z ≤ 3/2 the crosscorrelation function practically vanishes, performing small amplitude random oscillations around zero. It is worth to mention here that a distinction between the properties of intermittent dynamics for z ≤ 3/2, 3/2 < z < 2 and z ≥ 2 has been already discussed in [12] where the term sporadicity is introduced for the description of the z > 2 case.
In order to analyze and understand the different behaviour of the cross-correlation functions for z > 2 and z < 2, we consider the distribution of the laminar phase lengths, or as it is often also named, the distribution of the waiting times in the laminar region. It is well known that this distribution obeys asymptotically ( 1) a powerlaw of the form ρ( ) ∼ − z z−1 [13,14], where is the laminar phase length. For z > 2 the mean laminar length diverges, while for z < 2 it is finite 1 . Thus, the divergence of should be related with the emergence of power-law cross-correlations between the particles. As we will see later utilizing a simple stochastic description allowing also for analytical treatment, this property can be formulated as follows: if is infinite, then the conditional probability that the particle j at an instance n + m is in the laminar region, provided that the particle i was in the laminar region at instance n, is finite and decays algebraically with increasing m.
To facilitate analysis it is useful to develop a symbolic code for the intermittent dynamics in eq. (1). Such a symbolic representation of the dynamics of the Pomeau-Manneville intermittent map capturing several details like the re-injection rate in the laminar region (and therefore the invariant density in the immediate neighbourhood of the marginally unstable fixed point) has been proposed in [12]. Here we are interested mainly to isolate the dynamical properties leading to the emergence of cross-correlations avoiding the influence of other detailed aspects of the intermittent dynamics. Therefore, we will use a much simpler code, mapping x in the laminar region (x ∈ [0, x * ]) to 0 and x out of the laminar region (x ∈ (x * , 1]) to 1. Such a code has been used in [14] to calculate power-spectra of intermittent systems. In practice we use the full dynamics of eq. (1) to generate the ensemble of intermittent trajectories and then we replace the x-values in each time-series by 0 or 1 according to the previously described rule. Subsequently, we calculate the cross-correlation function CC s (m) for different z-values using the binary sequences generated by the symbolic dynamics from the trajectories of the map in eq. (1).
Complementary we introduce a simple stochastic model containing only the information of the laminar length distribution to simulate the emergence of cross-correlations. We assume a process consisting of two phases defined as follows: i) a stochastic variable ξ takes the value 1 in the irregular phase and the value 0 in the laminar phase and ii) the length of the irregular phase is always 1, while the laminar length probability density is a power-law with exponent −z/(z −1), z being the exponent in the intermittent map of eq. (1). Then we generate an ensemble of realizations of this process and calculate the cross-correlation function CC r (m) for this ensemble. Here the index r indicates that the ensemble of trajectories used to calculate this cross-correlation function is generated by the underlying random process. Despite the simple form of both, the intermittent dynamics in binary representation and the associated stochastic process, large scale computational efforts (10 5 trajectories have been propagated for 10 6 iterations) are needed to achieve convergence of the longtime behaviour of the cross-correlation function. In fig. 1 we show the results obtained for CC A (m) with A = s, r for z = 2.5, 3, 4, 5. The coloured triangles correspond to A = s, while the red lines to A = r. We observe a very good agreement between the two results for each z value, providing a strong indication that the key quantity determining the scale properties of the cross-correlation function is indeed the laminar length distribution, which is the only quantity shared by the two descriptions.
A geometrical interpretation of the emergence of powerlaw cross-correlations can be obtained by showing that the ensemble of trajectories generated either by the intermittent model of eq. (1) or by the simplified stochastic model introduced above, are realizations of a random fractal set. Thus they can be produced by an automaton [15] and they can also be mapped to a scale-free network using the visibility [16], or the recently generalized cross-visibility algorithm [17]. Adopting this point of view, it is natural to expect that two intermittent trajectories (corresponding to two different particles) being two different subsets of a random fractal set or of a scale-free network are power-law cross-correlated. In fact these long-range cross-correlations are dictated by the power-law form of the two-point correlation function of the corresponding set (random fractal or scale-free network).
To demonstrate the fractal properties of the ensemble of trajectories of the simplified stochastic model, we employ techniques used for the calculation of the lower entropy dimension (LED) [18] for random fractal sets. LED corresponds to the "mass dimension" of usual fractal sets (like for example the Cantor set). Thus, a time-series of length L represented by a sequence of binary symbols (0 for laminar phase and 1 for irregular phase in our case) is interpreted as a set of unoccupied (0) and occupied (1) non-overlapping cells of the embedding set 2 . In a large ensemble of trajectories all of length L and generated using a fixed value of z, one can calculate the mean number of "1"s N (1, L) (averaging over the ensemble) determining the mean number of occupied cells necessary to cover entirely the so defined random fractal set. Notice that the concept of the "random fractal set" establishes only at the level of the ensemble and it is not defined for individual trajectories. N (1, L) scales with the length L of the embedding set as where s is a positive constant and d F is the associated fractal LED. To verify that the ensemble of intermittent trajectories defines a random fractal set, we calculate the N (1, L) . We use the same statistics (10 6 trajectories) to construct the trajectory ensemble for each L-value. In fig. 2(a) we show the result for z = 3 on double logarithmic scale. We observe a linear behaviour corresponding to a perfect power-law with d F ≈ 0.5. This suggests the validity of a general scaling relation of the form were the fractal LED may depend on z and the index RF indicates the associated random fractal set. We have performed the fractal LED calculation for several values of z in the range z > 3/2. Our results are summarized in fig. 2(b), where we show the dependence of the fractal LED d F (z) on the exponent of the intermittent dynamics z. Notice that the power-laws N RF (1, L) ∼ L dF (z) for different z's are all of the same quality as measured by the coefficient of determination and the corresponding chi-square per degree of freedom (χ 2 /dof) of the fit. Thus, we conclude that the ensemble of the intermittent trajectories is equivalent with respect to its complexity with a random fractal set with variable dimension d F (z) depending on the characteristic intermittency exponent z. Clearly, the observed fractality refers to the time-dependence, i.e. the considered sets (ensembles of trajectories) are fractal in time. This geometrical property makes transparent the existence of cross-correlations among the members of the ensemble. The fractal LED becomes equal to the embedding dimension for z = 3/2 signalling the absence of long-range cross-correlations in this case. As already mentioned, in the region 3/2 < z < 2 long-range cross-correlations still exist, however there is no clear signal of a power-law form 3 . This issue requires more extensive studies and it is left for a future detailed study.
Going one step further in our analysis, one can develop a method to find an analytical estimation of the cross-correlation function CC r (m) based on the above introduced stochastic model. To achieve this, let us first consider two binary sequences {x n,r , . . .} generated by the stochastic model. To simplify the notation we will omit the index r of the trajectory values for the following steps. The function CC r (m) should be proportional to the joint probability P ij (x (i) k = 1; x (j) k+m = 1) that the random variable x (i) has the value 1 at time step k and the random variable x (j) has the value 1 at time step k + m, averaged over the time Obviously it holds since x (i) and x (j) are statistically independent. To calculate P (x (i) k = 1) one can use the method introduced in [14] writing (7) where P 1|1 (n|x (i) k−n = 1) is the conditional probability to have a laminar phase of length n directly after the instant k − n if x (i) has the value 1 at the time instant k − n. The appearance of a laminar phase with duration n is independent of the value of x (i) at the instant k −n. Thus, we find where ρ(n) is the laminar length distribution normalized to one. Inserting eq. (6) into eq. (5) we obtain which can be solved recursively using as initial condition P (x (i) 1 = 1) = p 0 with p 0 ∈ (0, 1). A similar equation is obtained also for P (x (j) k+m = 1) replacing simply k with k + m. Having solved eq. (9) one can calculate the lefthand site of eqs. (6), (7) and insert the obtained results in eq. (5) to get an analytical approximation for CC r (m) containing three sums. The validity of the introduced analytical scheme is tested in fig. 3, where we show the symbolic dynamics result CC s (m) together with the analytical estimation for CC r (m) using z = 3.
We observe a very good agreement between the analytical result and the numerical simulations for CC s (m). Notice that numerically calculated CC r (m) is not shown in this plot. However, as illustrated in fig. 1 the numerical results for CC r (m) and CC s (m) are very close to each other for any considered z and therefore, the analytical estimation of CC r (m) can be also used as an analytical estimation of the cross-correlation CC s (m). The analytical treatment leads us to the conclusion that it is the longrange character of the correlation between P (x (i) k = 1) and P (x (j) k = 1) existing for any pair of intermittent trajectories which generates the observed cross-correlations. Note that this property has been discussed in [19] in a different context.
With our analysis we have demonstrated a mechanism to establish power-law cross-correlations between particles which do not interact with each other. This phenomenon is induced by the strong intermittent dynamics performed by each of the particles independently. The resulting ensemble of trajectories for all particles, despite the absence of a coupling between trajectories of different particles, forms -in a binary representation-a fractal set in time and the underlying self-similarity leads to the establishment of algebraically decaying cross-correlations. Strong intermittency (sporadicity) discussed here is a result of the interaction of a particle with a suitable external potential (field) 4 . The appearance of long-range crosscorrelations deems sporadic dynamics a plausible mechanism for the collective behaviour emerging in a N -particle system. Furthermore, since such a collective behaviour is accompanied by scale-free inter-particle correlations, it could be related to the emergence of critical behaviour in the considered system. In fact, a connection of intermittent dynamics with criticality has already been established in [20] using the example of the 3D Ising model. There it has been shown that the order parameter fluctuations at the critical point can be efficiently described by an intermittent map of Pomeau-Manneville type -similar to that of eq. (1)-with additive noise. The exponent z in this intermittent map is related to the isothermal critical exponent δ associated with the second-order transition. This property sets a bound z ≥ 2 necessary for the occurrence of critical behaviour. It is remarkable that this bound coincides with the bound obtained by our present analysis in order to have a divergent mean laminar length. An astonishing feature of our results is that the power-law cross-correlations emerge even without interaction among the particles. In the context of critical phenomena such a property is welcome, since it could explain universality aspects. Indeed the microscopic interactions between the elementary degrees of freedom of a critical system do not play any role for the determination of the critical exponents and the associated scaling laws describing the phenomenology of an extended system at the critical point.
In the framework of our approach the obtained correlations are determined by the time evolution of the trajectories of two different particles. To enable a closer relation to equilibrium critical phenomena one should extend these ideas also to the case of a field depending both on time and on space. Such an extension requires the use of matrix equations for the field evolution replacing the variable x (i) n by a scalar field φ(i, n), where i is a spatial variable, while n is the time variable. At a first glance one could argue that for the calculation of the spatial cross-correlations one might exchange the role of spatial and temporal variables in the dynamics, use eq. (1) to describe changes of the field φ in space and average over the time variable. This would lead to power-law cross-correlations between the field values at different locations, which is typical for a critical system. However, a consistent treatment of this case requires more elaborate and extensive studies left for future investigations. * * * This work was made possible by the facilities of the Shared Hierarchical Academic Research Computing Network (SHARCNET: www.sharcnet.ca) and Compute/Calcul Canada. The authors thank C. Petri and B. Liebchen for fruitful discussions. The performed research has been co-financed by the European Union (European Social Fund -ESF) and Greek national funds through the Operational Program "Education and Lifelong Learning" of the National Strategic Reference Framework (NSRF) -Research Funding Program: Heracleitus II. Investing in knowledge society through the European Social Fund. The authors also thank the IKY and DAAD for financial support in the framework of an exchange program between Greece and Germany (IKYDA 2010) and the UK EPSRC for funding under grant EP/E501311/1. | 5,216.4 | 2014-01-01T00:00:00.000 | [
"Physics"
] |
Contribution of a Novel B3GLCT Variant to Peters Plus Syndrome Discovered by a Combination of Next-Generation Sequencing and Automated Text Mining
Anterior segment dysgenesis (ASD) encompasses a spectrum of ocular disorders affecting the structures of the anterior eye chamber. Mutations in several genes, involved in eye development, are implicated in this disorder. ASD is often accompanied by diverse multisystemic symptoms and another genetic cause, such as variants in genes encoding collagen type IV. Thus, a wide spectrum of phenotypes and underlying genetic diversity make fast and proper diagnosis challenging. Here, we used AMELIE, an automatic text mining tool that enriches data with the most up-to-date information from literature, and wANNOVAR, which is based on well-documented databases and incorporates variant filtering strategy to identify genetic variants responsible for severely-manifested ASD in a newborn child. This strategy, applied to trio sequencing data in compliance with ACMG 2015 guidelines, helped us find two compound heterozygous variants of the B3GLCT gene, of which c.660+1G>A (rs80338851) was previously associated with the phenotype of Peters plus syndrome (PPS), while the second, NM_194318.3:c.755delC (p.T252fs), in exon 9 of the same gene was noted for the first time. PPS, a very rare subtype of ASD, is a glycosylation disorder, where the dysfunctional B3GLCT gene product, O-fucose-specific β-1,3-glucosyltransferase, is ineffective in providing a noncanonical quality control system for proper protein folding in cells. Our study expands the mutation spectrum of the B3GLCT gene related to PPS. We suggest that the implementation of automatic text mining tools in combination with careful variant filtering could help translate sequencing results into diagnosis, thus, considerably accelerating the diagnostic process and, thereby, improving patient management.
Introduction
In recent years, next-generation sequencing technology (NGS) and access to databases containing sequencing data from patients with eye disorders have accelerated research in ophthalmic molecular genetics. However, still the most difficult and time-consuming part of the work is data analysis and clinical interpretation of reported variants, especially in cases with blended phenotype, when the decision on major clinical feature, crucial for diagnostic process is difficult to make. Additionally, due to massive sequencing data generation and relatively restricted curation, available databases, within reliable information contain false variant pathogenicity attributions. Thus, rigorous variant interpretation, as recommended by the American College of Medical Genetics and Genomics (ACMG) [1], followed by extensive review of the existing literature could improve diagnostic accuracy.
Anterior segment dysgenesis (ASD) encompasses a spectrum of ocular disorders affecting the structures of the anterior eye chamber, including the iris, lens, cornea, and the anterior chamber angle. Clinical manifestations include corneal opacities, cataracts, and iridocorneal adhesions. ASD poses a high risk of the patient developing early onset and aggressive glaucoma resulting from dysregulation of aqueous humor flow, high intraocular pressure (IOP), and death of retinal ganglion cells [2]. Disorders related to such a phenotype include Axenfeld Rieger syndrome (ARS), isolated Peters Anomaly, Peters plus syndrome (PPS), primary congenital glaucoma, congenital hereditary endothelial dystrophy, and iridogoniodysgenesis anomaly [3], all of which are registered as rare by Orphanet. The genetic background of ASD is only partially known and may be related to the disruption of many genes, e.g., CYP1B1, BMP4, FOXC1, PAX6, FOXE3, NDP, SLC4A11, HCCS, PITX2, PITX3, LMX1B, and PXDN [4][5][6][7][8], several of which can modulate tyrosinase activity, postulated as an important modulator of iridocorneal angle malformations [6]. Importantly, diseases with multisystemic manifestations can also present with ASD, e.g., syndromes resulting from mutations in genes encoding collagen type IV: COL4A1 and COL4A2 [2,9,10], often manifest with neurological disorders, what may hinder finding the relevant genetic cause of the disease. Diseases from the ASD spectrum may be dominant (ARS, aniridia) or recessive (primary congenital glaucoma, PPS), and due to the diverse genetic background, several phenotypes may present with either mode of inheritance [11].
Due to the heterogeneous clinical manifestation of the syndrome, reflecting the complex genetic background, classification, and proper diagnosis of ASD is still challenging. Here, we report a strategy for NGS data analysis in order to find the genetic cause of a rare combination of symptoms in a pediatric patient with a set of ocular disorders, cleft lip and high-arched palate, using data from DNA sequencing of coding regions of 4813 genes in both parents and the affected child.
We demonstrate a variant filtering strategy combined with information on the observed phenotype coded in Human Phenotype Ontology (HPO) terms and information enrichment by literature mining with the AMELIE (Automatic MEndelian LIterature Evaluation) [12] tool and by genomic variant annotation and prioritization with wANNOVAR [13][14][15] to help draw biological insights from sequencing data and increase the chances of faster and accurate diagnosis. Literature mining tool prioritizes variants according to the current knowledge from published data and enables critical verification of results through linking ranked genes important for the phenotype with relevant publication, what accelerates the process and may reduce false positive findings. In our opinion, this analytical strategy should be applicable to other clinical scenarios with complex genetic background and heterogeneous clinical manifestations also.
Obstetric Information
The 30-year-old pregnant patient was referred for the first-trimester testing due to three previous miscarriages and advanced maternal age. The parents had one healthy child. Screening involved testing for trisomies 21, 18, and 13, ultrasonography, measurement of maternal serum free β-human chorionic gonadotropin (β-hCG), and pregnancy-associated plasma protein-A (PAPP-A) levels, according to the Fetal Medicine Foundation recommendations. Both biochemical parameters were measured on the DELFIA Xpress analyzer (Perkin Elmer). The mother denied medication history or infection during pregnancy. Moreover, no familial predisposition to any disease was identified. Routine fetal scanning at 13 weeks of gestation showed a cleft lip and palate and increased nuchal translucency (Figure 1). Nuchal translucency was assessed as 3 mm and was estimated as being over the 95th percentile for the given crown-rump length (CRL = 67.7 mm). No other anomalies were detected for the fetus. Having detected a fetal defect and correlated it with an indirect risk group, an invasive diagnostic procedure was suggested to determine the fetal karyotype. Amniotic fluid was aspirated during amniocentesis on week 15, day 4 of pregnancy. Following amniocyte culture, a karyotype of a normal male (46, XY) was obtained at a resolution of 550 bands. The couple was then referred for genetic counseling for further etiological investigation.
The second ultrasonographic examination performed at the 20th week of gestation, as per the recommendations of the Polish Society of Gynecologists and Obstetricians, confirmed the presence of bilateral cleft lip and palate.
The patient (mother) was hospitalized with intrauterine growth restriction in the 28th week of gestation. In view of abnormal flow in the umbilical artery and the middle cerebral artery, as well as deteriorating fetal well-being, it was decided to terminate the pregnancy and perform a cesarean section.
Initial Postnatal Examination
Physical examination of the fetus revealed bilateral cleft alveolar processes, cleft lip, and high-arched palate. Rhizomelic proximal extremities and brachydactyly were noted. Microphthalmia and bilateral corneal opacities were observed during ophthalmic examination.
Ophthalmic Evaluation
Visual acuity in both eyes was determined as dubious light perception. Examination under general anesthesia resulted in the following findings ( Figure 2). Right eye (RE): Cornea dimensions 8 x 8.5 mm; IOP 10 mmHg; central corneal leucoma, subsequent to ulceration with ingrowth of blood vessels. Translucent peripheral cornea revealed flattened anterior eye chamber, medium-width iris with persistent pupillary membrane, and lens adhesion to the posterior corneal surface. A visible fragment of the lens was translucent with pink reflection from the eye fundus. Further details were hard to evaluate.
Left eye (LE): Cornea dimensions 8 × 8.5 mm; IOP 12 mmHg, partial central leucoma. The peripheral part of the cornea was translucent with a persistent pupillary membrane. The pupillary dilatory response was normal and some peripheral blood vessels of the retina could be visualized.
The anatomical axial length of the eyeballs in ultrasonographic (USG) evaluation was 18 mm (RE) and 17.5 mm (LE) in projection AB (Ultrascan B, Alcon, USA). Diagnostic USG of the posterior part of the eyes revealed no pathology in RE and a small floater above an optic nerve disk in LE.
The patient remained under ophthalmic supervision. At the age of four months, central thinning of the corneal leucoma of the RE, threatening perforation and secondary glaucoma were determined in the RE. Corneal thickness in the RE was 826 µm in the periphery but only 158-380 µm in the center while being 855 µm in the entire LE. IOP was 37 mmHg in the right and 17 mmHg in the LE. Amniotic membrane transplantation to protect the cornea and cyclocryotherapy to lower IOP were performed in the RE. Topical IOP lowering medications were started and the patient was followed for local status and IOP. In spite of ongoing pharmacological treatment, another round of amniotic membrane transplantation and cyclocryotherapy had to be performed at the age of six months. Stabilization of the IOP was achieved at the level of 26 mmHg in the right and 18 mmHg in the LE. The patient remains under ophthalmic supervision with permanent pharmacological medications to prevent glaucoma of the RE.
High-Throughput Sequencing Results
On average, 9.8 million reads were sequenced per sample, and approximately 8.9 million reads (91.1%) were mapped on the reference using our custom pipeline. A high depth of coverage (at least 20×) was maintained for 86.9% of the targeted region and approximately 95% of the target region was covered at least 10×. In total, we detected 32,222 variants: 26,230 single nucleotide polymorphisms (SNPs), 2536 insertions, and 3456 deletions.
Following the GEMINI trio analysis, we identified 265 variants in total, of which 36 were de novo variants, 200 autosomal recessive variants, and 16 pairs of compound heterozygotes (32 variants, of which three were de novo) in the proband. In terms of localization, 38 variants were annotated as exonic and 227 as intronic changes. All discovered variants were concatenated into one VCF file for further analysis (Supplement 1). Using in silico prediction algorithms such as SIFT, PROVEAN, FATHMM-XF, and MutationTaster, we were able to select several candidates with possible damaging impact (Table 1).
For genomic variant prioritization, we uploaded the generated VCF file (Supplement 1) into AMELIE and wANNOVAR with the following HPO identifiers: Corneal opacity (HP:0007957), abnormality of the anterior chamber (HP:0000593), anterior chamber synechiae (HP:0007833), and cleft lip/palate (HP:0000202). Based on AMELIE and wANNOVAR results (Table 2), we discovered that variants in both alleles of B3GLCT (alias B3GALTL) that were segregated from an unaffected father and unaffected mother, were strong candidates to be potentially responsible for the observed pathogenicity, which, most likely, was PPS (Supplement 2).
The second variant, inherited from the father, was a deletion, NM_194318.3:c.755delC (p.T252fs), which has not been previously reported (according to LOVD, and is not present in the gnomAD database). In Varsome, the discovered variant was classified according to ACMG2015 as likely pathogenic, with identified criteria to be PVS1 and PM2. Moreover, this mutation had a MutationTaster score of 1 (prediction: Disease causing) and changed the reading frame downstream of the amino acid (AA) at position 252. As a result, the mutation terminated translation at AA position 264, whereas a wild-type protein is 498 AA long. Both mutations were manually reviewed in an integrative genomic viewer (IGV).
Sanger Sequencing Results
For final confirmation of the potentially causative variant in B3GLCT, we performed Sanger sequencing of exons 8 and 9 in the patient and both parents. Both mutations were present in the patient and the inheritance pattern was confirmed. Sanger sequencing electropherograms are presented in Figure 3.
Discussion
As reported, 6%-8% of children are born with developmental disorders, with many of them caused by genetic changes in a single gene [16]. Despite substantial progress in the NGS data analysis field, finding the causal variant is still a time-consuming process, with the diagnostic yield falling between 25% and 30% for exome sequencing data [17][18][19], especially due to difficulties in the interpretation of the functions of rare variants. In the current study, we used AMELIE and wANNOVAR on NGS data to prioritize genetic variants based on the putative contribution to the clinical phenotype of a pediatric patient suffering from anterior chamber disorder involving corneal opacity, persistent pupillary membrane, thinning of the posterior cornea and lenticulo-corneal adhesions with secondary glaucoma and accompanying cleft lip and palate.
Independent of the sequencing scale, NGS creates considerable data burden Thus, careful variant filtering and prioritization, according to the information available from sequencing databases, in compliance with ACMG standards are required. Additionally, differences in NGS data analysis between laboratories could generate discrepancies in the diagnosis, thus filtering schemes should be clearly reported. Moreover, clinical interpretation of sequencing data requires matching information on patients' genotypes and phenotypes and is usually complemented by the integration of information from databases such as OMIM or ClinVar, which are manually constructed and curated. The search for causal variants should be supported by published evidence if it exists, and this requires manual querying of relevant literature databases. This step is the most time-consuming in data processing and has a major impact on timely diagnosis and minimizing false negative and false positive findings. Web-based literature search engines are employed to assist manual curation of newly published information. AMELIE is a method for ranking candidate causal genes related to the phenotype, extracted from primary literature using supervised machine learning methods. The inclusion of HPO-encoded phenotype information facilitates communication with clinical specialists. In our opinion, it combines the convenience of other HPO-based variant ranking tools such as Exomiser [20] with enrichment of the latest literatures. wANNOVAR is a web server which provides functional annotation of genetic variants, reporting their conservation levels, calculating their predicted functional importance scores, retrieving allele frequencies in public databases, and implementing a protocol to identify a subset of potentially deleterious variants/genes.
The combination of these two strategies applied to trio sequencing data led us to the identification of compound heterozygous variants in the B3GLCT gene, which has been previously associated with the PPS phenotype, observed in our patient. Using only a VCF file, annotated by GEMINI with several pathogenicity prediction tools (SIFT, Provean, FATHMM-XF, MutationTaster), resulted in conflicting predictions for variants in some cases. Although the list of possible causative variants included two variants of B3GLCT as compound heterozygous variants, it was not fully informative as the annotation missed the prediction for the deletion NM_194318:c.755delC. Both AMELIE and wANNOVAR ranked variants in genes related to HPO-encoded phenotypes. As a result, B3GLCT got the highest score with both tools. Moreover, in contrast to lower ranked genes, the disease annotated to B3GLCT-PPS-matched our patient's phenotype perfectly.
PPS, a subtype of ASD, inherited in a recessive manner, is a very rare disease with unknown incidence or prevalence. As reported in 2016 by Jaak Jaeken et al., the worldwide number of patients diagnosed with PPS due to B3GLCT mutation was 49 [21]. Common features of PPS involve anterior chamber dysgenesis symptoms such as central corneal clouding, cataract, thinning of the posterior surface of the cornea, iris hypoplasia, and iridocorneal adhesions. In addition to ocular defects, PPS is characterized by short stature, dysmorphic facial features, and developmental delay [21]. In terms of classification, PPS belongs to genetically heterogeneous congenital disorders of glycosylation, grouping rare congenital, neurometabolic, and malformation syndromes [22,23]. It is caused by changes in B3GLCT, encoding an O-fucose-specific β-1,3-glucosyltransferase (beta-1,3-glucosyltransferase), responsible for attachment of glucose to O-linked fucose (O-fucose), which is previously added by protein O-fucosyltransferase 2 (POFUT2) to thrombospondin type 1 repeats (TSRs), present in many proteins [24]. Localized at the endoplasmic reticulum (ER) [25], beta-1,3-glucosyltransferase modifies only properly folded TSRs and promotes secretion of ER proteins stabilized by glycosylation. Together with POFUT2, B3GLCT provides a noncanonical quality control system of proper protein folding in cells [26]. A GWAS study on more than 17,100 patients, with advanced age-related macular degeneration (AMD) and over 60,000 controls, showed a significant association between AMD and loci in B3GLCT and ADAMTS9 [27].
On that date, there were 22 unique variants of the B3GLCT gene reported in 10 individuals in the LOVD database. Variant c.660+1G>A (rs80338851), located at the donor splice site (5 ss) of exon 8 observed in our patient, is one of the most common among variants identified in PPS patients [28]. It has a frequency of 0.01% in the gnomAD database for European (non-Finnish) population and accounts for 69% of all reported pathogenic alleles [21]. The results of Oberstein et al. showed that c.660+1G>A alters the acceptor site of exon 8, leading to the skipping of this exon and the introduction of a premature termination codon (PTC) at position +10 within exon 9. According to the position rule for PTC, c.660+1G>A results in a nonsense mRNA that elicits nonsense-mediated mRNA decay (NMD) [29]. NMD is the surveillance pathway protecting cells from the action of truncated proteins which could be translated from transcripts bearing a PTC [30]. In mammalian cells, it is the process of degradation of the mRNA, depending on the interaction between translation termination, complex multiprotein assemblies, which takes place when PTC is present at least 50 nucleotides upstream of the last exon junction. mRNA degradation due to NMD is involved in many neurological and developmental disorders [31].
The second of the detected variants, c.755delC (p.Thr252fs), was not present in any publicly available database and is the first one reported in exon 9. Pathogenicity prediction algorithms assigned this variant as disrupting. Similar to c.660+1G>A, c.755delC introduces a PTC within exon 9, 26 nucleotides from the splice site, which also should result in NMD.
Our study demonstrates the important role and possible diagnostic utility of NGS combined with AMELIE and wANNOVAR.
Such an approach, on the one hand, brings data on genes and variants with well-described roles in disease development based on curated, authoritative databases such as OMIM (wANNOVAR) and, on the other hand, updates the analysis with current information from published manuscripts. Such a procedure helps reduce the number of false positive findings not related to a phenotype which are over-abundant in the VCF file annotated with pathogenicity tools.
Clinical Assessment
The patient was recruited from the Clinic of Neonatology at the Jagiellonian University Medical College, Krakow, Poland and the Ophthalmology Clinic and Department of Ophthalmology, Medical University of Silesia in Katowice, Poland. Written informed consent was obtained from the study participants and informed parental consent was obtained on behalf of the child. Developmental and dysmorphology assessments were conducted by a clinical geneticist. This research adhered to the tenets of the Declaration of Helsinki and was performed upon approval of the protocol by the Jagiellonian University Ethics Committee No. 122.6120.12.2015 (29 January 2015).
Ophthalmic Evaluation
The patient was admitted to the outpatient clinic of the Department of Ophthalmology at the Silesian Medical University in Katowice at the age of seven weeks and then, subsequently, at four and six months.
DNA Extraction, Library Preparation, and NGS
DNA was extracted from the peripheral blood of the affected child and both his healthy parents with the Maxwell 16 Blood DNA purification kit (Promega, Madison, WI, USA) on the Maxwell 16 device (Promega). Sequencing libraries were prepared with the TruSight one sequencing panel kit (Illumina, San Diego, CA, USA) according to the manufacturer's protocol. In brief, 50 ng of DNA was fragmented and adaptor-tagged in an enzymatic reaction. Genomic libraries were enriched for the coding regions of the 4813 genes by two cycles of hybridization with biotinylated probes, followed by capture on streptavidin beads; 12.5 pM libraries were sequenced on the MiSeq sequencer (Illumina) using v3 chemistry reagents (2 × 150 bp reads).
Data Analysis
Raw reads were processed with the Illumina software, generating base calls and corresponding base-call quality scores. These data were then processed through our custom pipeline that uses open-source programs ( Figure 4). Briefly, generated fastq files were fed to the FastQC software (version 0.11.5) to provide quality control checks on sequenced data (Andrews et al., 2010). Reads were aligned to the human reference genome GRCh37 (hg19) using the BWA-MEM algorithm from the Burrows Wheeler Aligner (BWA, version 0.7.5) [32]. Unmapped, low mapping quality score and duplicated reads were filtered out with SAMtools (version 0.1.19) [33]. GATK (version 3.7) base quality recalibration was applied across all samples simultaneously using variant quality score recalibration (VQSR) according to the GATK best practices recommendations [34,35]. Filtered variants were concatenated into one record (VCF file) and then, using GEnome MINIng (GEMINI, version 0.18.3) [36], the discovered variants were annotated with SnpEff (version 4.2) [37], and loaded into the SQLite database for parents-child trio analysis. Using the function for trio analysis implemented in the GEMINI software, we identified three groups of variants in the proband patient: (1) De novo, (2) autosomal recessive, and (3) compound heterozygotes. All variants were concatenated into one VCF file and used to predict in silico the effects of amino acid substitutions and indels using SIFT [38], Provean [39], FATHMM_XF [40], and MutationTaster [41]. Simultaneously, the generated VCF file was uploaded into two web tools, Automatic Mendelian Literature Evaluation (AMELIE) [12] and wANNOVAR [15], which connect the ANNOVAR [13] annotation pipeline with the Phenolyzer [42] gene prioritization pipeline. Both softwares were used with standard settings and patient phenotype identifiers as HPO ID. Finally, predicted causative variants were classified according to ACMG2015 guidelines, to verify potential pathogenicity using Varsome [43] online tools.
Sanger Sequencing
For final confirmation of the most plausible causal variants, exons 8 and 9 of B3GLCT (RefSeq NM_194318) were analyzed by Sanger sequencing on the ABI3500 sequencer (Applied Biosystems, ThermoFisher Scientific, Waltham, MA, USA). The obtained sequences were aligned to the reference NC_000013 with the SeqScape software (LifeTechnologies, ThermoFisher Scientific). After identification of an exon-shortening event, the overlapping sequence resulting from the heterozygous deletion was analyzed manually using SnapGene (GSL Biotech, Chicago, IL, USA). | 5,033 | 2019-11-28T00:00:00.000 | [
"Biology"
] |
Big Green at WNUT 2020 Shared Task-1: Relation Extraction as Contextualized Sequence Classification
Relation and event extraction is an important task in natural language processing. We introduce a system which uses contextualized knowledge graph completion to classify relations and events between known entities in a noisy text environment. We report results which show that our system is able to effectively extract relations and events from a dataset of wet lab protocols.
Introduction
Wet lab protocols specify the steps and ingredients required to synthesize chemical and biological products.The majority of wet lab protocols are formatted as natural language, designed for human lab workers to interpret and carry out.Protocols are formatted differently depending on lab norms and the author writing them, and may include spelling mistakes, nonstandard abbreviations, colloquial phrasing, and assumptions that may not be obvious to readers from outside of the author's lab or field.
Automated extraction of events, relations, and entities from this noisy language data enables standardized tracking of lab protocols, and is an important step forward for the automated reproduction of scientific results.We examine the problem of automatically identifying and classifying events and relations between entities as part of Shared Task 1 at WNUT 2020 (Tabassum et al., 2020).This shared task works with the Wet Lab Protocol Corpus (WLPC) introduced by Kulkarni et al. (2018).The WLPC dataset consists of wet lab protocols drawn from an open-source database and annotated by a group of human annotators which included subject matter experts.
Prior Work
Past approaches to relation and event extraction from wet lab data have included systems based on propagating information across graphs.Jiang et al. (2019) introduce an end-to-end system, called SpanRel, for identifying and labeling text spans and the relations between them using any text embedding model.The DyGIE and DyGIE++ systems, meanwhile, learn to propagate useful information across graphs of coreferences, relations, and events, allowing long-distance contextual information to support relation and event extraction tasks based on sliding window BERT embeddings of the text (Luan et al., 2019;Wadden et al., 2019).
Knowledge Graphs
Knowledge graphs are graph representations of the relations between entities (Schneider, 1973).In a typical knowledge graph construction, graph nodes represent entities, while edges of different types represent relations between entities.As an example, a knowledge graph may contain nodes for "United Kingdom" and "United Nations" with an edge of type "Member-Of" between "United Kingdom" and "United Nations." Because knowledge graphs are generated from imperfect information, they represent a subset of information about their component nodes and thus suffer from incompleteness.Incompleteness means that edges representing relations which exist between nodes in reality are not present in the graph (for example, if the knowledge graph contains the "United Kingdom" and "United Nations" nodes but does not contain the "Member-Of" relation between them).This property of knowledge graphs has given rise to efforts to identify missing relations between entities, a task referred to as knowledge graph completion (Lin et al., 2017).
There are obvious parallels between knowledge graph completion and relation extraction from text given prelabeled entities; namely that both tasks require identifying a relation (if one exists) between arXiv:2012.04538v1[cs.CL] 7 Dec 2020 a given pair of entities.We therefore develop a model which represents the task of extracting relations from wet lab protocols as a knowledge graph completion problem.
Representing relation extraction as sequence classification
Relation classification requires the input of two target entities to predict a relation between them.Therefore, to formulate relation extraction as relation classification, we must identify target entity pairs.A basic approach might be to simply sample each possible pair of entities in both possible orders (bidirectional sampling is required because relations are order-dependent).This sampling strategy, however, ignores structural information about the data.Protocols are separated into lines with one line for each step, and relations and events typically occur between entities which are close together.
This naïve approach also introduces computational problems.The number of possible entity pairs for n entities is n 2 − n, which produces a high number of entity pairs as the number of entities grows.We find that real relations represent just 0.37% of the possible relations in the WLPC training data, indicating that a system which enumerates all possible entity pairs would have to be exceptionally accurate to be effective.
The structural features of the data enable us to reduce the scope of our evaluation by focusing only on entities which are close to each other.We initially evaluated based on only considering entity pairs in the same step.By analyzing the training data, we find that 99% of true relations are between entities which contain less than 14 tokens between them.We thus restrict our analysis to entity pairs which are less than 14 tokens apart.Using this method (based on training data statistics) we are able to maintain 99% of true relations while reducing the total number of relations evaluated by 41% over a sentence based approach and improving our precision substantially.
Contextualization
One distinction between knowledge graph completion and relation extraction is important to consider.In a knowledge graph, nodes are unique and any given relation between two nodes always exists.In relation extraction from text, nodes are not unique.Consider the following protocol instruction: "Separate 5mL of the solution and add 5mL water to replace the removed volume." In this protocol, the "5mL" entity of type measurement which refers to the solution is distinct from the "5mL" entity of type measurement which refers to the water.The action "Separate" acts on the former, but not on the latter, while the action "combine" acts on the latter, but not on the former.We handle this discrepancy by adding a local context sequence, identifying the targeted entities in-text.
We generate this context sequence by taking the tokens corresponding to the n sentences surrounding the target entity tokens as contextual information.We find empirically that n = 1 provides the best performance, and that higher values of n tend to cause overfitting.
To resolve the issue of ambiguous entity reference in a sequence where multiple entities share the same text (as above), we identify entities in-context.To do this, we add entity label tokens ([EntA] and [EntB]) surrounding the referenced entities in the context, tagging them for easy identification.
Relation Classification
Once we have extracted a set of viable entity pair candidates, given two labeled candidate entities E a and E b , and surrounding context C we attempt to achieve two tasks: identifying whether or not a relationship is present between the two entities, and if there is, to classify the relationship between the entities using a knowledge graph completion approach.
Prior work has introduced the idea of using language models to formulate relation prediction between entities in a knowledge graph as a sequence classification task (Yao et al., 2019).Pre-trained language models such as ELMo and BERT have seen widespread success when fine-tuned for use in sequence classification tasks (Devlin et al., 2019;Vaswani et al., 2017;Peters et al., 2018).
We finetune a BERT model provided by the HuggingFace library to perform relation prediction based on multi-sequence classification (Wolf et al., 2019).We finetune for 15 epochs, using an initial learning rate of 5 × 10 −5 and an input size of 100 tokens.Hyperparameters were determined via grid search over the development set.Type ID Mask represents the token type IDs passed to the BERT model.These binary type IDs indicate different sequence sources for multisequence problems such as this one.For example, when performing a classification task with two sequences, tokens from the first sequence would have a type ID of 0, and tokens from the second sequence would have a type ID of 1.The type IDs improve learning stability for BERT, ensuring that the model is able to distinguish between different sources of data.
Typically, three-sequence classification tasks in BERT are handled by masking in a 0-1-0 style (ie, the type ID mask for sequence 1 is 0, the type ID mask for sequence 2 is 1, and the type ID mask for sequence 3 is 0 again).We find that the distinct information types of entity information and contextual information mean that labeling a sequence of Entity-Entity-Context sequences as 0-1-0 is ineffective, as BERT is not able to effectively learn the difference between context and entity information.We instead use the type-mask format 0-0-1, labeling labeled entity tokens 0 and context tokens 1.This method improves training stability and increases model performance substantially.We suggest that differences in sequence information type is the most important metric for determining type ID mask.
Results & Discussion
Our results, shown in Table 1, show that our system is able to effectively identify many types of relations even given this noisy data format.More specifically, this approach is able to identify relations and events with extremely high recall (as high as .95for Measure and Measure-Type-Link relations).
Our approach is relatively weak in precision.This is likely due to our formulation of the task as an evaluation of potential entity pairs.We find that our system classifies relations with an accuracy of 93% on the development set, but because there are many more possible pairings between entities in a given protocol than there are actual pairings, even a system with high accuracy can incorrectly predict nonexistent relations.
We reduce the number of possible entity pairings generated by applying a distance heuristic discussed in Section 4. We found that tuning the amount of entity pair candidates evaluated impacted results significantly (for example, our development set F1-score rose almost 20% when using a token-based distance metric rather than a sentencebased distance metric for selecting candidate entity pairs).The recall of our results suggests that our distance heuristic is effective at dramatically reducing the number of evaluated entity combinations without removing too many valid combinations, but the precision indicates that it may be valuable to modify or find an alternative method for producing candidate entity pairs.This could include a method which considers contextual information, instead of focusing only on the token distance between a pair of entities.
Class-specific result analysis allows us to identify where our system struggles.One such area is classes which occur less frequently in the data.pected here.We expect that collection of more data for imbalanced classes could improve performance of predictions for those classes substantially.
Recent prior work has shown that BERT and other language embedding models can become overly reliant on simple patterns in the data.Chauhan (2020) showed that the addition of the text "10 deaths" to uninformative tweets about COVID-19 caused a BERT based system to mistakenly label them as informative.We anticipate that this effect may make our system more prone to failure in edge cases, where basic clues that the model has learned in terms of entity type patterns or contextual patterns are not present.A potential solution for this problem is to augment the training data using examples which do not have certain attributes (for example, masking entity labels).This may reduce the model's tendency to learn from basic patterns rather than true relationships between text and a relation or event class.
Conclusion & Future Work
We show that contextualized knowledge graph completion using sequence classification can perform effectively on a relation extraction task in a noisy and specialized domain.Our model effectively identifies relations and events in the data, and our work leaves open many avenues for future work.
As discussed in Section 5, our system is sensitive to how candidate entity pairs are selected.We use a distance heuristic based on statistics of the training data to achieve our results, but we anticipate that more sophisticated methods for identifying promising candidate entity pairs could improve our results.We also suggest that our results could be improved by using a domain-specific model such as SciB-ERT or BioBERT (models trained on scientific papers and abstracts respectively).Prior work shows that these models often outperform standard BERT models on scientific data (Beltagy et al., 2019;Lee et al., 2020).
We believe that our results and the results of any systems which require training or fine-tuning large models would be improved by increasing available training data.Finding an effective method for augmenting existing training data and generating or collecting new training data (artificial or real) is a valuable route for further study.
Finally, we are interested in further investigation of representing relation and event identification as graph completion.Link prediction systems which support a variety of edge labels could allow us to leverage structural data from a protocol relation graph.This could enable the identification of relations which are improbable or those which may be missing from the predictions.
Figure 1 :
Figure 1: Input tokens and type ID masks for BERT pretraining.
Our macro-average F1-score is 0.69 for the seven most frequent relation classes (each of these has over 1000 examples in the training data), versus a macro-average of 0.43 for the seven least frequent relation classes (each of which has less than 1000 examples in the training data).BERT and similar language-embedding models rely on large quantities of training data, and class performance suffering due to lack of training data is not unex-
Table 1 :
Results by relation type on withheld WLPC test data. | 3,012.8 | 2020-11-01T00:00:00.000 | [
"Computer Science"
] |
Efficient Plasma Technology for the Production of Green Hydrogen from Ethanol and Water
: This study concerns the production of hydrogen from a mixture of ethanol and water. The process was conducted in plasma generated by a spark discharge. The substrates were introduced in the liquid phase into the reactor. The gaseous products formed in the spark reactor were hydrogen, carbon monoxide, carbon dioxide, methane, acetylene, and ethylene. Coke was also produced. The energy efficiency of hydrogen production was 27 mol(H 2 )/kWh, and it was 36% of the theoretical energy efficiency. The high value of the energy efficiency of hydrogen production was obtained with relatively high ethanol conversion (63%). In the spark discharge, it was possible to conduct the process under conditions in which the ethanol conversion reached 95%. However, this entailed higher energy consumption and reduced the energy efficiency of hydrogen production to 8.8 mol(H 2 )/kWh. Hydrogen production increased with increasing discharge power and feed stream. However, the hydrogen concentration was very high under all tested conditions and ranged from 57.5 to 61.5%. This means that the spark reactor is a device that can feed fuel cells, the power load of which can fluctuate.
Introduction
Hydrogen energy can be an excellent solution to two challenges: increasing energy production and reducing the environmental impact of human activity. Fuel cells enable the production of clean electricity from hydrogen. However, there is currently no viable technology to produce "green" hydrogen. Presently used industrial methods for hydrogen production are mainly based on the processing of fossil fuels. The electrolysis of water is of marginal importance due to the high cost of the hydrogen produced in this way. Other methods of producing hydrogen from renewable resources are constantly being researched to improve efficiency. "Green" hydrogen can be produced in the process of splitting water [1][2][3][4][5] and raw materials obtained from biomass, e.g., biogas [6,7], bio-alcohols , and bio-oils [30,31]. Among the raw materials derived from biomass, ethanol is the most convenient. Ethanol is easy to obtain, store, and transport. It is also a relatively safe compound for health and the environment. The steam reforming of ethanol (R1) and the water-steam gas reaction (R2) allow the production of six moles of hydrogen from one mole of ethanol. C 2 H 5 OH + H 2 O → 2CO + 4H 2 (R1) Producing hydrogen from ethanol is complex, and many different competing reactions are possible. For example, hydrocarbons and coke are produced in these reactions. Due to the competitive reactions, the efficiency of hydrogen production is much lower than is theoretically possible. Therefore, research is focused on finding conditions for hydrogen Figure 1 shows the apparatus used in this research. The liquid mixture of water and ethanol was fed to the spark reactor. A constant ethanol/water molar ratio equal to 3 was used. It is a stoichiometric ratio concerning the R3 reaction, which is a total and balanced record of R1 and R2 reactions.
Materials and Methods
The feed flow was regulated with a mass flow controller (Bronkhorst/EIEWIN, flow measurement accuracy ±2%) in the range from 0.32 to 1.42 mol/h. The quartz casing of the spark reactor had an inner diameter of 8 mm. The electrodes were made of stainless steel and had a diameter of 3.2 mm. Above the electrodes, there was a quartz fiber layer with a thickness of~10 mm. Subsequently, the vapors of the substrates passed through the plasma zone of a volume of~0.09 ccm. Water and ethanol molecules collided with high-energy electrons in this region, and chemical reactions were initiated. After passing the plasma zone, the gases were filtered. The filtered gases were directed to a water cooler condensed water and ethanol. The condensate composition was analyzed using a Thermo Scientific Trace 1300 gas chromatograph (standard error 2.3%) with a single quadrupole mass detector. The cooled gases were analyzed with an HP6890 gas chromatograph (standard error 4.9%), and an APAR AR236/2 sensor. The AR236/2 sensor was used to measure the water vapor content (humidity measurement accuracy ±2.5%, temperature measurement accuracy 0.5 • C) and gas temperature, while the HP6890 chromatograph with a thermal conductivity detector allowed the concentration of gaseous products to be measured. The amounts of produced gases were measured with an Illmer-Gasmesstechnik gas meter (accuracy 0.1 dm 3 ). Figure 1 shows the apparatus used in this research. The liquid mixture of water and ethanol was fed to the spark reactor. A constant ethanol/water molar ratio equal to 3 was used. It is a stoichiometric ratio concerning the R3 reaction, which is a total and balanced record of R1 and R2 reactions.
C2H5OH+3H2O→2CO2+6H2
(R3) The feed flow was regulated with a mass flow controller (Bronkhorst/EIEWIN, flow measurement accuracy ±2%) in the range from 0.32 to 1.42 mol/h. The quartz casing of the spark reactor had an inner diameter of 8 mm. The electrodes were made of stainless steel and had a diameter of 3.2 mm. Above the electrodes, there was a quartz fiber layer with a thickness of ~10 mm. Subsequently, the vapors of the substrates passed through the plasma zone of a volume of ~0.09 ccm. Water and ethanol molecules collided with highenergy electrons in this region, and chemical reactions were initiated. After passing the plasma zone, the gases were filtered. The filtered gases were directed to a water cooler condensed water and ethanol. The condensate composition was analyzed using a Thermo Scientific Trace 1300 gas chromatograph (standard error 2.3%) with a single quadrupole mass detector. The cooled gases were analyzed with an HP6890 gas chromatograph (standard error 4.9%), and an APAR AR236/2 sensor. The AR236/2 sensor was used to measure the water vapor content (humidity measurement accuracy ±2.5%, temperature measurement accuracy 0.5 °C) and gas temperature, while the HP6890 chromatograph with a thermal conductivity detector allowed the concentration of gaseous products to be measured. The amounts of produced gases were measured with an Illmer-Gasmesstechnik gas meter (accuracy 0.1 dm 3 ). We previously used the described apparatus and measurement methodology in the research of hydrogen production from a mixture of methanol and water [38]. We previously used the described apparatus and measurement methodology in the research of hydrogen production from a mixture of methanol and water [38].
The production of a particular gaseous compound (F[i], mol/h) was calculated from Formula (1): where Q is the gas flow at standard conditions (dm 3 /h), c i is the fraction of the compound in the cooled gas, and V is the standard molar volume of gas (22.4 dm 3 /mol). The ethanol conversion (x, %) was calculated from Formula (2): where F[EtOH] is the flow rate of ethanol at the reactor outlet (mol/h), and F 0 [EtOH] is the feed flow rate of ethanol (mol/h). The hydrogen yield (Y, %) was calculated from Formula (3): The energy efficiency of hydrogen production (E, mol(H 2 )/kWh) was calculated from Formula (4): The discharge power (P) ranged from 15 to 55 W, and it was measured using a Tektronix TDS 3032B oscilloscope (vertical accuracy ±2%, time base accuracy 20 ppm), Tektronix P6015A (attenuation 1000:1 ± 3%), and TCP202 probes (accuracy ±3%). Figure 2 shows the voltage and current waveforms recorded at the minimum and maximum discharge power. The production of a particular gaseous compound (F[i], mol/h) was calculated from Formula (1): where Q is the gas flow at standard conditions (dm 3 /h), ci is the fraction of the compound in the cooled gas, and V is the standard molar volume of gas (22.4 dm 3 /mol). The ethanol conversion (x, %) was calculated from Formula (2): where F[EtOH] is the flow rate of ethanol at the reactor outlet (mol/h), and F0[EtOH] is the feed flow rate of ethanol (mol/h). The hydrogen yield (Y, %) was calculated from Formula (3): The energy efficiency of hydrogen production (E, mol(H2)/kWh) was calculated from Formula (4): The discharge power (P) ranged from 15 to 55 W, and it was measured using a Tektronix TDS 3032B oscilloscope (vertical accuracy ±2%, time base accuracy 20 ppm), Tektronix P6015A (attenuation 1000:1 ± 3%), and TCP202 probes (accuracy ±3%). Figure 2 shows the voltage and current waveforms recorded at the minimum and maximum discharge power. The selectivity of ethanol conversion to coke (Sc, %) was calculated from Formula (5): where Gc is the coke weight stream (g/h), and Mc is the molar mass of carbon (12 g/mol). The root mean square velocity (vk, m/s) of the particles was calculated from Formula (6): vk = √( 3 · kB · T/m) (6) where kB is the Boltzmann constant (1.38 · 10 −23 J/K), m is the particle mass (kg), and T is the temperature (K). The selectivity of ethanol conversion to coke (S c , %) was calculated from Formula (5): where G c is the coke weight stream (g/h), and M c is the molar mass of carbon (12 g/mol). The root mean square velocity (v k , m/s) of the particles was calculated from Formula (6): where k B is the Boltzmann constant (1.38 · 10 −23 J/K), m is the particle mass (kg), and T is the temperature (K).
The Effect of the Discharge Power
This section presents and discusses the effect of the discharge power on producing hydrogen from a mixture of water and ethanol. The research was carried out for a steady feed stream equal to 1 mol/h. This feed stream was optimal in our previous studies conducted in the barrier discharge [15].
If the process was run according to the reactions R1 and R2, and the ethanol conversion was complete, the hydrogen and carbon dioxide concentrations would be 75% and 25%, respectively. However, competitive reactions caused the hydrogen concentration to be lower and they reached 57-58% (Table 1). Carbon monoxide was the second most concentrated gas product. Its concentration was 23-24%. The following product was methane, with the concentration ranging from 3.7 to 4.4%. The concentration of carbon dioxide was slightly lower than that of methane and ranged from 3.2 to 4.3%. Acetylene and ethylene were also formed, and their concentrations ranged from 1.2 to 3.1% and 1.1 to 2.0%, respectively. The high concentration of CO indicates that the reaction R2 is ineffective. R2 is a subsequent reaction inhibited by H 2 produced in the reaction R1. The composition of the gases practically did not depend on the discharge power. On this basis, it can be concluded that the discharge power did not significantly affect the mechanism of chemical reactions taking place in the spark discharge. This is because chemical reactions are initiated in collisions with high-energy electrons and depend on the energy of the electrons and their numbers. In the spark discharge, the electron density ranges from 10 16 to 10 18 per ccm [39,40]. This is not much compared to the number of molecules, which for an ideal gas under standard conditions is 2.7 × 10 19 per ccm. However, there are many times more collisions with electrons than collisions between other particles because electrons are much more mobile than gas molecules due to their low mass. The root mean square velocity of the particles strictly depends on the particle mass according to Formula (6).
Only some of the collisions lead to the dissociation of ethanol and water. For the collision, the electron must have sufficiently high energy to break one of the bonds. The energy of bonds in ethanol ranges from 4.10 to 5.14 eV [41,42]. The energy bond is presented in Figure 3. The energy of O-H bonding in water is 5.10 eV [43]. Only some of the electrons have such high energy. However, collisions with electrons with lower energy lead to an increase in the internal energy of the particle (R4, R5) and enable its transformation in subsequent collisions (R6-R13): Ethanol molecules that have obtained a sufficiently high internal energy can also decay into stable products (R14-R18): L. Dvonc and M. Janda [39] observed that the electron density increased with the temperature of the gases. The change was significant. The electron density was~10 16 per ccm at a gas temperature of~900 K, and at a gas temperature of~1400 K, the electron density was~10 18 per ccm. In the spark reactor, the temperature increased with the discharge power ( Figure 4). The measurement of the gas temperature in the plasma zone was infeasible. However, the images from a thermal imaging camera show that the temperature of the reactor wall in the discharge area increased with the discharge power. This means that the temperature of the gases also increased with the increasing discharge power. Thanks to this, the number of electrons increased, and there were more collisions. Moreover, the average electron energy may increase with the discharge power. However, these changes are often insignificant because the electric field affects the average electron energy [39]. The electric field does not change much for the established geometry of the reactor. But even a small increase in the average electron energy due to the power increase has a positive effect. The higher the average energy of the electrons, the more energy is transferred from the electrons to the molecules in each collision. After a smaller number of collisions, they can decay. The sequence of possible reactions of radicals and intermediate products formed in collisions of electrons with ethanol and water was presented in detail in our previous work [16].
H2O*+e→H · +OH · +e (R13) Ethanol molecules that have obtained a sufficiently high internal energy can also decay into stable products (R14-R18): L. Dvonc and M. Janda [39] observed that the electron density increased with the temperature of the gases. The change was significant. The electron density was ~10 16 per ccm at a gas temperature of ~900 K, and at a gas temperature of ~1400 K, the electron density was ~10 18 per ccm. In the spark reactor, the temperature increased with the discharge power (Figure 4). The measurement of the gas temperature in the plasma zone was infeasible. However, the images from a thermal imaging camera show that the temperature of the reactor wall in the discharge area increased with the discharge power. This means that the temperature of the gases also increased with the increasing discharge power. Thanks to this, the number of electrons increased, and there were more collisions. Moreover, the average electron energy may increase with the discharge power. However, these changes are often insignificant because the electric field affects the average electron energy [39]. The electric field does not change much for the established geometry of the reactor. But even a small increase in the average electron energy due to the power increase has a positive effect. The higher the average energy of the electrons, the more energy is transferred from the electrons to the molecules in each collision. After a smaller number of collisions, they can decay. The sequence of possible reactions of radicals and intermediate products formed in collisions of electrons with ethanol and water was presented in detail in our previous work [16]. Although the gas composition did not change with the discharge power change, the hydrogen production increased with the increase in power because the ethanol conversion increased (Figures 5 and 6). Figures 5 and 6 illustrate that hydrogen production, ethanol conversion, and hydrogen production efficiency increased rapidly with increasing power from 15 to 25 W. A further power increase resulted in a slow increase in these parameters. This resulted in a reduction in the energy yield of hydrogen production. The highest energy efficiency of 22.5 mol(H2)/kWh was obtained with a power of ~25 W (Figure 5). This is 36% of the theoretical energy efficiency for hydrogen production in the reaction R3. If the reactants are introduced into the reactor in the liquid phase, the enthalpy of reaction R3 is 341.68 kJ, which corresponds to the energy efficiency of hydrogen production of 62 mol(H2)/kWh. In the literature, a different enthalpy value of 173.54 kJ is also found, which corresponds to the energy efficiency of hydrogen production of 124.5 mol(H2)/kWh. These values are correct when reactants are vaporized in a heat exchanger and introduced into a reactor in the gas phase. These values ignore the energy used to evaporate the substrates, which is a very energy-consuming operation. Additionally, any omission of heating the substrates to temperatures higher than the standard temperature causes the demonstrated energy efficiency of hydrogen production to be higher than theoretically possible. Although the gas composition did not change with the discharge power change, the hydrogen production increased with the increase in power because the ethanol conversion increased (Figures 5 and 6). Figures 5 and 6 illustrate that hydrogen production, ethanol conversion, and hydrogen production efficiency increased rapidly with increasing power from 15 to 25 W. A further power increase resulted in a slow increase in these parameters. This resulted in a reduction in the energy yield of hydrogen production. The highest energy efficiency of 22.5 mol(H 2 )/kWh was obtained with a power of~25 W ( Figure 5). This is 36% of the theoretical energy efficiency for hydrogen production in the reaction R3. If the reactants are introduced into the reactor in the liquid phase, the enthalpy of reaction R3 is 341.68 kJ, which corresponds to the energy efficiency of hydrogen production of 62 mol(H 2 )/kWh. In the literature, a different enthalpy value of 173.54 kJ is also found, which corresponds to the energy efficiency of hydrogen production of 124.5 mol(H 2 )/kWh. These values are correct when reactants are vaporized in a heat exchanger and introduced into a reactor in the gas phase. These values ignore the energy used to evaporate the substrates, which is a very energy-consuming operation. Additionally, any omission of heating the substrates to temperatures higher than the standard temperature causes the demonstrated energy efficiency of hydrogen production to be higher than theoretically possible. Usually, hydrogen production from alcohols is more energy-efficient than hydrogen production from water. D. G. Rey et al. [4] reported that the energy efficiency of hydrogen production from water was 1.1 mol H2/kWh. N. R. Panda and D. Sahu [23] reported that the energy efficiency of hydrogen production from methanol was 1.2 mol H2/kWh. B. Sarmiento et al. [44] reported that the energy efficiency of hydrogen production from ethanol was 3.3 mol H2/kWh. The studies mentioned above were carried out in the barrier discharge. The same principle is confirmed by comparing the work carried out in the corona discharge. J. M. Kirkpatrick and B. R. Locke [5] produced hydrogen from water with Usually, hydrogen production from alcohols is more energy-efficient than hydrogen production from water. D. G. Rey et al. [4] reported that the energy efficiency of hydrogen production from water was 1.1 mol H2/kWh. N. R. Panda and D. Sahu [23] reported that the energy efficiency of hydrogen production from methanol was 1.2 mol H2/kWh. B. Sarmiento et al. [44] reported that the energy efficiency of hydrogen production from ethanol was 3.3 mol H2/kWh. The studies mentioned above were carried out in the barrier discharge. The same principle is confirmed by comparing the work carried out in the corona discharge. J. M. Kirkpatrick and B. R. Locke [5] produced hydrogen from water with Usually, hydrogen production from alcohols is more energy-efficient than hydrogen production from water. D. G. Rey et al. [4] reported that the energy efficiency of hydrogen production from water was 1.1 mol H 2 /kWh. N. R. Panda and D. Sahu [23] reported that the energy efficiency of hydrogen production from methanol was 1.2 mol H 2 /kWh. B. Sarmiento et al. [44] reported that the energy efficiency of hydrogen production from ethanol was 3.3 mol H 2 /kWh. The studies mentioned above were carried out in the barrier discharge. The same principle is confirmed by comparing the work carried out in the corona discharge. J. M. Kirkpatrick and B. R. Locke [5] produced hydrogen from water with energy efficiency of 0.12 mol H 2 /kWh, while X. Zhu et al. [18] produced hydrogen from Energies 2022, 15, 2777 9 of 14 ethanol with energy efficiency of 10 mol H 2 /kWh. Therefore, alcohols are an attractive raw material.
The Effect of the Feed Flow
The feed flow influence on hydrogen production was studied for the power of 25 W. For this power, the energy efficiency reached the maximum ( Figure 5). A feed flow influences the course of chemical reactions because it affects the residence time of reactants. Long residence times of reactants in a reactor and high conversions can be achieved when a low feed flow is used. The confirmation of this principle can be seen in Figure 7. The ethanol conversion and hydrogen yield decreased with increasing feed flow because the residence time of the reagents decreased.
Energies 2022, 15, x FOR PEER REVIEW 9 of 14 energy efficiency of 0.12 mol H2/kWh, while X. Zhu et al. [18] produced hydrogen from ethanol with energy efficiency of 10 mol H2/kWh. Therefore, alcohols are an attractive raw material.
The Effect of the Feed Flow
The feed flow influence on hydrogen production was studied for the power of 25 W. For this power, the energy efficiency reached the maximum ( Figure 5). A feed flow influences the course of chemical reactions because it affects the residence time of reactants. Long residence times of reactants in a reactor and high conversions can be achieved when a low feed flow is used. The confirmation of this principle can be seen in Figure 7. The ethanol conversion and hydrogen yield decreased with increasing feed flow because the residence time of the reagents decreased. In cases where many chemical reactions occur, the reduction of the residence time often affects the product's composition. In producing hydrogen from a mixture of water and ethanol, the product of sequential reactions is carbon dioxide. Therefore, its concentration increased with the increase of the average residence time of the reactants in the reactor ( Table 2). The concentrations of CO, C2H2, and C2H4 decreased as they were consumed in the sequential reactions generating CO2 and H2. In cases where many chemical reactions occur, the reduction of the residence time often affects the product's composition. In producing hydrogen from a mixture of water and ethanol, the product of sequential reactions is carbon dioxide. Therefore, its concentration increased with the increase of the average residence time of the reactants in the reactor ( Table 2). The concentrations of CO, C 2 H 2 , and C 2 H 4 decreased as they were consumed in the sequential reactions generating CO 2 and H 2 . The decrease in the selectivity of the ethanol conversion to coke with the increase in the flow rate of the reactants (Figure 8) also results from the shortening of the residence time of the reactants.
The decrease in the selectivity of the ethanol conversion to coke with the increase in the flow rate of the reactants (Figure 8) also results from the shortening of the residence time of the reactants. Coke can be formed not only from the decomposition of ethanol (R17) but also in several sequential reactions (R19-R21): Surprisingly, the concentration of CH4 remained unchanged, although similarly to C2H2 and C2H4, its concentration should decrease with the progress of steam reforming of hydrocarbons. The consumption of CH4 in the reforming was probably compensated by the production of CH4 in the methanation (R22) and Sabatier (R23) reactions. The high concentration of H2 and CO promoted methanation (R22), and the increase in the CO2 concentration accelerated the Sabatier reactions (R23): The increase in the feed flow increased the hydrogen production and the energy efficiency of hydrogen production ( Figure 9). The increase in hydrogen production resulted from the introduction of more reactants into the reactor so that even with a lower conversion, the production was higher. On the other hand, the increase in the energy efficiency of hydrogen production resulted from the decrease in ethanol conversion. Low ethanol conversion means that the system is further away from thermodynamic equilibrium as the short residence time of the reactants prevented reaching this equilibrium. The greater the shift of the system from equilibrium, the faster the chemical reactions run because there are many substrates and few reaction products. Therefore, the increase in the feed flow caused a decrease in ethanol conversion and a greater shift of the composition of the Coke can be formed not only from the decomposition of ethanol (R17) but also in several sequential reactions (R19-R21):
2CO
C + CO 2 (R19) Surprisingly, the concentration of CH 4 remained unchanged, although similarly to C 2 H 2 and C 2 H 4 , its concentration should decrease with the progress of steam reforming of hydrocarbons. The consumption of CH4 in the reforming was probably compensated by the production of CH 4 in the methanation (R22) and Sabatier (R23) reactions. The high concentration of H 2 and CO promoted methanation (R22), and the increase in the CO 2 concentration accelerated the Sabatier reactions (R23): The increase in the feed flow increased the hydrogen production and the energy efficiency of hydrogen production ( Figure 9). The increase in hydrogen production resulted from the introduction of more reactants into the reactor so that even with a lower conversion, the production was higher. On the other hand, the increase in the energy efficiency of hydrogen production resulted from the decrease in ethanol conversion. Low ethanol conversion means that the system is further away from thermodynamic equilibrium as the short residence time of the reactants prevented reaching this equilibrium. The greater the shift of the system from equilibrium, the faster the chemical reactions run because there are many substrates and few reaction products. Therefore, the increase in the feed flow caused a decrease in ethanol conversion and a greater shift of the composition of the reaction mixture from the state of thermodynamic equilibrium. As a result, the rate of chemical reactions was faster. A faster reaction rate resulted in better utilization of the energy fed to the reactor. The disadvantages of reducing the residence time of the reactants were low ethanol conversions and a significant amount of substrate was left unused. The same effect of changing the feed flow was observed previously in the barrier discharge reactor [15]. Additionally, in other plasma processes, reducing the plasma treatment time while maintaining the same discharge power reduced the conversion of substrates [45]. reaction mixture from the state of thermodynamic equilibrium. As a result, the rate of chemical reactions was faster. A faster reaction rate resulted in better utilization of the energy fed to the reactor. The disadvantages of reducing the residence time of the reactants were low ethanol conversions and a significant amount of substrate was left unused. The same effect of changing the feed flow was observed previously in the barrier discharge reactor [15]. Additionally, in other plasma processes, reducing the plasma treatment time while maintaining the same discharge power reduced the conversion of substrates [45].
Conclusions
Ethanol can be an excellent raw material for the production of green hydrogen as it is produced in the fermentation process from biomass. CO2 emitted in the production of ethanol and hydrogen is re-consumed by plants. As a result, hydrogen production from ethanol is a zero-emission method. Unfortunately, despite extensive research, a cost-effective method of producing hydrogen from ethanol has not yet been developed. The main problem is coke formation causing deactivation of catalysts. The use of excess water reduces coking but requires more energy to heat water, making the hydrogen production process unprofitable. From an energy point of view, it is most advantageous to use a stoichiometric water to ethanol ratio equal to 3, which makes it impossible to use catalysts. On the other hand, coke does not interfere with plasma reactors' operation if they are correctly constructed. In this work, a plasma reactor was used, in which plasma was generated by a spark discharge insensitive to coking. The coke was removed from the reactor by the gaseous product stream. A significant advantage of the spark discharge was the possibility of generating it from a mixture of water and ethanol without introducing additional gases facilitating electric breakdown. The reactor used was characterized by high flexibility. The tests were conducted with a feed flow from 0.32 to 1.42 mol/h and discharge power from 15.4 to 54.7 W.
The discharge power affected the ethanol conversion, hydrogen production, and energy efficiency of the hydrogen production. The ethanol conversion and hydrogen production increased with increasing discharge power, while the energy yield was maximum at 25 W.
Conclusions
Ethanol can be an excellent raw material for the production of green hydrogen as it is produced in the fermentation process from biomass. CO 2 emitted in the production of ethanol and hydrogen is re-consumed by plants. As a result, hydrogen production from ethanol is a zero-emission method. Unfortunately, despite extensive research, a costeffective method of producing hydrogen from ethanol has not yet been developed. The main problem is coke formation causing deactivation of catalysts. The use of excess water reduces coking but requires more energy to heat water, making the hydrogen production process unprofitable. From an energy point of view, it is most advantageous to use a stoichiometric water to ethanol ratio equal to 3, which makes it impossible to use catalysts. On the other hand, coke does not interfere with plasma reactors' operation if they are correctly constructed. In this work, a plasma reactor was used, in which plasma was generated by a spark discharge insensitive to coking. The coke was removed from the reactor by the gaseous product stream. A significant advantage of the spark discharge was the possibility of generating it from a mixture of water and ethanol without introducing additional gases facilitating electric breakdown. The reactor used was characterized by high flexibility. The tests were conducted with a feed flow from 0.32 to 1.42 mol/h and discharge power from 15.4 to 54.7 W.
The discharge power affected the ethanol conversion, hydrogen production, and energy efficiency of the hydrogen production. The ethanol conversion and hydrogen production increased with increasing discharge power, while the energy yield was maximum at 25 W.
The feed flow influenced ethanol conversion, hydrogen production, energy efficiency, and gas composition. The ethanol conversion decreased with increasing the feed flow, while the hydrogen production and energy efficiency increased. The concentrations of H 2 , CO 2 , C 2 H 2 , and C 2 H 4 decreased with the increase in the feed flow, while the concentration of CO increased. The CH 4 concentration did not change. Increasing the feed flow reduced the selectivity of ethanol conversion to coke, which is a favorable phenomenon.
Although the efficiency of hydrogen production changed with the change of process conditions, the concentration of hydrogen was consistently high and ranged from 57.5 to 61.5%. Carbon monoxide was also formed in large quantities. Much less carbon dioxide, methane, acetylene, and ethylene were produced. The concentration of hydrogen is sufficient to supply solid oxide fuel cells with such gas. Typically, carbon monoxide and hydrocarbons do not interfere with the operation of these cells. In the high operating temperature of these cells, these compounds will be oxidized, and the heat of their oxidation heats the cell.
The high concentration of carbon monoxide (21-24.3%) in the produced gas indicates that hydrogen production can be significantly increased by increasing the CO conversion in the water-gas shift reaction. This reaction occurs to a small extent in a spark discharge, evidenced by a low CO 2 concentration (3.7-6.1%). | 7,333.2 | 2022-04-10T00:00:00.000 | [
"Engineering"
] |
Spectral interferometry-based chromatic dispersion measurement of fibre including the zero-dispersion wavelength
We report on a simple spectral interferometric technique for chromatic dispersion measurement of a short length optical fibre including the zero-dispersion wavelength. The method utilizes a supercontinuum source, a dispersion balanced Mach-Zehnder interferometer and a fibre under test of known length inserted in one of the interferometer arms and the other arm with adjustable path length. The method is based on resolving one spectral interferogram (spectral fringes) by a low-resolution NIR spectrometer. The fringe order versus the precise wavelength position of the interference extreme in the recorded spectral signal is fitted to the approximate function from which the chromatic dispersion is obtained. We verify the applicability of the method by measuring the chromatic dispersion of two polarization modes in a birefringent holey fibre. The measurement results are compared with those obtained by a broad spectral range (500–1600 nm) measurement method, and good agreement is confirmed. [DOI: http://dx.doi.org/10.2971/jeos.2012.12017]
INTRODUCTION
The chromatic dispersion, which is a significant characteristic of optical fibre, affects the bandwidth of a high speed optical transmission system through pulse broadening and nonlinear optical distortion.Chromatic dispersion of long length optical fibres is determined by two widely used methods [1]: the time-of-flight method which measures relative temporal delays for pulses at different wavelengths, and the modulation phase shift technique which measures the phase delay of a modulated signal as a function of wavelength.Recently, a rapid and accurate spectral interferometry-based measurement method using an asymmetric Sagnac interferometer has been presented [2].
White-light interferometry based on the use of a broadband source in combination with a standard Michelson or a Mach-Zehnder interferometer [3] is considered as one of the best tools for dispersion characterization of short length optical fibres.White-light interferometry usually utilizes a temporal method [4] or a spectral method [5]- [13].The spectral method is based on the observation of spectral fringes in the vicinity of a stationary-phase point [5]- [12] or far from it [13].The feasibility of the interferometric techniques has been demonstrated in measuring the dispersion of holey fibres [14]- [16] usable for supercontinuum generation [17].However, an accurate control of the chromatic dispersion is required for the application [18].As an example, a highly-birefringent holey fibre [19] has been designed and fabricated with the zero-dispersion wavelength (ZDW) close to a 1064 nm of a microchip laser, enabling savings in size and cost of a supercontinuum source.Moreover, these broadband and high-power sources have enabled to increase the comfort of dispersion measurement [16].
In this paper, a simple technique, based on spectral interferometry and employing a NIR low-resolution spectrometer, is used for chromatic dispersion measurement of a short length optical fibre including the ZDW.The method utilizes a supercontinuum source, a dispersion balanced Mach-Zehnder interferometer and a fibre under test of known length placed in one arm of the interferometer while the other arm has adjustable path length.The method is based on resolving one spectral interferogram from which the fringe order versus the precise wavelength position of the interference extreme is obtained [20].This dependence is fitted to the approximate function enabling to obtain the chromatic dispersion.We verify the applicability of the method by measuring the chromatic dispersion of two polarization modes in a birefringent holey fibre.Good agreement between the measurement results and those obtained by a broad spectral range (500-1600 nm) measurement method is confirmed.
EXPERIMENTAL METHOD
Consider a dispersion balanced Mach-Zehnder interferometer (see Figure 1) and a fibre under test of length z that supports two polarization modes over a broad wavelength range.The fibre, which is characterized by the effective indices n x (λ) and n y (λ), is inserted into the first (test) arm of the interferometer and the other (reference) arm has the adjustable path length L in the air.If the analyser at the output interferometer discriminates the x polarization only, the optical path difference (OPD) ∆ MZ (λ) between the beams in the interferometer is given by where l is the path length in the air in the test arm.The group OPD ∆ g MZ (λ) is similar to Eq. ( 1), in which n x (λ) is replaced by the group effective index N x (λ) given by the relation Next, consider that the spectral interference fringes can be resolved by a spectrometer used at the output of the Mach-Zehnder interferometer.The spectral signal (interference fringes) recorded by the spectrometer of a Gaussian response function can be expressed as [12] S MZ (λ where V is a visibility term and ∆λ R is the width of the spectrometer response function. To resolve spectral fringes in a spectral range from λ 1 to λ 2 , the group OPD ∆ g MZ (λ) must satisfy the condition ∆ g MZ (λ) < λ 2 1 /∆λ R .We can resolve in the recorded spectral interferogram a suitable number of spectral fringes.The interference maximum (a bright fringe) satisfies the relation where m is the order of interference of the fringes.After counting i bright spectral fringes in the direction of shorter wavelengths, Eq. ( 4) can be written as The wavelength dependence of the effective index n x (λ) can be well approximated by a modified Cauchy dispersion formula [21] where A i are the coefficients.On substituting from Eq. ( 5) into Eq.( 6), we obtain where and a 5 = −A 5 z.By a least-squares fitting of Eq. ( 7), the constants a i and m are determined, and knowing the fibre length z, the wavelength dependence of the group effective index N x (λ) can be deduced from Eqs. ( 2) and ( 6).The chromatic dispersion D x (λ) can be evaluated as where c is the velocity of light in vacuum.The ZDW λ x 0 is given by D x (λ x 0 ) = 0. Similarly, the dispersion slope S x (λ) = dN x (λ)/dλ and its value S x (λ x 0 ) at the ZDW λ x 0 can be determined.If the analyser at the output of the interferometer discriminates the y polarization only, the chromatic dispersion D y (λ), the ZDW λ y 0 given by D y (λ y 0 ) = 0 and the dispersion slope S y (λ y 0 ) of the y-polarization mode can also be measured.
The method can also be applied for fibres with two ZDWs provided that a sufficient number of spectral fringes can be resolved in a measured spectral range.In addition, the degree of Laurent polynomial (a modified Cauchy dispersion formula) used in the data evaluation has to be chosen with respect to this fact.
The spectral signal is recorded by the spectrometer in the transmission mode after a dark spectrum and a reference spectrum (without the interference) are stored.The spectrometer has a spectral operation range from 850 to 1700 nm and its read optical fibre with a 50 µm core diameter results in a Gaussian response function of ∆λ R ≈9 nm.In the test arm of the interferometer, a fibre sample of length z = (77640 ± 10) µm is placed.The fibre sample is a pure silica birefringent holey fibre similar to that analysed in a previous paper [9].
EXPERIMENTAL RESULTS AND DISCUSSION
Prior to the measurement we utilized the main advantage of the set-up, which is in fibre connection of a light source (that can be varied) with the interferometer.We used a laser diode (λ ≈ 670 nm) instead of the supercontinuum source to check the precise placement and alignment of the optical components in both arms of the interferometer by observing the interference pattern.The proper excitation of the fibre was also inspected [9], and in order to measure the chromatic dispersion of the xor y-polarization mode, the polarizer and analyser need to be oriented along the short or the long axis of the far-field pattern [12].
In the chromatic dispersion measurement, such a path length in the reference arm of the interferometer was adjusted to resolve interference fringes in a spectral range as wide as possible.Figure 2(a) shows an example of the spectral signal recorded by the spectrometer for the x-polarization mode.It clearly shows the effect of the limiting resolving power of the spectrometer on the visibility of the spectral interference fringes [see Eq. ( 3)].The visibility is the highest in the vicinities of two different equalization wavelengths at which the group OPD in the interferometer is zero.Between the two equalization wavelengths the minimum in the group effective index N x (λ) or equivalently the ZDW is located.
The procedure used to retrieve the chromatic dispersion D x (λ) from the recorded spectral signal consists of two steps.In the first step, the wavelengths of interference maxima and minima are determined for the recorded signal.The spectral interference fringes are numbered in such a way that the The solid lines correspond to the fit of Eq. (7).
fringe order increases in the direction of longer wavelengths to the first equalization wavelength (≈ 0.95 µm), decreases to the second equalization wavelength (≈ 1.28 µm) and from it once again increases in the direction of longer wavelengths.Figure 3(a) shows by the markers the dependence of the fringe order on the wavelength obtained from the spectrum shown in Figure 2(a).In the second step, a least-squares fit of Eq. ( 7) is used to the dependence which gives the constants a i and m. Figure 3(a) shows the results of the fit by the solid line.Then the constants a i and the known fibre length z give the constants A i needed in the determination of the chromatic dispersion D x (λ) according to Eq. ( 8).It is shown in Figure 4(a) together with D x (λ) measured in the same set-up by a broad spectral range (500-1600 nm) measurement technique presented in a previous paper [12].It is clearly seen that both functions agree well in the vicinity of the ZDW.Their different courses, especially in a short-wavelength range, are caused by the approximation (6) used in this narrower spectral range.
The constants a i obtained from a least-squares fit of Eq. ( 7) can serve as the first estimate in a least-squares fit of Eq. ( 3) to the recorded spectral signal.The result of the fit is shown in Figure 2(a) by the solid curve and it illustrates very good agreement.The corresponding ZDWs λ x 0 and the dispersion slopes S x (λ x 0 ) are listed in Table 1.We estimate the error of determin- 1 The ZDW λ 0 and the dispersion slope S(λ 0 ) for xand y-polarization modes.ing the ZDW, which is affected by the wavelength sampling of a low-resolution spectrometer [12], below 1 nm and the error of the dispersion slope below 10 fs km −1 nm −2 .It is supposed that the smaller error in determining the ZDW can be attained using an optical spectrum analyser with a denser wavelength sampling.Similar procedure with all the mentioned steps was used for the spectral signal shown in Figure 2(b), i.e., for the y-polarization mode.Figure 4(b) shows the chromatic dispersion D y (λ) which is shifted to shorter wavelengths in comparison with D x (λ).Table 1 then lists the ZDWs λ y 0 and the dispersion slopes S y (λ y 0 ).
CONCLUSIONS
In this paper, a simple technique for chromatic dispersion measurement of short length optical fibres, including the ZDW, is presented.The technique, which is based on spec-tral interferometry employing a low-resolution NIR spectrometer, utilizes a supercontinuum source, a dispersion balanced Mach-Zehnder interferometer and a fibre under test placed in one arm of the interferometer and the other arm with adjustable path length.Within the method, the precise wavelength positions of the interference maxima and minima in one spectral interferogram are determined.These are used to retrieve the fringe order versus the wavelength which is fitted to the approximate function enabling to obtain the chromatic dispersion.We verify the applicability of the method by measuring the chromatic dispersion of two polarization modes in a birefringent holey fibre.The measurement results are compared with those obtained by a broad spectral range measurement method [12], and good agreement is confirmed.The use of the method, whose main advantage is in the measurement comfort, i.e. in rapid and accurate measurements of the ZDW and the dispersion slope, can be extended for fibres with the ZDW in the VIS spectral range.Moreover, in comparison with fibre-optic implementations of measuring setups the presented one is dispersion balanced and it enables an easy inspection of the optical field at the output of the test arm of the interferometer.
FIG. 1
FIG. 1 Experimental set-up for measuring the chromatic dispersion of a fibre under test.
FIG. 3
FIG. 3 Fringe order (markers) as a function of wavelength retrieved from the recorded spectral signal shown in Figure 2: (a) x-polarization mode, (b) y-polarization mode.
FIG. 4
FIG. 4 Chromatic dispersion obtained from the fit shown in Figure 3: (a) x-polarization mode, (b) y-polarization mode.The dashed lines are the result of a broad spectral range measurement. | 3,007.2 | 2012-05-30T00:00:00.000 | [
"Physics"
] |
Wing Morphometrics of Aedes Mosquitoes from North-Eastern France
Simple Summary Mosquitoes act as vectors of arboviruses and their correct identification is very important to understanding the diseases they transmit. To date, this identification is based on several techniques that are either expensive or time consuming. Wing geometric morphometrics allow fast and accurate mosquito identification. By analyzing the pattern of wing venation, it is possible to separate mosquito species. We applied this technique on six Aedes mosquito species from north-eastern France. Our results show a very good differentiation of these species. The use of wing geometric morphometrics could increase the efficiency of field entomologists in case of viral outbreaks. Integrated with existing morphological identification software, it might help relocate mosquito identification from the lab to the field. Abstract Background: In the context of the increasing circulation of arboviruses, a simple, fast and reliable identification method for mosquitoes is needed. Geometric morphometrics have proven useful for mosquito classification and have been used around the world on known vectors such as Aedes albopictus. Morphometrics applied on French indigenous mosquitoes would prove useful in the case of autochthonous outbreaks of arboviral diseases. Methods: We applied geometric morphometric analysis on six indigenous and invasive species of the Aedes genus in order to evaluate its efficiency for mosquito classification. Results: Six species of Aedes mosquitoes (Ae. albopictus, Ae. cantans, Ae. cinereus, Ae. sticticus, Ae. japonicus and Ae. rusticus) were successfully differentiated with Canonical Variate Analysis of the Procrustes dataset of superimposed coordinates of 18 wing landmarks. Conclusions: Geometric morphometrics are effective tools for the rapid, inexpensive and reliable classification of at least six species of the Aedes genus in France.
Introduction
Identification of mosquitoes is a matter of public health. Numerous mosquitoes are proven vectors of human or zoonotic arboviruses, such as dengue (DENV), chikungunya (CHIKV), West Nile (WNV) or Usutu (USUV). Recently, Southern Europe suffered autochthonous dengue epidemics [1]. These highlight the need for rapid vector identification, surveillance and control. Morphological methods, initially used for the description of original species and their comparisons, are the main means to quickly identify mosquitoes. They rely upon dichotomic/polytomous keys, illustrated simplified keys and interactive keys [2]. The latter, with regard to the European fauna, were firstly developed in 2000 [3] and were recently updated using the Xper2 software [4], leading to MosKeyTool [5]. This interactive identification key for mosquitoes of the Mediterranean region requires updates on fauna composition and morphological data, but also well-preserved specimens analyzed by expert personnel. While such morphological tools are very helpful, their routine use can turn out to be time-consuming. With the advent of molecular biology, molecular tools were developed in order to accurately identify mosquito species. Mostly based on barcoding techniques (analysis of the cytochrome oxidase I gene) [6], the sequencing and comparison of sequences with online databases (GenBank, BOLD) provide a reliable identification method [7,8]. However, some cryptic species like those of the Culex pipiens complex require further analysis of the ACE2 (acetylcholinesterase) gene and microsatellites to achieve accurate identification [9,10]. In addition to barcoding techniques, more precise molecular tools were developed in order to identify mosquitoes belonging to the same species complex. For instance, the multiplex allele-specific PCR technique was used to diagnose similar Aedes mosquitoes from the Stegomyia subgenus [11] and mosquitoes from the Anopheles gambiae and Anopheles barbirostris complexes [12,13]. In another area of molecular biology, loopmediated isothermal amplification (LAMP) assays were created with possible outcomes in field surveillance of invasive species [14]. Finally, proteomic approaches have recently flourished in entomological identification. The MALDI-ToF technique has been successfully applied for mosquito (both adults and larvae) and blood-meal identification [15][16][17]. These approaches appear to be accurate, but are time-consuming, somewhat expensive and need consequent laboratory equipment to be performed. Barcoding can, however, be of help to identify collections or damaged specimens.
In the 2000s, the emergence of geometric morphometrics (GM) opened a new field in mosquito identification and analysis. GM is defined as the statistical analysis of form based on Cartesian landmark coordinates [18]. This approach is based on the analysis of point coordinates on the wings. A mathematical transformation can be used to extract data and then classify mosquito species [19]. GM became widely used after the "revolution in morphometrics" that occurred in the 1990s [20]. This technique shows a broad range of applications in biology in fields such as medical imaging, anthropology or even botany [21][22][23]. In the field of medical entomology, the use of GM made it possible to further analyze insect populations. As the emergence of arboviruses is on the rise, populations of vectors have been of interest for GM studies. Quite naturally, insect families such as Muscidae, Reduvidae, Ceratopogonidae or Culicidae have been exhaustively studied [24].
Currently, GM is used in mosquito classification and the survey of the effects of biotic and abiotic factors on mosquito populations [25][26][27][28]. However, this technique is mostly applied to the three main arbovirus vectors: Aedes, Anopheles and Culex mosquitoes. GM has proven reliable in the identification of the genus Aedes, such as Ae. aegypti and Ae. albopictus (the main vectors of dengue fever), and to compare the life and trait variations among these populations [28,29]. For the Anopheles genus, GM was able to improve reliable diagnosis for some sympatric Anopheles species in South America, for instance, An. cruzii, An. homunculus and An. bellator [30]. Within the Culex genera, reliable morphological discrimination between Cx. pipiens and Cx. torrentium relies on GM to separate females and observe the genitalia of males [31]. Since vector groups are substantially found in the GM literature, entomologists began to show interest in species of lesser epidemiological importance [32]. Nevertheless, as there is a non-negligible possibility of vector competence of these species, such studies increase preparedness in the case of unexpected arboviruses emergence. GM studies performed on vectors in metropolitan France have been mostly applied to the Psychodidae and Ceratopogonidae families, such as the genus Phlebotomus or Culicoides [33,34]. Mosquito vectors of metropolitan France belong to the genera Aedes and Culex. French Ae. albopictus has been assessed as an effective vector of DENV, CHIKV and ZIKV [35][36][37]. Cx. modestus and Cx. pipiens from southern France have been characterized as competent for WNV transmission [38]. However, to the best of our knowledge, none of the autochthonous or invasive populations of French Aedes mosquitoes have been submitted to GM analysis.
In the present study, we propose an analysis of wing traits and the classification of mosquito species endemic to north-eastern France. Our sampling challenges several arbovirus vectors (Ae. albopictus, Ae. cinereus s.l., Ae. sticticus and Ae. japonicus) [39] and includes a couple of species without any proven vector status (Ae. cantans and Ae. rusticus).
Materials and Methods
Female mosquitoes were captured from 2018 to 2019 in the Grand-Est region, in the localities of Berru, Châlons-sur-Vesle, Reichstett and Schiltigheim ( Figure 1). Females were collected with BG Sentinel © (Biogents, Regensburg, Germany) traps and by human-landing techniques ( Table 1). Samples were brought back to the laboratory and placed into cages prior to identification, except for Ae. albopictus and Ae. japonicus, which were stored in 70% ethanol until dissection and analysis. Mosquitoes were anesthetized by cold, morphologically identified at the species level using taxonomic keys (Schaffner et al. and Möhrig [3,40]) and euthanatized. Right wings were dissected under a stereomicroscope, underwent mechanical treatment to remove scales [41], dehydrated in successive ethanol baths and mounted on slides with Euparal mounting medium © ) (Carl Roth, Karlsruhe, Germany). Legs were used for molecular identification. Samples were randomly chosen within each group and went through a molecular barcoding identification. DNA was extracted with the DNeasy Blood and Tissue extraction kit (Qiagen, Hilden Germany) following the manufacturer's instruction. Polymerase Chain Reaction performed on a 648 bp fragment of the COI gene was set as follows: initial denaturation at 94 • C for 30 s, followed by 5 cycles at 94 • C for 30 s, 45 • C for 30 s and 72 • C for 1 min, then 35 cycles at 94 • C for 30 s, 51 • C for 30 s, 72 • C for 1 min and a final elongation step at 72 • C for 10 min.
Amplicons went through Sanger sequencing (Genewiz, Leipzig, Germany). Sequences were compared to existing GenBanK sequences with the BLAST algorithm [43] and identification was considered accurate above a 99% similarity.
Pictures were taken using the Stream Essentials software version 1.7 and a DP-26 video camera connected to a SZX10 stereomicroscope (Olympus, Tokyo, Japan). All specimens were captured with a X2 magnification. Pictures were saved in JPEG format, and the work files were built with TPS Util © version 1.76. In total, 18 landmarks were manually digitized by one of the authors (JPM) with TPSDig © version 2.31 [44], as shown in Figure 2. Error assessment: In order to evaluate the error in landmark digitization, we performed a Pearson correlation test on a subset of 76 randomly chosen pictures digitized twice by the same operator (JPM).
Landmark analysis: Coordinates of the 18 landmarks were imported in RStudio software (version 1.2.5019) [45] and processed within the geomorph package (version 3.2.1) [46]. Coordinates were aligned by performing Procrustes superimposition (Figure 3). The mean positions of the landmarks per species are shown in Figure 4. Plots exported from R were made with the generic plot function.
Coordinates in TPS format were imported in MorphoJ software version 1.07a [47]. Multivariate regression over the Procrustes coordinates was performed in order to evaluate the allometric influence of size over shape. Canonical Variate Analysis (CVA) was applied on the coordinates and Mahalanobis distances were computed to study the similarity between species. Pairwise cross-validated species reclassification tests with 1000 permutation runs were conducted. This test aims to quantify the rate of correct reclassification between samples.
Mosquito Collection and Identification
Taking into account their wing integrity, a total of 148 females has been selected ( Table 1). Sequences of the specimens sequenced in the present study are available in Gen-Bank under accession numbers MW843020 to MW843031.
Error Measurement
The Pearson correlation test on our data subset showed a good repeatability of our digitization process (correlation coefficient of 0.9999639, 95 percent confidence interval: 0.9999611-0.9999665, p-value < 0.0001).
Mean Shapes
Procrustes superimposition performed on the raw coordinates made it possible to align all landmarks positions (Figure 3). For each species, the median position of each landmark was processed and allowed to draw the following composite and observe the maximum deviation for landmarks 10 to 18. (Figure 4).
Allometric Regression
Multivariate regression of the Procrustes coordinates on CS shows an allometric effect of wing size on wing shape (3.95%, p < 0.0001). We did not choose to remove it as we consider, like Wilke et al., that allometric size variation is a part of the process of species identification [19].
Canonical Variate Analysis
Canonical Variate Analysis performed on our dataset accounted for 86.73% of the total variance on the first two canonical variates. The specimens from the six species studied here belong to four subgenera: Ae. albopictus belongs to the subgenus Stegomyia, Ae. japonicus to the subgenus Finlaya, Ae. cinereus s.l. to the subgenus Aedes, Ae. cantans, Ae. rusticus and Ae. sticticus to the subgenus Ochlerotatus. Figure 5 shows a relative clustering between the Stegomyia and Aedes subgenera. Species appear to be well segregated with low overlapping. The pairwise cross-validated species reclassification test shows an accuracy of 98%. The detailed pairwise cross-validated species reclassification test is available in Table 2. A neighbor-joining tree was performed on Mahalanobis distances between these species (Figure 6). (75-90%). The high values shared by the other taxa can be explained by the disparity of the morphological characters separating the processed species as well as their respective sizes. This tree shows the branching of Ae. cantans, Ae. rusticus and Ae. sticticus, all members of the subgenus Ochlerotatus, well supported by a bootstrap rate of 100%. The branch including Ae. albopictus, Ae. cinereus and Ae. japonicus is not supported by bootstrap.
Discussion
In the present paper, we show that morphometric tools are efficient to classify Aedes mosquitoes from north-eastern France. We focused our sampling on this genus because it includes most of the vectors of mosquito-borne arboviruses. Ae. albopictus is an efficient vector of DENV, although less efficient than Ae. aegypti [49]. French populations of Ae. albopictus are competent for DENV [37] and can also transmit CHIKV and ZIKV [35,36]. In Germany, the Netherlands and Switzerland, Ae. japonicus was shown to be an effective vector of CHIKV, DENV, USUV and ZIKV [50][51][52][53]. The vector competence of Ae. cantans, Ae. cinereus, Ae. rusticus and Ae. sticticus remains mostly unknown, although Ae. cantans has been found positive for WNV in some recent screenings [54]. Despite the lack of data about their vector competence, these species could be locally abundant and responsible for nuisance (personal observation).
The goal of the neighbor-joining tree built ( Figure 6) is not to analyze the evolution patterns of these species, as both the sampling and the methods used are not appropriate for this purpose. The tree emphasizes that the three members of the subgenus Ochlerotatus (Ae. cantans, Ae. rusticus and Ae. sticticus) are clustered together. This means that their wings share more similarities than with the wings of other species. The origin of these similarities could be of phylogenetical inheritance providing similar structures (they belong to the same subgenus) or could be linked to their wing sizes, which are the largest across our samples (personal observation). Conversely, Ae. albopictus and Ae. japonicus are branched together, despite the fact that they belong to different subgenera.
Morphometrics have been successfully used in different applications, such as the discrimination and identification of mosquitoes (including sibling species, such as Cx. pipiens and Cx. torrentium [31], or sympatric Anopheles [30]) and to assess the influence of biotic or abiotic factors on mosquito wings [26].
GM have proven effective in the entomological field for species differentiation or the analysis of cryptic complexes. In this study, we successfully applied geometric morphometrics on French indigenous and invasive Aedes wings. This technique allowed a rapid and effective classification of six species of the Aedes genus: Ae. albopictus, Ae. cantans, Ae. cinereus s.l., Ae. japonicus, Ae. rusticus and Ae. sticticus. GM has already been used in Europe to identify female mosquitoes of autochthonous and invasive species [55]. Nevertheless, this technique is still struggling to differentiate between closely related species, such as Ae. annulipes and Ae. cantans [19,55]. Our results are in accordance with other studies performed in Europe.
Due to all the morphometric literature, researchers are steadily building a database of wing patterns. It would be interesting if all this worldwide data could be merged in order to create a global catalog of mosquito wing patterns. As some authors have shown, the landmark disposition of two geographically isolated mosquito populations from the same species can show pattern variation [28]. Nonetheless, such large databases could be of help to create worldwide tools for mosquito identification.
GM is a valuable tool to prepare for the emergence of arboviruses. Exhaustive databases could be built and made available to that end. Integration of GM tools into identification software (such as MosKeyTool) could help ease the process of identification, allowing beginner field entomologists to make accurate identifications, and confirmed entomologists to save valuable time in the case of an epidemic event.
Conclusions
Geometric morphometrics are a proven efficient tool in mosquito classification [19]. They allow the rapid and reliable identification of mosquito species, including closely related species and genera. Six autochthonous and invasive Aedes species from the northeast of France were successfully segregated in this study, with a correct reclassification rate of 98%.
As the number of morphological experts decreases, morphometric identification could be of assistance when molecular identification cannot be performed (i.e., specimens deposited in curated collections, especially type-specimens stored in museums). Today, we are witnessing an increasing number of outbreaks of mosquito-borne emerging and re-emerging diseases. In this context, field studies are mandatory to assess the presence of known vectors. Morphometrics could reduce the processing time of samples caught in the field and directly decrease latency between entomological investigation and targeted vector control operations.
Geometric morphometrics are a developing field of biological studies. The principal flaw of this technique is that landmarks must be placed manually, meaning human error is a variable in the rigorous mathematical treatment of this method. Advances in machine learning and computer vision will hopefully make it possible to automatize the entire analysis process in the near future. | 3,837.4 | 2021-04-01T00:00:00.000 | [
"Biology",
"Environmental Science"
] |
Challenges of Teaching English for Elementary School Student in Indonesian Rural Areas
Teaching English in rural areas presents another hurdle for English language teachers. Furthermore, this present study attempted to analyze the challenges of teaching English for young learners in Indonesian rural area. This study employed qualitative methodology and case study was used as a research design. Then, in this study, purposive sampling was adopted to select the participants. Furthermore, three elementary English teachers were participated as the participants in this study. In addition, the data were collected through semi-structured interview. Then, to validate the data this study used credibility, transferability, dependability, and confirmability. Furthermore, thematic analysis was used to analyse the data. This study discovered that there were five main challenges of teaching English in rural areas which were the status of English in curriculum, lack of qualified English teacher, insufficient of educational facilities, students’ negative attitudes toward English and students’ socio-economic background. Additionnally, the implications of this study was provided valuable information for further research and can be helpful to school's stakeholders to increase the quality of the education in the English language learning process, especially at the primary level of education.
A B S T R A C T
Teaching English in rural areas presents another hurdle for English language teachers.Furthermore, this present study attempted to analyze the challenges of teaching English for young learners in Indonesian rural area.This study employed qualitative methodology and case study was used as a research design.Then, in this study, purposive sampling was adopted to select the participants.Furthermore, three elementary English teachers were participated as the participants in this study.In addition, the data were collected through semi-structured interview.Then, to validate the data this study used credibility, transferability, dependability, and confirmability.Furthermore, thematic analysis was used to analyse the data.This study discovered that there were five main challenges of teaching English in rural areas which were the status of English in curriculum, lack of qualified English teacher, insufficient of educational facilities, students' negative attitudes toward English and students' socio-economic background.Additionnally, the implications of this study was provided valuable information for further research and can be helpful to school's stakeholders to increase the quality of the education in the English language learning process, especially at the primary level of education.
INTRODUCTION
In this era of globalization, the English language is now extensively spoken throughout the world.The English language becomes the most spoken language around the world with 1,3 billion English users (Crystal, 2003;Nazaruddin, 2017) (Rao, 2019).In Indonesia, English is primarily regarded as a foreign language, and it is rarely used in everyday communication.This prominence of English has influenced the status of the English subject in curriculum, for example the status of the English subject at the primary school level.English is regarded as one of the most hard and toughest languages to learn.Lack of English exposure is one impeding factor in attaining English among ESL or EFL students (Sikki et al., 2013) (Halik & Nusrath, 2020).Then, students with lower English proficiency level tend to have negative attitudes towards English language learning.Hence, it is difficult for English language teachers to teach English in English as Fitri Nur Laila / Challenges of Teaching English for Elementary School Student in Indonesian Rural Areas second language (ESL) and English as foreign language (EFL) context to the students (Jalaluddin & Jazadi, 2020;Mollaei & Riasati, 2013).Apart from the forementioned issues, there are myriad obstacles and difficulties to be taken into consideration for English language teacher when teaching English in ESL or EFL contexts.
The teaching of English to young learners brings with it its own variety of issues.In the context of English as a foreign language (EFL), a lack of training and professional development for teachers is one of the difficulties that can arise when teaching English to young students (Copland et al., 2014;Mijena, 2014;Prihatin et al., 2021).Additionally, as a result of this difficulty, there may be a shortage of teachers who are knowledgeable and skilled.Unmastery on the part of teachers of teaching methods, strategies, and techniques is yet another problem that has surfaced as a direct result of inadequate training and professional development for educators.Another problem that arises while teaching English to younger students in an EFL setting is a scarcity of resources and instructional material (Listyariani et al., 2018;Saleh & Ahmed, 2019).The lack of learning activities is therefore caused by the emergence of monotonous activities (ibid).Challenges associated with maintaining order in the classroom (Pertiwi et al., 2020;Ramadhani & Syamsul, 2017).In addition, a lack of exposure to the English language is another obstacle confronted while teaching English to young students (Chowdhury & Shaila, 2011;Mijena, 2014;Nikmah, 2018).
There are myriad of empirical studies that have tried to examine the issues in teaching English to young learners in general or local context (Camlibel-Acar, 2016;Copland et al., 2014;Listyariani et al., 2018;Mijena, 2014;Pertiwi et al., 2020;Prihatin et al., 2021).(Camlibel-Acar, 2016;Copland & Garton, 2014;Listyariani et al., 2018) Lack of qualified English teachers can be considered as one of the impeding factors in teaching English for young learners (TEYL) and it is caused by inadequate professional training development for teachers .With this condition, it is difficult for English teachers to teach English subjects, determine an appropriate teaching approach in the class (Pertiwi et al., 2020;Prihatin et al., 2021).Then, lack of learning resources also seems to be one of the issues that have been faced by the teachers in TEYL context.Insufficiency of English exposure in the learning process can be considered a hurdle that can be faced in TEYL especially in teaching English in foreign language context (Mijena, 2014).Furthermore, big class size can be seen as another problem in TEYL context because teachers will have difficulties in managing the classroom (Jazuli & Indrayani, 2018;Ratminingsih et al., 2018).While lack of motivation and discipline issues are the issues in TEYL context that emerge from students' side.
In addition, the teaching and learning of English in rural settings might be impeded by substantial hurdles.Some main issues that can be faced during the English teaching and learning process in rural setting such as, insufficient educational facilities students' negative perceptions regarding English language learning, lack of parental support and their socioeconomic background (Febriana et al., 2018;Ler, 2012;Shan & Abdul Aziz, 2022).Then limited learning sources can be one of obstacles in teaching English in rural setting.All of these limitations and obstacles have made it tough and challenging for English Language teachers to teach English to pupils in rural areas.However, small number of research has investigated the issues and challenges of TEYL in rural areas.Study conducted by previous study in Bangladesh examined the hurdles in TEYL at the Primary level of rural area in Bangladesh (Milon, 2014).Previous study found the majority of students in rural schools are weak in English cause their low proficiency level, scarcity of professional English teachers, lack of teachers professional training development, lack of mastery of methods and materials of teaching English, limited time allocation, classroom size, lack of pedagogical knowledge (Halik & Nusrath, 2020;Matenda et al., 2020).
Then previous study intended to investigate the barrier rural Malaysian preschool instructors have in teaching and mastering English abilities (Masturi et al., 2022).This study discovered that lack of English exposure, classroom management and lack of mastery teaching method and strategies, low level of English skills, and lack of parental relation can be identified as a challenge in teaching English at preschool level.Furthermore, other researcher discovered a number of factors that impede English language instruction in rural areas, including inadequate parental involvement, an inadequate educational materials in schools, both students' and parents' negative attitude toward English learning, a lack of motivation and proper instruction, a lack of enthusiasm in learning English, a poor learning atmosphere, a poor social background, and a lower level of basic English knowledge (Halik & Nusrath, 2020).Previous research also looked at the difficulties faced by generalist teachers of English in rural Mexican schools (Izquierdo et al., 2021).This study discovered that generalist teachers experience challenges as a result of lack professionalization of the English language, which affects their ability to teach students in the language.
There are few studies that have been examined teaching challenges in Indonesian rural area.The study by previous study examined into challenges faced by English teachers in remote areas (Harlina & Nur Yusuf, 2020).This study revealed that low student motivation in learning the English language, a lack of parental support, and poor teacher quality are some of the factors that make it difficult to teach English in rural areas.While other study investigated on difficulties with online teaching in rural areas during the Covid-19 outbreak (Agung & Surtikanti, 2020).According to this study, English teachers in rural areas face challenges such as a lack of learning facilities, a lack of student motivation, a lack of parental supervision, and insufficient teacher training while attempting to teach English online.
However, scarce research has investigated challenges in teaching English for young learners in rural in rural area.Previous study investigated difficulties toward teaching English in rustic areas in Baureno, East Jave, Indonesia (Khulel & Wibowo, 2021).Then, this study discovered 3 emerging challenges that elementary teachers confronted.First, Socio-economic condition.This condition emerged due to socioeconomic factors for instance parentals' income, background of education, and occupation.This condition had been influencing on students' motivation in learning English.Another study conducted, Indonesia who depicted that sociocultural issues, students' lack of prior knowledge, students' lack of enthusiasm, schools' lack of resources, and teachers' lack of training on their professionalism were identified as the challenges teachers in rural settings face when attempting to teach English (Syahputra, 2022).
Based on those problem and descriptions from previous empirical research, it showed that there was scarce studies have investigated the challenges in teaching young learners in Indonesian rural area.Furthermore, such studies in investigating challenges in teaching young learner in Indonesian context are still necessary.Therefore, this research aims to analyze the challenges of teaching English to young learners in rural area of Semarang Regency, Central Java, Indonesia.
METHOD
In attempt to answer the research objective, this study employed qualitative research methodology.By implementing qualitative research, it provided in-depth understanding of real-world issues by taking into consideration the natural contexts in which individuals or groups function (Haven et al., 2019;Korstjens & Moser, 2017).Furthermore, case study as part of qualitative research methodology was adopted as a research design in this study.The implementation of case study assisted in providing indept information about the phenomenon in this study (Frees & Klugman, 2013;Yin, 2018).This research conducted in several Elementary schools in rural area in Semarang Regency, Central Java, Indonesia.The researcher decided to conduct this study in Elementary schools since the focuses of this study is to investigate the challenges that emerge of teaching English for young learners in rural area.Furthermore, three elementary English teachers in rural area became the participants of this study since the focus of this study is to examine the issues that teachers face of teaching English in Elementary schools at rural area.Furthermore, purposive sampling was utilized to select the participants and there were some criteria, which were: 1.The participants are elementary English teachers in rural areas.2. The participants have minimal 1 year teaching experience.Table 1 presented demographic information about the participant.Semi structured interview was used as data collection instrument in this study.The participant asked to share their experiences in teaching English for young learners in rural area through semistructured interview.The interview took around 10-15 minutes for each participant and the participants could use either English or Bahasa Indonesia.Also, all the interview processes were documented in form of audio format.There 2 primary questions that raised to the participants were as follows: Regarding participant demographic data, the first inquiry.The second concern was the difficulty of teaching young learner in Indonesian rural areas.Also, credibility, transferability, dependability, and confirmability were used to validate the qualitative data (Moon et al., 2016;Shenton, 2004).The credibility was achieved through regular debriefing between the researchers and supervisors.Then, transferability was attained through providing clear information related to setting and subject of the data.Also, to strengthen dependability this study, all interview processes were documented.In addition, confirmability was achieved through the returning process of the transcripts of the interview of the research subjects for confirmation before being forwarded to the translation process.Then, the results of the interview were transcribed, observed, labelled, organized based on theme and further analysed.Furthermore, thematic analysis was utilized to analyse the collected data, where emerging theme would be identified, categorized, and discussed.
Fitri Nur Laila / Challenges of Teaching English for Elementary School Student in Indonesian Rural Areas
The status of English in Curriculum
This study identified the position of English in curriculum one of difficulties that confronted by English teachers in rural area.As pointed by Participant 1 that the fact that "English has become a local content lesson and is no longer the primary subject that must be taught to elementary school students is, in my opinion, a setback and lack of attention from educational administrator.Considering that English is crucial for children's future access to richer knowledge and exploration of a larger universe.Obviously, pupils must begin learning English at a young age, given that they are in primary school."(Participant 1) One of the participants, participant 1, held the view that the status of English as a local subject is a setback, as it causes some schools to pay less attention to it.Also, according to Participant 2, the fact that "English is now considered a local content subject has influenced the amount of that is allocated to the study of the English language.Even though English has local content status and teaching is scheduled once a week, in practice, sometimes I teach English once every 2 weeks or 3 weeks.Because there are some local contents being taught, sometimes I change the English subject for other material."(Participant 2) The status of English as a local content subject in the curriculum, particularly at the basic level, has been impacted by changes in educational policy.Additionally, it has impacted the time allotted for English language instruction, which the participants perceive as a drawback since they believe it is insufficient for the process of teaching and learning English.Additionally, the participants believed that learning English is crucial for children's future success in accessing more in-depth knowledge and global exploration.
Lack of Qualified English Teachers
Other issues confronted by English teachers when teaching English in rural area related with lack of certified English teachers.As stated by Participant 1, that "When teaching English to students, I usually explain the material first and then after that I give them assignments such as ask them to do the worksheets.However, I have difficulties in choosing the appropriate strategies when it comes to teach English skills to the students.This difficulty occurs because I do not come from English education background, therefore it is quite difficult to teach English skills especially since there is no English training for teacher like me." (Participant 1) Thus, the lack of proficiency English teachers in rural schools is one of the issues associated with the teaching and learning English.Therefore, it is found that English teachers are teachers who lack a foundational English education as stated by Participant 1.This makes it harder for the teacher to teach English, particularly English skills.
Insufficient Educational Facilities
Sufficient educational facilities are important to support the learning process.H However, some rural schools suffer from inadequate facilities.As noted by Participant 1 "so it's very rare for me to implement such technology or other resources to support the English teaching and learning process.Besides that, if we want to use such as an LCD, we have to borrow it first at the village office, because the school's property is in poor condition and no longer can be used."(Participant 1) The school has facilities, but they were in poor condition, as indicated by Participant 1's statement.Similarly, Participant 2 also confronted the similar issue related with inaccessibility of the internet."Inaccessibility to the internet is one of the obstacles associated with teaching English in distant areas.This makes it tough for me to access the numerous types of intriguing teaching and learning resources that are only available online." (Participant 2) It shows that in rural areas there are not enough educational facilities available to support the teaching and learning process of the English language such as the use of internet to access learning resources that are only available online.Due to the lack of not enough educational facilities.
Students' Negative Attitude in English Language learning
Students' negative attitude toward English language also found as issue in teaching English in rural area.Participant 1 pointed out that "Due to infrequent exposure, students are still unfamiliar with the English language.Therefore, students continue to feel that English is a challenging language that is unnecessary to acquire."(Participant 1) Similarly, Participant 2 pointed out that "Actually, there is no substantial difference between the learning materials provided in villages and cities from a standpoint of instructional materials.However, there is a distinction between how English is received and how it is learned.I do not wish to generalize as an English teacher in the village.This viewpoint is solely from my perspective as a teacher at our school toward the students I teach.There is a perception that English is difficult in rural places, which is sufficient to reduce their enthusiasm to study."(Participant 2).Participant 1 and Participant 2's statements indicated that students' negative attitudes toward the English language stemming from a lack of exposure to the language and their attitude demotivated them to study English.
Students' Socio-Economic Background
Students' socio-economic background also found as challenges in teaching English in rural area.As noted by Participant 2 which is "In rural areas, the majority of students come from socioeconomic backgrounds that are lower, meaning that their parents are less likely to understand the necessity of learning English.This belief is affected to some extent by the low educational background of the parents."(Participant 2) Similarly, Participant 3 noted that "The socioeconomic background of student has a significant impact on their English-learning abilities.I observe that parents from the medium and upper classes realize the significance of English more and are also sufficient from an economic standpoint.For example, paying for an English tutor for their children.Meanwhile, this is opposed by the fact that some parents from lower socioeconomic classes are unaware of the significance of English and are less able to fund their children's education."In rural areas, the majority of students come from socioeconomic backgrounds that are lower, meaning that their parents are less likely to understand the necessity of learning English.Meanwhile, some parents from lower socioeconomic classes are unaware of the significance of English and are less able to fund their children's education.
Discussion
English is still considered a sort of local content in several Indonesian primary schools.According to the findings of this study, the status of English as local content has brought about problems with time management.Participant 1 held the view that the time allotted for teaching the English topic, which is 2 x 30 minutes each per week, is insufficient due to the fact that elementary students will be unable to recall the information that they learned in the prior meeting.As a consequence of this, teachers have to spend the upcoming week going over previous content, which can take a significant amount of time.
Based on the data, it found that as local content subject, the status of English can cause problem in time management.This is in line with finding that one of the challenges that confronted by English language teachers when teaching English in primary level the time management issues emerged and it can be influenced with the status of English subject in curriculum (Khulel & Wibowo, 2021).As a result, English has received minimal consideration from teachers.
It was discovered that one of the most significant obstacles to teaching English in rural areas is the incapability to utilize appropriate teaching techniques, methods, or approaches when teaching English to students in rural area, which is responsible for the inadequate English language proficiency of rural students.Furthermore, this finding in line with findings that lack of experienced teachers became one of issues in teaching English for young learners in rural area (Harlina & Nur Yusuf, 2020;Milon, 2014).Also, this finding supported finding that in rural areas, a shortage of English-proficient teachers can significantly contribute to incapability to utilize appropriate teaching techniques, methods, or approaches (Izquierdo et al., 2021).Also, inadequate professional development for teachers also affects teachers' proficiency in teaching English.Then, it is tough and challenging to teach English to rural children without knowledge and skills related lesson planning and teaching approaches, especially in regard to teaching the four English Language skills (Fawley et al., 2020;Ogunjobi & Akindutire, 2020).
In remote areas, a lack of educational facilities can be one of the obstacles to teaching English.In this study discovered that the absence of facilities to enhance the learning process was one of the obstacles rural English teachers encountered.In addition, this is in the same vein with the findings which found that some rural schools have access to basic facilities and equipment but are unable to fully utilize the facilities due to the difficulty of obtaining the facilities (Mahdum et al., 2019;Zaenafi, 2019).Also, inadequate educational infrastructure is one of the primary obstacles that can be encountered throughout the English teaching and learning process in rural setting and this is supported finding which insufficient educational facilities can be one of obstacle in teaching English in rural area (Febriana et al., 2018).Besides that, it is reported that many facilities and equipment in rural schools have become obsolete, which this finding in the same vein with other study in rural setting some of facilities become defunct (Matenda et al., 2020).Then, the lack of internet access can be identified as a hurdle in teaching and learning in rural area.Also, this might be one of the enormous challenges faced by English Language teachers in remote areas, as it can be difficult to gain access to a variety of engaging teaching and learning materials and this is in the same vein with finding which One of the challenges of teaching in rural regions is the absence of internet access (Atmojo & Nugroho, 2020;Sabiri, 2019).
Another difficulty that found in this study is that TEYL teachers in rural Indonesian schools encountered is dealing with students who have a negative attitude regard the English language.For students Fitri Nur Laila / Challenges of Teaching English for Elementary School Student in Indonesian Rural Areas in rural Indonesian schools, a lack of enthusiasm in learning English presents a challenge for their teachers, who must overcome this impediment in order to give lessons that are easily understood by their students.It indicated that students' attitudes toward English language learning have influenced their motivation to learn English.Thus, it is evident that their unfavourable attitude toward English language study can lower students' motivation, and this can be interpreted as a challenge for English teachers in rural areas (Halik & Nusrath, 2020;Harlina & Nur Yusuf, 2020).In addition, these are in line with previous study who discovered students' negative attitudes is one of primary obstacles that can be encountered throughout the English teaching and learning process in rural settings (Halik & Nusrath, 2020).
Furthermore, lack of exposure to English can be influencing factor on students' negative attitudes toward English learning.Then this is supported some empirical studies' finding of previous study that inadequate English exposure during the learning process might be regarded as a potential obstacle in teaching English to young learners particularly in teaching English in a foreign language situation (Mijena, 2014;Nikmah, 2018).Whereas in rural locations, it can be difficult to get information or entertainment linked to the English language, and there is also a limited amount of English environment.Moreover, students' socio-economic background also plays role in impeding the teaching English process in rural setting as found in this study.Based on the data above, it is indicated that the majority of students in rural areas come from low socioeconomic background, preventing them from receiving adequate educational resources.Students who come from more fortunate backgrounds have an advantage over those who come from less privileged backgrounds because they have access to more facilities or resources.Because of this, students' socioeconomic backgrounds can have a significant impact not just on their ability to obtain a good education but also on their motivation and attitude toward learning.This aligns with the findings who showed that socio-economic background became one of the problems in teaching English in rural settings (Khulel & Wibowo, 2021).This difficulty occurred as a result of socio-economic background factors such as parental income and educational background.
Based on the result of discussion, it shows that there several challenges in teaching English for young learners in rural setting such as the status of English as local content subject which cause time management problem, lack of qualified English teachers, lack of educational facilities and students' negative attitude toward English teaching and learning.Then, these challenges that faced by the English teachers when teaching in rural area can impede the teaching and learning process.The implication of this study provides valuable information for further research.Also, the finding of this study is expected to provides valuable insight and be helpful for school's stakeholders to increase the quality of the education in English language learning process especially in primary level of education.After reaching a conclusion based on the study's findings, there were certain recommendations that were intended to be given to those who teach English and to other researchers.It is advised that teachers of English, particularly those who teach English to young learners, make use of an appropriate method that might be employed by teachers to overcome the issues of TEYL in rural settings.Then, for other scholars, it is indicated that it is worthwhile to conduct further inquiry in this topic.
CONCLUSION
This study aims to investigate the challenges of teaching English to young learners in rural area of Semarang Regency, Central Java, Indonesia.This study discovered that there are five main challenges that confronted by English teacher in rural area.First, it is the status of English in curriculum which can be a factor that cause time management issue.The second challenges that depicted in this study is lack of qualified English teachers and this issue is affected by the insufficient of teacher professional development.Then, insufficient educational facilities discovered as hurdles that English teacher confronted when teaching English in rural setting.Then, negative attitudes about English caused by a lack of motivation and exposure constitute an additional challenge for rural primary school teachers.In addition, students' socioeconomic background can be indicated as issues in teaching English in rural setting. | 6,134.8 | 2023-10-06T00:00:00.000 | [
"Education",
"Linguistics"
] |
Analysis of a Possible Battery Chargertopology
The basic converter is shown in Fig. 1 [1] and consists of an inductor L, an inductor LB , a capacitor C, an active switch S and a passive switch D. The positive pole of the output voltage is the cathode of the diode. The battery is modelled by a voltage source UB . Due to the additional capacitor C and the additional inductor L, the classical converter, which is only a step-down converter, is transferred into a step-up-down converter. The mean value of the output voltage U2 , with the duty ratio of the active switch d (the on-time of the active switch referred to the switching period) and neglecting the losses is
Introduction
The basic converter is shown in Fig. 1 [1] and consists of an inductor L, an inductor L B , a capacitor C, an active switch S and a passive switch D. The positive pole of the output voltage is the cathode of the diode. The battery is modelled by a voltage source U B . Due to the additional capacitor C and the additional inductor L, the classical converter, which is only a step-down converter, is transferred into a step-up-down converter. The mean value of the output voltage U 2 , with the duty ratio of the active switch d (the on-time of the active switch referred to the switching period) and neglecting the losses is . (1) The converter is a step-up step-down converter, which also enables us to drive a DC motor in one direction (one-quadrant drive), controlled braking is not possible with this circuit. For other concepts, control applications, and detail about the electrical machine refer to the references [2] till [7].
Basic analysis
The basic analysis has to be done with idealized components (that means no parasitic resistors, no switching losses) and for the continuous mode in steady (stationary) state. A good way to start is to consider the voltage across the inductors.
Since for the stationary case the absolute values of the voltagetime-areas of the inductors have to be equal (the voltage across the inductor has to be zero in the average), we can easily draw the shapes according to Figs. 2 and 3. (Here the capacitor is assumed to be so large that the voltage can be regarded constant during a pulse period.) Figure 2 shows the current through and the voltage across inductor L. The current rate of rise of course depends on the values of L and U 1 . Figure 3 shows the current through and the voltage across inductor L B . Based on the equality of the voltage-time-areas in the stationary case, it is easy to give the transformation relationship for the battery voltage U B as a function of the input voltage U 1 and the duty ratio d. From Fig. 3 we get (2) and from Fig. 2 . (3) After a few steps we get the voltage transformation law Figure 4 shows the voltage transformation factor (the ratio of mean value of the output voltage of the converter to input voltage) which is dependant on the duty cycle d. The converter is a step-up-down converter.
In the same manner a relationship for the currents through the inductors can be derived based on the equality of the absolute values of the current-time-areas of the capacitor during on-and offtimes of the active switch. From Fig. 5, which shows the current through the capacitor C, we get , . Fig. 1 one can immediately see that the current through the semiconductor devices is the sum of the inductor currents i L and i LB (through the switch S during T on ϭ d и T through the diode D during T off ϭ (1 Ϫ d) и T. Therefore, the current maximum values for the semiconductor devices are approximately .
(7) Figure 6 shows the current through the active switch and Fig. 7 through the passive switch (diode).
From Fig. 6 one can see that the correct current through the switch S is (8) The rms values are important for the calculation of the onstate losses. One can calculate the exact rms value for switch S according to . (9) After a few steps we get the exact rms value for the switch S to . (10) The current rms value in the middle for the transistor S with the current mean value of the inductors is approximately . (11) The conduction losses for the switch S with R DS(on) as on-resistance of the transistor S can be calculated according to . (12) After a few steps we get for the conduction losses of the active switch S . The conduction losses of transistor S with (11) are approximately , which could be supplemented by the current ripple by using the exact equation (10).
From Fig. 7 one can see that the current through the passive switch D is .
(15) Therefore, one can calculate the exact rms value for diode D according to . (16) After a few steps we get the exact rms value for the diode .
Fig. 7 Voltage across and current through the diode D
The current rms value in the middle for the diode D with the current mean value of the inductors is approximately . ( The conduction losses for the diode D with R D as differential resistor of the diode and U VD as the fixed forward voltage can be calculated according to . ( After a few steps we get the conduction losses for the passive switch D . (20) The conduction losses of diode D with (18) are approximately , which omits the current ripple.
So one can calculate the total conduction losses according to .
We can calculate a rough approximation of the total conduction losses according to (22) and get . (23) One can calculate the efficiency of the converter (taking only the semiconductor conduction losses into account) according to It is therefore easy to calculate a rough approximation of the efficiency of the converter for continuous operation mode and we get
Converter model
The state variables are the inductor current i L , the charging inductor current i LB , and the capacitor voltage u C . The input variables are the input voltage u 1 and the voltage u B . The fixed forward voltage of the diode (the diode is modelled as a fixed forward voltage U VD and an additional voltage drop depending on the differential resistor of the diode R D ) is included as an additional vector. The parasitic resistances are the on-resistance of the active switch R S , the series resistance of the converter coil R L , the series resistance of the charging coil R LB , the series resistor of the capacitor R C , and the differential resistor of the diode R D .
In continuous inductor current mode there are two states. In state one the active switch is turned on and the passive switch is turned off. Figure 8 shows this switching state one.
The state space equations are now In state two the active switch is turned off and the passive switch is turned on. Figure 9 shows this switching state two.
The describing equations are leading to the state space description (33) When using two active switches in push-pull mode (a second active switch instead of the diode), the diode is shunted and U VD can be set to zero. R D is then the on-resistance of the second switch. Combining the two systems by the state-space averaging method leads to a model, which describes the converter in the mean. We can combine these two sets of equations, providing that the system time constants are large compared to the switching period.
Weighed by the duty ratio, the combination of the two sets yields to (34) The dynamic behavior of the idealized converter is described correctly in the average by the given system of equations, thus quickly giving us a general view of the dynamic behavior of the converter. The superimposed ripple (which appears very pronounced in the coils) is of no importance for qualifying the dynamic behavior. This model is also appropriate as large-signal model, because no limitations with respect to the signal values have been made. .
Linearizing this system around the operating point enables us to calculate transfer functions for constructing Bode plots.
The weighed matrix differential equation representing the dynamic behavior of the converter is a nonlinear one. A linearization is necessary to use the possibilities of the linear control theory. We can calculate the linearized small signal model of the converter with capital letters for the operating point values and small letters for the disturbance around the operating point . (35) The following equations can be specified for the stationary (working point) values . (38) The same results as derived from the section basic analysis can be achieved in the ideal case (assuming ideal devices with no losses) One can calculate the linearized small signal model of the converter according to (eq. 42) Fig. 10 Gate-source voltage (blue) of the active switch by a duty cycle of 50%, measurement signal of the LEM current sensor (turquoise) and current through the inductor LB measured by a current probe (green) which is measured at the active switch (blue), the measurement signal of a LEM current sensor (turquoise and triangular shaped) and the current through the inductor L B (green and triangular shaped).
Two-point control
The two-point control is another control method for controlling the output current of the battery charger. In comparison with the pulse width modulation, the two-point control operates with variable periodic time as well as with variable pulse width. The activation and deactivation of the active switch depends on the instantaneous values of the current to be controlled. The active switch has to be turned off when the current reaches the upper limit and has to be turned on again when the current reaches the lower limit. The difference between the switching points is called hysteresis. Figure 11 shows the non-inverting Schmitt-trigger. The parallel positive feedback creates the needed hysteresis. The hysteresis voltage is controlled by the proportion between the resistances of R 1 and R 2 . The resistor R 1 can be calculated from resistor R 2 , upper threshold voltage u Tϩ , lower threshold voltage u TϪ , higher supply voltage u satϩ and lower supply voltage u satϪ according to . (43) The inverting input makes the reference point. The reference voltage is obtained from the voltage divider R 3 and R 4 . One can calculate the offset voltage u V according to . (44) The resistor R 4 can be calculated from resistor R 3 , offset voltage u V and supply voltage usat according to . (45)
Results of two-point control
A DC-DC converter with two-point control and current measurement was built for the evaluation of the two-point control in the circuit. Figure 12 shows the concept. The DC-DC converter with parasitic resistances of the components and fixed forward voltage of the diode as well as the two-point control and current measurement are indicated. A so-called "shunt resistor" RS is indicated for the current measurement. A LEM-current sensor in series with the output inductor is used in the real circuit for current sensing. A driver stage is used in order to turn on or block the MOSFET as quickly as possible.
The circuit of the non-inverting Schmitt-trigger is dimensioned so that the upper threshold voltage u Tϩ is and the lower threshold voltage u TϪ is 2.2 V. The offset voltage u V is 2.4 Vin this case. The supply voltage of the LEM-converter and the gate driver is 10 V. The input voltage of the converter can be up to 36 V. Figure 13 shows that the control signal (orange) turns to "high", when the measurement signal of the LEM current sensor (red) reaches the lower threshold and it turns to "low", when the measurement signal of the LEM current sensor reaches the upper threshold. The on-time is 6 μs and the off-time is 17 μs. This corresponds to a frequency of 59 kHz. Figure 14 shows the measurement signal of the LEM current sensor (turquoise), the hysteresis (pink), the current through the inductor L B (green) and the gate-source voltage (blue). From Fig. 10 one can immediately see that the current through the inductor L B (green) is between -2.2 A and 3.3 V. With the two-point control, the current through the inductor L B (green) is limited between 0.2 A to 1.4 A now.
Conclusion
Battery chargers are very important, especially for low and medium voltages and low and medium power (e.g. handhelds, cars [8], solar applications [9]). The here treated converter is similar to the basic step-down converter. Due to a simple LC element which is added, the converter generates now a mean voltage across the diode, which can be higher or lower than the input voltage. This is especially useful when only low input voltages are available. The system can be described as a three order system. A twopoint control with a non-inverting two-point controller is useful to control the output voltage or output current. In this way, the active switch is turned on and off. Therefore, it is possible to keep the load current i LB in a defined range by stopping the rise of the coil current via the active switch. The converter is especially applicable for solar and small fuel cell applications. It can be designed also for medium power applications e.g. for charging car batteries out of a solar generator. | 3,196.2 | 2013-08-31T00:00:00.000 | [
"Engineering"
] |
Comparison of the Structural Performance of Ballasted and Non-Ballasted Pavement of the Railway with the Help of Beam Models on Elastic Bed and Finite Element Model
: Today, many countries are opting for ballast less paving of railways using concrete line slabs as an alternative. Ballast paving is particularly prevalent in high-speed lines, urban railways, as well as bridges and tunnels designed for high loads and speeds. The priority lies in ensuring that these lines are paved with proper structural performance, especially considering the mechanics of the line and the choice between ballast pavement and line slabs. This article delves into the structural modeling and analysis process of ballast and slabs for railway lines. Models for both ballast and slab lines under passage have been developed. The study investigates the train wheel load and the impact of bed conditions. Theoretical solutions, including the beam model on an elastic bed for ballast lines and the two-beam model for line slabs, have been presented. Multiple finite element models were explored using ABAQUS software, confirming the accuracy of the theoretical results with acceptable precision. This work establishes a suitable model for calculating the effects of vertical force on the line pavement structure and provides applications for comparing design performance between ballast and slab lines. The results highlight the optimal performance of the modeling and the superiority of the line slabs.
Introduction
Although several centuries have passed since the birth of the railway, the anticipated growth in this industry has not materialized.Numerous factors, including world wars, the evolution of the aviation industry, and various political influences, have played pivotal roles in shaping the industry's progress.These factors have significantly impacted its scientific and theoretical development (Guan et al., 2007;Törnquist, 2007).The absence of regulations and standardized guidelines for designing railway elements and components, coupled with a simplistic and non-specialized perspective in certain scientific fields, has hindered the advancement of this industry.The lack of synchronized progress with other sciences has resulted in the neglect of analysis and design for these structures.Considering the priorities of the country's rail transport vision, the development of customer-oriented transportation through scheduled cargo trains, combined transportation, and high-speed passenger trains
Suggested Citation
stands out as a crucial focus (Button et al., 2004;Crozet, 2004;Donato et al., 2004).Achieving this vision necessitates the establishment of a solid foundation for high-performance structured pavements.Without creating a robust framework to support the use of highperformance structured pavements, the objectives of the country's rail transport vision will remain unattainable.
Introduction of Railway Lines
The primary function of a railway line is to establish a durable and level surface for the smooth passage of trains and rail fleets.It has the task of distributing and diminishing the substantial stresses generated by the wheel's passage to a manageable level for various components of the road surface, including the bed.Therefore, the railway line and pavement are evaluated in terms of structural adequacy and performance.The structure of railway lines can be generally classified into two categories: ballasted and non-ballasted.The ballasted pavement of the railway comprises a set of rails, sleepers, and their connecting devices, all placed on the top layer.To enhance the structure's performance, it is constructed on a layer of carefully shaped ballast and a compact substrate.This configuration facilitates the transfer and distribution of the load, operating on the principle of stress dispersion layer by layer (Ishikawa et al., 2014;Koike et al., 2014;Woodward et al., 2014).
Since the beginning of the 20th century, with the increase in operating speed and axle loads on railway lines, the Yalas T line ceased to meet the structural requirements of some lines.Consequently, there was a serious consideration to replace it with high-durability materials, and this plan gained momentum in the years following 1990.Despite the extensive advantages and competition in terms of life cycle costs, nowadays, overhead lines are not only used for high-speed lines but also for other rail systems, including Intercity and intracity systems, in various locations such as technical buildings, bridges, and tunnels (Chai et al., 2018;Kaewunruen et al., 2019).Line slabs, also known as ballast less pavements, are connected by rail binding, forming a continuous concrete slab placed under the fixed base on the prepared bed.Often, a continuous rail is welded within them.These structures are categorized into different types, including in-situ and prefabricated.In this research, various parameters of both ballasted and non-ballasted pavement structures are studied to compare their performance (Kistanov, 2017;X. Yang et al., 2019).
Comparison of the Structure of Ballasted and Non-Ballasted Railway Pavements
Creating a solid bed to withstand the heavy loads of the rail fleet is a necessity in railway lines.This not only supports the heavy loads but also fulfills other functions of the line, such as stabilizing width and geometrical specifications and controlling vibrations.In comparison with ballasted pavement, some structural advantages of lines without ballast are as follows: -High resistance against incoming loads and more uniform distribution of stress.
-Controlling the change of shapes and making them uniform, -Absence of upward pulling force on the line when the high-speed train passes.
-Better ability to bear the longitudinal and transverse forces of the line, -Better lateral stability and reduced risk of buckling.
-Higher shear resistance of concrete slabs in lateral movement against lateral acceleration in screws compared to ballast bed.
-Increasing the life span of the line (weaknesses in the structure's operating criteria, especially cracks and fatigue are less visible), -Reducing plastic damage that changes the geometry of the line, -Less construction depth, reducing the height and weight of the structure, and -Reaching higher speeds and improving safety in walking and moving.
Construction, repair, and maintenance: Approximately 60 percent of maintenance activities are directly influenced by the use of high-quality materials, highlighting the impact of these materials in the repair and maintenance processes.Research indicates that the cost and time required for repair and maintenance in ballast less lines are significantly lower than those in ballasted pavements.Technically and economically conducted studies in passenger lines support this conclusion (Serdelová & Vičan, 2015;Y. Yang, Wu, Wu, Jiang, et al., 2015;Y. Yang, Wu, Wu, Zhang, et al., 2015).The capital cost for laying is nearly three times that of a ballast bed.On the other hand, considering the costs over the operational period and the reduced need for maintenance, the difference in investment cost can be recouped in approximately 10 years.Proper distribution of load to minimize pressure on the bed is illustrated in Figure ( 1).The figure depicts the base and substrate in two types of pavements, with and without ballast.Following the principle of stress distribution for a constant axial load in both systems, the stress decreases through various pavement layers until reaching the tolerable limit for each layer, concluding at the substrate layer.A comparison of stress values in each layer allows us to discern the superior performance of the line slab.Changes in the formed shapes can be compared using It is evident that the change in the shape of the bindings in the slab lines is more pronounced than in the ballast lines.This emphasizes the necessity of designing specific bindings for this type of lines to minimize shape alterations (Galvín et al., 2010).To mitigate changes in the shape of the ballast lines and enhance rigidity in such cases, solutions have been implemented.Providing and fixing the geometric parameters of the line in line slab configurations, along with the increase in operating speed, can be significant.Durability during the operational period: Examination of a sample of continuous armored lines, after handling a tonnage exceeding 750 million since their construction, indicates that these lines remained free of cracks throughout the service period (Guerrieri et al., 2012).
Material Selection
The innovative approach to analyzing and designing the slab structure of railway lines necessitates a comprehensive examination of the structure and various models as crucial tools in shaping the theory of analysis.In this research, theoretical analyses and numerous computer simulations have been investigated and planned for comparison based on them.Achievement of suitability for the performance of the structure of lines without concrete ballast and with ballast was attained (Paiva et al., 2015;Sol-Sánchez et al., 2015).However, specific information, such as features and characteristics of geometry, materials, loading, and design parameters, was selected and utilized (Wang et al., 2015).
The model of the beam on the elastic bed for ballast pavement, considering values pertinent to its structure, especially for the intermittent support module of the bed, and the two-beam model on the elastic bed, is regarded as the slab model for railway lines.The theory of these two models was then solved.While the use of beam theory on an elastic platform has been considered for ballast pavement for many years, the use of the two-beam theory on an elastic bed is a novel aspect.To validate the results of these theories, several finite element models were created using ABAQUS software, and the results were confirmed with appropriate accuracy.
To account for the dynamic effects of train movement and apply quasi-static loads (Pd) for utilizing legal relations in pavement design, the dimensionless dynamic impact coefficient is consistently greater than one for the vertical wheel load (Ps).Relation (1) is employed as the necessary equation in the research process.
In this relationship, V represents the speed, and D stands for the diameter of the wagon or locomotive.To calculate the dynamic impact coefficient, various regulations and research results provide relationships, primarily of an experimental nature.Each of these relationships takes into account the effect of various parameters.
Model of the Structure of Railway Lines
The primary task of line models is to establish the relationship between the components of the superstructures and substructures of the line, ensuring accurate interactions.The complexity lies in determining the impact of traffic loads on the stresses, strains, and deformations of the system.Generally, railway lines are subjected to loads applied in three directions: vertical, lateral, and longitudinal.However, some models only consider the vertical components of the load.
Various models, such as the beam on continuous elastic bed, beam on discrete and separate support cases, two-beam model on a continuous elastic bed, two-dimensional beam and plate models, and three-dimensional finite element models, can be considered (Morelli et al., 2014;Podworna, 2014).Kamyab Moghaddam has conducted research on recent developments in ballast less versus ballasted tracks, focusing on high-speed and urban lines.His investigation unveiled that flexible fastening systems can offer reliable track support and present a favorable life cycle cost for a slab track system.The outcomes of the direct fixation fastening model, concerning lateral deflection and longitudinal resistance of fasteners in this study, were compared against the criteria reviewed by Kamyab Moghaddam, confirming the validity of the model (Moghaddam, 2017).
One of the crucial components of the railway line is the rail, which is modeled as a continuous beam structure on an elastic support.The dynamic displacement of the rail under passing loads is primarily based on two theories: the first being Euler-Bernoulli's theory.However, this theory falls short in accounting for the effects of shear deformation and torsional inertia of the beam under vibrations.For this purpose, Tymoshenko's theory is employed.
In recent years, the continuous beam model on elastic supports has been applied to ballasted pavements.A more detailed model, as depicted in Figure (3), is introduced, encompassing two layers of rails and sleepers, as well as the top and bottom layers.In this model, the rail takes the form of continuous beams on separate supports, while the sleepers are represented as masses.The concentric binders and ballast are modeled as spring and damper elements, and the bed is considered rigid (Cheng & Xue, 2014).
The Parameters and Limits of the Required Values of the Research
In the research process, specific values are necessary, and the limits and specifications of these parameters are outlined in Table 3.The predominant use of rail in railway lines, particularly in the country, can be traced back to the UIC60 section.Due to the unique shape of this section and the limitations of software modeling, it has been represented in the research process as section I, as illustrated in Figure ( 5) within the software.The error associated with this approximation is negligible, given the significance of the moment of inertia of the rail in the analysis.The most common analysis model for the railway structure is the beam on the elastic bed, wherein torsion and shear effects are considered negligible, and the Euler-Bernoulli beam theory is typically chosen.As an illustration, a theoretical example of a ballast line with a beam designed on an elastic base is compared with software modeling, and the results are presented (Gil & Im, 2014;Remennikov & Kaewunruen, 2014).
By utilizing the relations and values from the theoretical solution and taking into account the effect of only one spur wheel, the maximum displacement is calculated.(The impact of other wheels of the train can be calculated within the range of interaction, following the principle of superposition.)Understanding the change in positions allows for the determination of other parameters related to the structure's response.
The governing differential equation for this model is described in relation ( 2): Notably, the structural design of the rail and its interaction force with the pavement significantly influence the design results (Hudson et al., 2016).
Two-Beam Model on Elastic Bed
The dynamic conditions governing the slab structure of the lines are highly similar to those of ballast pavement and railway bridges.A comprehensive investigation was conducted on a wide range of sources, leading to the development of a specific structural dynamic model for pavement without ballast.Equations ( 3) and ( 4) in Raitah present the differential equations that govern this model: The variables y1 and y2 represent the displacement of the rail and slab, while u1 and u2 denote the elastic modulus of the intermittent support for the rail and slab, respectively.
Conclusion
In a broader perspective, the following results can be highlighted: the performance of the slab structure of the lines surpasses that of ballast pavement in handling incoming loads and addressing structural weaknesses.This superiority is attributed to the system's inherent high rigidity, complemented by targeted measures.According to the discussed models, the bed modulus and its accurate determination, along with dynamic axial load values, emerge as the most influential parameters.The impact of increasing the axial load on slab lines induces more substantial differences in displacements in the rail and the slab, particularly concerning the maximum anchor.The theories of beams on an elastic bed for ballast pavement and the theory of two beams on an elastic bed for slab lines can be employed with sufficient accuracy for modeling these structures.Investigating the behavior of slab lines using the theory of two beams on an elastic bed allows for a thorough examination, enabling the establishment of appropriate design limits derived from this theory.The exact solutions of the differential equations governing the discussed beam models, with results noteworthy enough to mention, can be employed in a codified and algorithmic manner, contributing to legal considerations and the development of computing software.Additionally, in this research, automatic calculations have been executed using Excel software with the aforementioned algorithmic approach, enhancing the practical applicability of the findings.
Above all, the discussion should explain how your research has moved the body of scientific knowledge forward.
Figure 3 .
Figure 3. Beam Model on Elastic Bed and Beam on Separate Supports for Ballast Pavement Model
Figure 4 .
Figure 4. Cross-Section of Models (a) Ballast Pavement (b) Slab of Rahda Line (c) Slab of Astaf Line (d) Slab of Floating Line
Figure
Figure 5. UIC60 Rail Section.Section I, Equivalent Shape for UIC60 Rail in Section Builder Software
Figure 6 .
Figure 6.Diagram of Displacement and Maximum Rail Anchor in Beam Model on Elastic Bed
Figure 8 .
Figure 8. Diagram of the Study of the Effect of the Modulus of the Support of the Bed on the Displacement of the Rail
Figure 9 .
Figure 9.The Effect of the Arrangement of the Wheels of a Passing Wagon on the Rail Displacement Diagram in ABAQUS Additionally, Es and Ec represent the elasticity modulus of the rail and the concrete slab, and Ic and Is are the moments of inertia of the rail and the concrete slab(D'Angelo et al., 2016;Lamas- Lopez et al., 2016).This model aims to solve the theory and present formulas or calculations through software.The results obtained with the help of various component models have certain limitations.To facilitate a more accurate comparison, and considering the specific tasks and type of line intended for the slab pavement, the wheel load in the calculations and disturbances has been calculated and set equal to 195.83.
Figure 10 .
Figure 10.Two-Beam Model on Elastic Bed for the General Model of Slab Lines
Figure 12 .
Figure 12.The Diagram for Studying the Effect of Axial Load on Displacement and Maximum Anchorage of Rail and Slab
Figure
Figure 13.Comparison Diagram of Rail Displacement in Two Beam Models and Two Beams on Elastic Bed
Table 1 . Comparison of Displacement in Ballast Lines and Line Slab (Budisa) Changing the shape in the ballast line (mm)
Table (1) for both binding and line slab configurations.The table provides valuable data for both types of ballast lines and line slabs. | 3,950.8 | 2023-11-01T00:00:00.000 | [
"Engineering"
] |
Impact of Investment Case on Equitable Access on Maternal and Child Health Services in Nepal: Quasi-experimental study
Different areas of disparities remain a concern in developing countries like Nepal regarding the utilization of maternal, neonatal and child health services like disparities in education, income, administrative regions, ethnic groups, province-level etc. In order to support equitable outcomes for Maternal, Neonatal and Child Health (MNCH) and to scale-up quality services, an Investment case was launched by developmental partners in the Asia-Pacic region. Investment Case (IC) at the local level aims to develop a coherent plan with local level development plans, which is equitable and responsive to the bottlenecks and the local needs. The study aims to identify the factors affecting equitable access to maternal health services in Nepal. and (difference difference of 14.4), education:15.5, primary education: 14.4 and secondary or higher 4.3), and middle wealth husbands’ (no education and primary education 17.0). There were moderate to little improvements in other variables in comparison districts. Only the difference in service or business category was statistically signicant with at least 4 ANC with p-value 0.005.
Abstract
Background Different areas of disparities remain a concern in developing countries like Nepal regarding the utilization of maternal, neonatal and child health services like disparities in education, income, administrative regions, ethnic groups, province-level etc. In order to support equitable outcomes for Maternal, Neonatal and Child Health (MNCH) and to scale-up quality services, an Investment case was launched by developmental partners in the Asia-Paci c region. Investment Case (IC) at the local level aims to develop a coherent plan with local level development plans, which is equitable and responsive to the bottlenecks and the local needs. The study aims to identify the factors affecting equitable access to maternal health services in Nepal.
Methods
The study focuses on the impact of the intervention package developed by applying the investment case (IC) approach in maternal and child health services in Nepal introduced in 2011. Complex sample analysis was carried out to adjust the weight of the sample. Cross tabulation with Con dence Interval (CI) was used to generate weighted disaggregated data. Difference in Difference (DiD) analysis was carried out using a linear regression model. Finally, multivariate linear regression was carried out to gure out the effect of the intervention.
Results
Based on the data, the improvements before and after the intervention were calculated in both the intervention and comparison districts; no variables showed a signi cant association. Changes were similar for intervention and comparison areas: four antenatal care seeking (DiD=-4.8, p = 0.547 CI= -0.041-0.022), Skill Birth Attendance (SBA) delivery (DiD = 6.6, p = 0.325, CI= -0.010-0.039). Multivariable regression analysis also did not reveal any signi cant improvement in aggregate outcomes. The intervention did not play a signi cant role in any variables, i.e., four antenatal care seeking (p-value 0.062), SBA delivery (p-value 0.939).
Conclusion
The IC approach is itself a successful approach in most of the developing countries. After the implementation of IC, some of the MNCH indicators like ANC, SBA delivery have shown improvements in the intervention as well as comparison districts but have not shown signi cant with the intervention.
Background
Maternal mortality reduction has also been a globally, SERO regional and national commitment, with a vital role to be played in the Agenda for Sustainable Development. A major target under Sustainable Development Goal no. 3 is to reduce the global maternal mortality ratio to less than 70 per 100,000 live births. Almost all maternal deaths (99%) occur in developing countries (1).
Nepal is ranked 149 out of 187 countries in terms of human development, and 25% of the people live below the poverty line (2). The majority of the poor are women, Dalit, and disadvantaged Janjati (indigenous groups). According to the Central Bureau of Statistics (3), the most disadvantaged are households from the remote hill and mountain areas, as well as the Terai community. Due to the constraints and bottlenecks in the health system, the interventions do not reach the needy people (4). According to the Nepal Demographic and Health Survey, 2016, the proportion of four ANC visits and institutional delivery are not equally distributed across different provinces and geographical areas (5). The government of Nepal (GoN) is committed to bring about tangible changes in the health-sector development process and provide equitable access to quality health care for all people. The aim is to provide an equitable, high-quality health care system for the Nepalese people (6).
To decrease the inequities in health through responsive and accessible services and improved quality health system, the Government of Nepal has initiated engaging local-level stakeholders in planning and implementing programmes, as envisioned in Nepal Health Sector programme-Implementation Plan II (NHSP-IP II) (7). In order to support equitable outcomes for Maternal, Neonatal and Child Health and to scale-up quality services, the Investment case approach was launched by developmental partners in the Asia-Paci c region (8). Investment Case (IC) at the local level aims to develop a coherent plan with local level development plans, which is equitable and responsive to the bottlenecks and local needs (9).
The Investment Case (IC) is a strategic and evidence-based problem-solving approach to support better maternal, neonatal, and child healthcare planning and budgeting. It highlights the immediate need to accelerate progress towards health-related MDGs 4 and 5 by describing health issues being faced by a country in the area of MNCH. IC analysis is based on the 'Tanahashi model', bottlenecks framework, the idea of ve different determinants to measure the capacity and intervention to produce the desired quality of service i.e., effective coverage (9). Tanahashi's model is the model designed in 1978 in order to identify the gaps in quality and effectiveness in service delivery. The gap refers to the proportion of the target population that does not receive effective coverage (4). The ve determinants of the Tanahashi model are (i) availability (ii) accessibility (iii) acceptability (iv) contact and (v) effectiveness (10). It is designed to identify current barriers to better coverage and performance and to work out the costs and impacts of potential interventions to improve performance and overall equity (9). Implementation of the IC approach involves ve steps, which start from advocacy with the government, selection of interventions (tracers), data mapping and collection, data validation, bottlenecks analysis, and strategies development (11).
In order to achieve health-related MDG goals 4 and 5, in partnership with UNICEF Nepal, Nepal Government intervened Investment case approach in 19 districts of Nepal in 2011 in order to address the local constraints to MNCH intervention coverage. IC aims to explore the prevalent constraints in the existing health system of districts that hinder the desired outcomes, especially related to maternal, neonatal, and child health. The districts that had low coverage were chosen in the approach and representative from each ecological region (8). Hence, the study aims to identify the factors affecting equitable access to maternal health services in Nepal.
Study setting and population
Geographically Nepal is divided into three ecological regions-Mountains, hills, and plains. According to the census of 2011, 50 percent of the population resides in the Plain area, 43 percent in the hills and the rest 7 percent in mountain areas (12). In this study, 16 districts were taken as intervention districts in which the investment case approach was used and 24 districts with similar HDI were taken as comparison districts. Both intervention and comparison districts had Plain, hilly and mountainous areas.
Study design
The study used quasi-experimental study design in order to assess the impact of the intervention package developed by applying an investment case (IC) approach in maternal and child health services in Nepal. The study used the Nepal Demographic and Health Survey data for 16 intervention and 24 comparison districts of Nepal to assess the IC approach's impact. The study comprises data from two surveys i.e., NDHS 2011 and 2016. All the methods of both the survey were same.
Data Sources and variables
The study uses the Nepal Demographic and Health Survey to assess the impact of the intervention package developed by applying the investment case (IC) approach in maternal and child health services in Nepal. The Demographic and Health Survey (DHS) is a standardized survey that collects household data for population, health and nutrition. The DHS survey is nationally representative and uses a multi-stage sampling process for collecting data (13). The study comprises data from two surveys i.e., NDHS 2011 and 2016. All the methods of both the survey were same. Considering the intervention duration, data were restricted to the recent three years of the surveys (2009 to 2014) i.e. back to 2009 for 2011 surveys and back to 2014 for 2016 survey for many ANC, at least 4 ANC, Skilled Attended Delivery. The Difference in Difference (DiD) analysis was done to identify any differences between the intervention and comparison groups as a result of changes over time rather than as a consequence of the intervention itself. The linear regression method was used to calculate DiD. In the rst step, screening of the variable was performed where variables or interaction term with p-value ≤ 0.2. Eligible variables based on the cutoff value were further analysed using a multivariate linear regression model using the stepwise method. Before proceeding to the multivariate model, multi-collinearity was assessed for each model. Multi-collinearity was assessed on the basis of the variance in ation factor (VIF) cut-off value of 3. Control variables with p-value ≥ 0.05 were excluded (variables with the highest p-value were removed rst) from the model one by one. The nal resulting model was considered to give the true effects of interventions. The process was repeated for each independent variable.
Ethical approval was taken from the Ethical Review Board of the Nepal Health Research Council and the LMU Ethical Commission 3. Results Table 2 presents the difference in difference (DiD) ndings for the distribution of at least 4 ANC practices in relation to the female participants' independent variables by intervention and comparison areas and before and after the intervention period. After the intervention period, at least 4 ANC increased from 40. Table 3 shows the difference in differences and the level of signi cance within the independent variables with delivery conducted by SBA. Table 4 shows Table 5 shows the bivariate and multivariate regression for SBA delivery. The independent variables contributing towards change in SBA delivery were wealth index (p-value < 0.001), women education (p-value < 0.001), place of residence (p-value < 0.001), women's age (p-value 0.011), distance from health facility (p-value 0.007), and ecological region (p-value 0.024). Intervention did not play signi cant role for SBA delivery between the time period 2011 and 2016 (p-value 0.939).
Discussion
This study intends to assess the intervention's effectiveness in the 16 implemented districts taking other districts as a comparison. The study's main purpose was to assess the impact of the intervention package developed by applying the investment case (IC) approach in maternal health services in Nepal.
The maternal and child health indicators are greater than the Sub-Saharan African countries, and the reduction rate is also high (14). Some improvement has been observed in at least 4 ANC and skilled birth attendant delivery, but these improvements are generally similar in both intervention and comparison areas. Difference-in-differences did not show any signi cant improvement in the intervention and comparison districts. The intervention did not play signi cant role in any of the major indicators related to the Investment Case between the time period 2011 and 2016. Studies from a review of the survey of data from 54 countries indicated that skilled birth interventions were the least equitable intervention followed by four or more antenatal care visits (15).
A meta-analysis is a systematic review from eleven randomised and clustered randomized study from the countries like Nepal, Bangladesh, India, and Pakistan showed that community intervention and education of women showed no difference in the at least one ANC visit and three or more ANC visits and health care facility birth between the community interventions combined and control (16). A cluster randomized controlled trial of a package of community-based maternal-neonatal interventions was conducted in Mirzapur, Bangladesh in which indicators of practices and knowledge related to maternal and neonates improved in the intervention areas than comparison areas (17). A systematic review conducted for maternal health interventions in resource-limited countries revealed that the programs with multiple interventions would positively impact maternal health outcomes than the single intervention programs (18). A community-based intervention study conducted in India has shown signi cant health improvements in care-seeking and health behaviour (19).
A systematic review which analysed 208 innovative approaches reported in 259 studies and reports, including systematic and narrative reviews, randomized controlled trials (RCTs), cluster randomized controlled trials, controlled and uncontrolled pre-post and time-series studies, cross-sectional studies, and expert perspectives papers concluded that innovative approaches in MNH with innovative implementation and service delivery would help to improve equity in MNH services (20). The "Aama Suraksha Karyakram" (safe motherhood security program) implemented by the government of Nepal in order to address barriers in accessing maternal health services in Nepal is an effective and e cient program in order to reduce the barriers occurring inside health service and nancial condition (21).
Intervention Package
The investment complex is itself a complex program involving different interventions related to various maternal, neonatal, and child healthrelated indicators. All the Tanahashi model determinants are involved in the study, making it di cult and complex to monitor by health workers. Investment case was developed based on evidence from different trials and studies from other countries (22), and the settings may not be the same and neighbouring countries. The actors involved in the Investment case are all the stakeholders related to health in the districts. If several interventions are combined in a package. In that case, the overall effectiveness of the whole intervention gets diluted due to the dispersed attention of service providers and the policy and decision-makers. The evaluation of the Investment case of Asian and Paci c countries has also shown the complexity of the approach in which the district managers have mentioned the tool used was beyond the staff's capacity (8).
Implement intensity and Quality
During the investment case implementation, more focus was on both budgeting and technical assistance in the districts. The workshop conducted in the districts concluded with action plans with the responsibilities of different stakeholders related to different tracers (23). But the action plans were not monitored on a regular basis by the stakeholders as well as the implementing partners. There were no technical o cers in the districts who regularly monitored the intervention. The resources limitation also played a major role in the quality of intervention as the hard to reach areas had very limited resources (especially human resources) to carry out the activities (11).
Strengths And Limitations
Study design and study population Study design, study population, and time frame are important factors to be considered in the study design in order to show a potential impact on the population. The data used are taken from Nepal Demographic and Health Survey. The study has used data for a quasi-experimental design and analysed using difference-in-differences and multivariable regression techniques. As the study is quasi-experimental, it may pose different potential biases (24). The sample taken for the districts from NDHS may not represent the whole population of the districts as some districts sample are bigger and some have smaller sample sizes and the population of each selected districts may also differ from one another. The sample taken for MNCH related indicators is smaller, which reduces the power of the study, which may have affected the signi cant association between the variables (25).
For comparison population, the districts selected had almost the same Human Development Index as the intervention districts but were relatively easy to reach districts as all the hard to reach districts were already selected by the IC approach as intervention districts. Thus, making the intervention and comparison districts incomparable.
Data and analysis
As DHS includes a cross-sectional survey, the retrospective data are collected related to maternal, neonatal, and child health, which may be subject to recall bias. DHS data are designed to represent at the national level but may not necessarily represent the district level. The sample of 1000-1500 women and children would be better for valid estimation of fertility and child morbidity and mortality (13). The analysis was carried out using multiple analysis approaches (e.g. difference-in-differences and univariate/multivariable regression analysis) helped to examine the effect with and without adjustment for possible confounders. Weighted values were calculated for all the variables.
Conclusion
This study utilized the data of NDHS 2011 and 2016 to assess the impact of the investment case approach in the implemented districts of Nepal. The data from Nepal Demography and Health Survey of 2011 and 2016 indicated that the overall progress. Some improvements have been observed in the major indicators like at least 4 ANC and skilled birth attendant delivery; however, these improvements are generally similar in both intervention and comparison areas. The results indicated that the re-design of the intervention strategies should be done in order to maximize the effectiveness of the IC approach. | 3,983 | 2020-12-11T00:00:00.000 | [
"Medicine",
"Economics"
] |
A Least Squares Solution to Regionalize VTEC Estimates for Positioning Applications
A new approach is presented to improve the spatial and temporal resolution of the Vertical Total Electron Content (VTEC) estimates for regional positioning applications. The proposed technique utilises a priori information from the Global Ionosphere Maps (GIMs) of the Center for Orbit Determination in Europe (CODE), provided in terms of Spherical Harmonic (SH) coefficients of up to degree and order 15. Then, it updates the VTEC estimates using a new set of base-functions (with better resolution than SHs) while using the measurements of a regional GNSS network. To achieve the highest accuracy possible, our implementation is based on a transformation of the GIM/CODE VTECs to their equivalent coefficients in terms of (spherical) Slepian functions. These functions are band-limited and reflect the majority of signal energy inside an arbitrarily defined region, yet their orthogonal property is remained. Then, new dual-frequency GNSS measurements are introduced to a Least Squares (LS) updating step that modifies the Slepian VTEC coefficients within the region of interest. Numerical application of this study is demonstrated using a synthetic example and ground-based GPS data in South America. The results are also validated against the VTEC estimations derived from independent GPS stations (that are not used in the modelling), and the VTEC products of international centres. Our results indicate that, by using 62 GPS stations in South America, the ionospheric delay estimation can be considerably improved. For example, using the new VTEC estimates in a Precise Point Positioning (PPP) experiment improved the positioning accuracy compared to the usage of GIM/CODE and Klobuchar models. The reductions in the root mean squared of errors were ∼23% and 25% for a day with moderate solar activity while 26% and ∼35% for a day with high solar activity, respectively.
Introduction
The Global Navigation Satellite System (GNSS) technique has become an integral part of all applications, where mobility plays an important role. The basic observable in the GNSS positioning is the time required for electromagnetic signals to travel from the GNSS satellite (transmitter) to a GNSS receiver. This travelling time, multiplied by the speed of light, provides a measure of the apparent distance (pseudo-range) between them. By knowing the position of GNSS satellites from Precise Orbit Determination (POD), the unknown position of the GNSS receiver and its uncertainty can be computed when at least four range measurements exist.
From the mostly used GNSS constellations, GPS and GLONASS satellites orbit at altitudes of around 20,000 km, while BeiDou and Galileo satellites orbit a bit higher, i.e., around 21,500 km and 23,000 km, respectively. The signals from GNSS satellites must transit the ionosphere (i.e., the part of atmosphere between 60 and 2000 km containing ionized plasma of different gas components) on their way to receivers. These free electrons add delay on the code-derived pseudo-range and advance the career phase signals. These effects must be eliminated in some way to achieve high accuracy in GNSS positioning, navigation and timing applications. As a result, the ionospheric modelling has received ever increasing attention in various fields including radio communication, navigation, satellite positioning and other space technologies [1].
Ionospheric models can be divided into three main categories: group (i) includes physical models, in which ionospheric changes are simulated based on the physical laws or the assumptions concerning the structure and variations of ionosphere such as the Global Assimilative Ionospheric Model GAIM [2]; group (ii) is known as empirical models that use deterministic functions to describe periodic and sudden changes in ionosphere, see, e.g., [3], including the International Reference Ionosphere IRI [4] and the NeQuick model [5]; and finally group (iii) consists of mathematical models, which are estimated in terms of mathematical (base) functions using observation techniques that provide ionospheric variables such as the Total Electron Content (TEC), Vertical TEC (VTEC) and Ionospheric Electron Density (IED).
This study follows the computational strategy of (iii), for which the spatial changes of ionospheric density can be formulated as two-dimensional (2D) or three-dimensional (3D) models. The 2D technique is formulated as a Single-Layer Model (SLM), where all free electrons within the ionosphere are concentrated in an extremely thin layer at a constant height. Thus, in the 2D approach, the vertical gradient of electron density changes is not considered [6][7][8][9][10]. The 2D models are often used for estimating total ionospheric delay in (precise) positioning applications. A comprehensive summary of these models can be found in El-Arini et al. [11].
In the 3D modelling techniques, horizontal changes in the ionospheric electron contents (e.g., with respect to the geodetic latitude and longitude), as well as their vertical variations (i.e., along the altitude that is measured from surface of the Earth) are described. Therefore, the 3D models require more observations (than the 2D techniques) and a careful parameterization to obtain a relatively stable relationships between observations and model's unknown parameters. To gain reasonable horizontal and vertical coverage for the 3D ionospheric tomography, a combination of various TEC observations is considered in previous studies, see e.g., [12][13][14]. This includes STEC (e.g., from GNSS observation), IED (e.g., from Radio Occultation (RO) of Low-Earth-Orbiting (LEO) satellites) and VTEC measurements (e.g., from satellite altimetry) for various techniques of ionospheric density modelling, see, e.g., [15]. While the 3D techniques are often used to study the structure of ionosphere, the 2D models (SLMs) are applied for computing total ionospheric delays in Precise Point Positioning (PPP) applications. It is worth mentioning here that the temporal variation of above mentioned techniques can be achieved by modelling the ionosphere as a short-time (e.g., 2-hourly) static field, see, e.g., [16] or dynamic as in [17]. In this study, we will show how our regionalized 2D (spatial) TEC model can improve the performance of PPP applications.
The single-layer TEC models, see, e.g., [11], are often described based on a set of (mathematical) base functions. The choice of these functions is arbitrary and application-dependent. For example, polynomial functions were chosen in [18,19], while Spherical Harmonics (i.e., often selected to be up to degree and order 15) are considered in, e.g., [20] for global TEC inversions. The Center for Orbit Determination in Europe (CODE) is one of the IGS analysis centers, which determines the precise GPS orbits using the IGS network data and provides the orbit information to GPS users worldwide. The CODE has also produced daily maps of the Earth's ionosphere on a regular basis since 1 January 1996. The GIM/CODE is modeled by 256 coefficients of the Spherical Harmonics (SHs) expansion up to degree and order 15. Principles of the TEC mapping technique by CODE are described in, e.g., [6,21].
Though modelling based on SHs is well-understood and it is convenient to be applied for the global representation and analyzing the ionosphere, it is not well suited for regional applications [22]. Therefore, regional base functions are introduced to the regional TEC modelling applications. Most of these studies have taken advantage of the orthogonal but strictly band-limited functions that can be concentrated within a region of interest, or by considering an appropriate orthogonal family of the strictly space-limited functions [23]. For example, the B-Spline functions were applied in [14,[24][25][26][27], which are based on the Euclidean quadratic B-Spline wavelets. The useful properties of these base functions, i.e., continuity, smoothness and computational efficiency, provide a great advantage for regional modelling of the ionosphere or when the observations are unevenly distributed over the globe. The Spherical-cap Harmonic were applied in [28][29][30] to reduce the lack of orthogonality of the global SHs for regional applications. To mitigate a limitation of these functions that can only be built in regions with smooth boundaries, the regional TEC inversion was formulated in [10] based on the spherical Slepian functions [23] that optimizes field separation over arbitrary regions with irregular boundaries. The other benefit of this formulation is the direct relationship between spherical Slepian functions and the global SHs, which will be discussed in Section 3.
Building on the methodology of Farzaneh and Forootan [10], the main focus of this study is to present an efficient approach to use already available TEC information or maps as a priori information and update them when new TEC observations become available. This update is implemented through a Least Squares (LS) approximation of the Bayesian formulation that considers the uncertainties of both a priori information and observations. We will demonstrate the efficiency of this approach through a synthetic example, where the 'true' solutions of the regional VTEC is known by definition. We also test the proposed algorithm by providing an accurate and fast estimation of the regional TECs for a Precise Point Positioning (PPP) application. Therefore, the methodology of this study is formulated as a 2D (spatial) mathematical approach followed by an LS update to regionalize the available TEC maps in the region of interest. Our main assumption is that the local TEC changes are well presented in the dual frequency GNSS observations of the local network. The presented method, after some modifications, can also be extended to the 3D formulation. Theoretically, the presented method can also accept other TEC observations (e.g., from RO, altimetry and the Swarm mission) besides the GNSS observations. These extensions, however, will be a subject of our future investigations.
Recently, Erdogan et al. [17] developed a near real-time processing framework to model spatial and temporal variations within the ionosphere in terms of compactly supported B-spline functions and the recursive filtering using GNSS measurements. The DGFI-TUM's high-resolution ionospheric product was compared by Goss et al. [31] with the GIM/CODE and the voxel solution from the Universitat Politècnica de Catalunya (UPC). The authors found a better ionosphere representation using DGFI-TUM estimations during the test period of September 2017. Olivares-Pulido et al. [26] presented a real-time TEC modelling approach using B-Spline base functions and a sequential Kalman filter updating scheme, with the goal of providing ionospheric corrections for Precise Point Positioning-Real Time Kinematic (PPP-RTK) applications. Compared to these techniques, the proposed approach of this study addresses a regional estimation of TEC by taking advantage of available a priori information and models the localized anomalies based on the spherical Slepian functions.
In what follows, the data sources of this study are described in Section 2. Estimation of the Slant TEC (STEC) using GNSS observations and the description of global Vertical TEC (VTEC) maps used as a priori information are described in Sections 2.1 and 2.2, respectively. In Section 2.3, the VTEC products of the NASA's Jet Propulsion Laboratory (JPL), European Space Agency (ESA) and the International GNSS Service (IGS) are introduced, where they are later compared with the regionalized VTEC estimates of this study and those of GIM/CODE. TEC modelling is introduced in Section 3, where, in Section 3.1, the (spherical) Slepian functions are introduced, and the LS formulation to estimate TECs in the presence of a priori data are described in Section 3.2. Results of this study are presented in Section 4, where the simulation is presented in Sections 4.1 and 4.2, the Independent Component Analysis (ICA), as in [32][33][34], is applied to compare one year of the regionalized TEC maps with those of GIM/CODE. The regionalized VTEC estimates are compared with those of CODE, JPL, ESA and IGS in Section 4.3, where the VTEC estimates from three GPS stations are used as an independent validation. An assessment of the regionalized TEC estimations in a PPP application is performed in Section 4.4, and, finally, this study is concluded in Section 5.
TEC and VTEC Determination Using GNSS Observations
In theory, the Slant Total Electron Contents (STECs) can be directly computed from the differential code or carrier phase measurements received by dual frequency receivers. Here, the formulation is presented based on the GPS measurements on both L1 (1575.420 MHz) and L2 (1227.600 MHz) frequencies. The noise level of the carrier phase measurements is significantly lower than that of the pseudo-range ones. However, for estimating STECs from the carrier phase measurements, one must account for the (float/integer) full cycle ambiguity (see [35], which is often estimated at the pre-processing step). In order to benefit from the ambiguity-independent estimates of STECs derived from the code pseudo-ranges and the high precision of the carrier phase measurement, the pseudo-range ionospheric observations are smoothed using the "carrier to code leveling process" method [25,36], i.e., whereP 4 is the pseudo-range ionospheric observable smoothed by the carrier-phase ionospheric one, i.e.,P In these equations, b r and b s are the code Inter-Frequency Biases (IFBs) for the receiver and satellite, f 1 and f 2 are the L1 (1575.420 MHz) and L2 (1227.600 MHz) frequencies, I 1 and I 2 are the ionospheric refraction delays at L1 and L2, and P 4 and Φ 4 are the geometry-free linear combination of pseudorange and carrier phase measurements in the continuous observational arc (the interval at which no cycle-slip error has occurred). Finally, ε p and ε L represent the effects of multi-path and measurement noise on the pseudo-range and carrier phase, respectively. The STEC values can be converted into the height-independent VTEC by introducing the single layer mapping function [20]: with where R is the mean Earth radius, z and z are respectively the zenith angles of GPS satellite at the user position and the ionospheric pierce point, and H is the mean altitude (that approximately corresponds to the altitude of the maximum electron density and its height, i.e., varying between 250 and 500 km) depending on the latitude, season, solar and geomagnetic activity conditions [37,38]. The uncertainty of GPS-derived VTECs is related to the quality of code pseudo-range and carrier phase measurements at L1 and L2 frequencies, and phase and code inter-frequency biases for the receiver and satellites. The usage of the mapping function to covert STEC to VTEC (Equation (3)) can also be considered as an additional source of uncertainties, but this has not been considered in this study. According to covariance propagation theory, these uncertainties can be estimated as follows: where σ P = σ P 1 = σ P 2 = 0.2 m is standard deviation of code pseudo ranges observation and σ φ = σ φ 1 = σ φ 2 = 0.02 cycle is the standard deviation of carrier phase pseudo ranges observation. Assuming that the code pseudo-range and carrier phase derived TECs are uncorrelated, uncertainties of the pseudo-range ionospheric observable smoothedP 4 can be derived using the formal error propagation law, which gives: where λ 1 and λ 2 are the wavelength of carrier phase observations (i.e., 19.03 cm and 24.42 cm), n is the number of measurements in the continuous arc, while σ br and σ bs are reachable from the IONEX files, which are produced by the IGS Analysis Centers (ftp://ftp.unibe.ch/aiub/CODE/) and contain the GPS-derived TEC maps with their uncertainties.
Global Ionospheric Model (GIM/CODE)
The Global Ionosphere Maps (GIMs) from the Center for Orbit Determination in Europe (CODE) are modeled by 256 coefficients of the Spherical Harmonics expansion up to degree and order 15.
whereP nm are normalized Legendre polynomials. Equation (8) can be written as a linear transformation in the form of σ VTEC−GI M/CODE i = MΣ X , where Σ X contains the errors (σ C nm , and σ S nm ) and M contains the known values ofP nm , cos(ms si ) and sin(ms si ). The covariance matrix of GIM/CODE data in the grid domain can be derived as:
Ionospheric Models Used for Comparisons
VTEC estimates from the official products of the NASA's Jet Propulsion Laboratory (JPL), the European Space Agency (ESA) and the International GNSS Service (IGS) are used here for comparison. The IGS ionosphere working group was established since 1998, and has developed different techniques to provide VTEC maps using the IONosphere EXchange (IONEX) format [39][40][41] since then. IONEX provides TEC estimates with a spatial resolution of 5 • and 2.5 • in longitude and latitude, respectively, and a temporal resolution of few minutes to several hours in real-time, rapid and final modes. Although the real-time TEC products have been proposed by the IGS, users now can only access the rapid and final products with a latency of a few days. Various VTEC products from CODE, Universitat Politècnica de Catalunya/IonSAT (UPC), JPL, ESA are accessible from ftp://cddis.gsfc.nasa.gov/pub/gps/products/ionex/ with 2 h steps [20,21,42,43]. Comparing TEC estimates from different centers can reflect the consistency of our estimates with respect to the existing modelling methods.
Method
In what follows, in Section 3.1, we introduce the (spherical) Slepian functions that can be used for efficient regional TEC modelling, see, e.g., [10]. The relationships between these functions and SHs are also presented. In Section 3.2, a Least Squares (LS) approximation of the Bayesian-type update is provided to compute TEC maps using GNSS observations, while taking the GIM/CODE maps (in terms of SHs) as a priori information.
Spherical Slepian Functions
A global field can be expanded by SH functions and their coefficients as: where Y lm (r) represents SHs of degree l and order m, r is the location of a point on the surface of a unit sphere Ω. The corresponding coefficients ( f lm ) can be estimated by solving, e.g., an integral over the entire sphere (unit sphere Ω), i.e., In order to localize these functions into a region of interest (target region), the optimization of a local energy criterion can be utilized. This will give a new set of functions in the sense of [44]. The spherical Slepian function can be presented as a band-limited spherical harmonic expansion as: with To maximize the spatial concentration of the band-limited function g(r) within the target region R, the ratio of the norms should be maximized as: where 0 ≤ λ ≤ 1 is a measure of spatial concentration. The maximization of this concentration criterion can be achieved in the spectral domain by solving the algebraic eigenvalue problem as: with D lm,l m being the Gram matrix of energy within the target region R, i.e., Therefore, the desired signal within the region of interest R can be efficiently approximated by: where g n (r), d n and N are the spherical Slepian functions, corresponding unknown coefficients and Shannon number, respectively; see more details in [45,46]. A linear transformation can be used to convert SH coefficients into spherical Slepian ones, whose energy is concentrated onto specific patches of the sphere [22,23,47]. Considering f to be the coefficients of SHs, which can be, for example, those of VTEC coefficients from the GIM/CODE, and a given variance-covariance matrix Σ X ; this global field can be localized inside the region of interest using: where d localized n is the localized field and g = (g 00 . . . g lm . . . g LL ) T is a localization matrix from Equation (10). The covariance matrix of the localized GIM/CODE coefficients, which is used as the prior covariance matrix in the update step (Section 3.2), can be estimated through a covariance propagation as:
A Least Squares (LS) Approximation of the Bayesian Update
In Equation (17), we showed how to convert SH coefficients to their corresponding spherical Slepian coefficients. These values and their errors can then be used as a priori information in a Bayesian formulation to be updated by the VTEC values that are estimated from GNSS observations (or any other techniques that measure TEC or VTEC). To estimate these updates, suppose L is a vector of VTEC observations (e.g., computed by rearranging Equation (3), i.e., VTEC = STEC cos z . The distribution of L (i.e., P(L)) conditionally relates to the distribution of the unknown Slepian coefficients d n (i.e., shown by P(d n )). Relationships between observation and unknowns are introduced by P(L|d n ), which is known as the likelihood function and the distribution of unknown parameters in the presence of observation L (i.e., shown by P(d n |L)) is derived by the Bayesian theory, i.e., The distribution of unknown parameters (P(d n )) is already known before the observations (L) were taken. Once the observations (L) are introduced, P(d n |L) represents the posterior distribution of the parameter vector (d n ). Thus, this is an update of a priori information by the introduced observations (L). By knowing the distribution of parameters (P(d n )), one can compute the mathematical expectation of parameters, i.e., d localized n and its covariance matrix, i.e., Σ d localized n .
Here, for simplicity, we suppose that P(d n ) is normal. Thus, a priori distribution of the unknown parameters d n is P(d n ) ∼ N(d localized n , Σ d localized n ). Moreover, if we assume that the observation vector (L) is also normally distributed and the variance factor to be σ 2 , a priori distribution of the unknown parameters (d n ) is a conjugate prior. Hence, the posterior distribution of d n is also normal and can be computed as: , where A is the design matrix containing the Slepian functions, i.e., g in Equation (12), P is the weight matrix of the observations, and d 0 is the posterior mathematical expectation. Therefore, the LS approximation of the Bayesian update that provides d n is given by:d with A being a full column rank n × u matrix. Here, n is the total number of observations (length of the observation vector L); u is the number of unknowns (i.e., the total number of unknown Slepian coefficients); P is the known positive definite n × n weight matrix of the observations. The variance of the unknown Slepian coefficients can be computed as: where V = Ad n − L, V X =d n − d localized n , Σ −1 X is the prior covariance matrix and n is the total number of observations. An overview of VTEC estimation using the proposed approach is presented in Figure 1.
Results and Discussion
The regional VTEC modelling of this study is based on the ground-based GNSS observations collected across South America and few stations in North America. GNSS observations of 62 stations belong to IGS and the Brazilian Network for Continuous Monitoring (RBMC). The data are obtained from www.ibge.gov.br with the sampling rate of 30 s. In order to solve STEC from these observations, receivers' Differential Code Bias (DCB) and the Inter-Frequency Bias (IFB) values for the GNSS satellites were calculated using the regional formulation as in [48][49][50].
The STEC and VTEC values from each GNSS observation were computed using Equations (1)-(3). The altitude of the Single Layer Model (SLM) was set to 450 km (to be consistent with those of GIM/CODE), and the elevation cut-off angle of 15 • was used to select valid GNSS satellites. The precise orbit files, which are provided the IGS agencies, were downloaded from ftp://cddis.gsfc.nasa.gov/ pub/gps/products and interpolated to determine satellite positions. An overview of the input data used for the ionospheric modelling of this study is shown in Figure 2, where one can see that our VTEC modelling domain covers an extended region above the the GNSS network. In what follows, we validate the proposed approach by a synthetic example (Section 4.1). The VTEC estimates are then assessed in three different ways, in Section 4.2, the 2-hourly regionalized maps of this study are compared with those of GIM/CODE to see whether the new model represents expected spatial-temporal as reflected in the global model. In Section 4.3, the regionalized vTEC estimates are compared with the predicted values provided by the NASA's JPL, European Space Agency (ESA), IGS and GIM/CODE. Finally, in Section 4.4, the VTEC estimations are assessed in a Precise Point Positioning (PPP) application.
Simulation
In order to validate our modelling approach, a synthetic example is designed to assess the ability of the Bayesian approach in recovering the regional signals. To introduce the 'true' VTEC patterns, the spherical harmonic coefficients of the GIM/CODE is used to produce a smooth grid with 0.5 degree resolution that covers South America. Then, we added periodic oscillations with the magnitude of 10 TECU (F(longitude, latitude) = 10 * sin(20 longitude) * cos(20 latitude)) on the top of GIM (see Figure 3, plot on the left). For comparison, we show a simple synthesizing of the true signal using the spherical harmonics expansions of degree and order 15 and 90 in Figure 3A1,A2, respectively. Their differences with the true pattern are shown in B1 and B2, respectively. The regionalize VTEC estimation method is implemented by considering the low-degree VTEC estimates of Figure 3A1 as a priori information. Two regionalization experiments are considered, where the first is shown in Figure 3A3. Here, the 'true' VTECs are interpolated at 2140 points that are located at the pierce points (see the locations in Figure 3C1) that are derived by connecting the 62 stations of Figure 2 to the GPS satellites during 30 s. In Figure 3A4, 1 • VTEC values of the true VTEC signal (i.e., 8191 points of Figure 3C2) are considered as observation. The differences with the true signal are shown in Figure 3B3,B4, respectively. Though the VTEC recovery by spherical harmonics is a simple synthesizing the introduced VTEC field, its accuracy is found to be limited due to the truncation that is dictated by the selected spectral resolution. This is demonstrated in the residual plots ( Figure 3B1,B2, where the differences of 10 TECU can be detected that include mismatch anomalies compared with the VTEC estimates from the GIM/CODE, as well as the artificial regional anomalies. The magnitude of residuals derived from the Bayesian update using only 30 s of the local GPS network is one level of magnitude better than Figure 3A2,B2. In Figure 3A4, it can be seen that using a well covered VTEC observations results in an accurate recovery of the truth with very negligible error magnitude (i.e., 10 −4 in Figure 3B4). This simulation indicates that indeed the regional anomalies are able to modify a priori information. Therefore, the method can will be applied to real case studies.
A Comparison between the Regionalized VTEC Maps and GIM/CODE Products
Two-hourly snapshots of the regionalized VTEC maps for 17 March (DOY 76) and 21 December 2013 (DOY 355) are presented in 12 maps presented on the top panel of Figures 4 and 5, respectively. These values are presented as TEC Unite (TECU). Considering these maps, the ionosphere maximum appears around the local noon as travelling along with the Sun. Relatively bigger values estimated in DOY 76 (compared to DOY 355) are related to the higher magnetic activity during this day (i.e., the K p values of these days were +6 and −2, respectively). Equatorial TEC anomalies can be detected in both days, where a sunrise enhancement is seen in VTEC estimates at 12:00 UT during both days. The high values of TEC at 19:00 and 23:00 UT are related to the physical structures of the diurnal equatorial ionization anomaly and its resurgence after sunset, respectively [51][52][53]. Figure 4). An average value of VTEC during March (with high magnetic activity) is found to be ∼60. On 21 December 2013, due to its lower magnetic activity, the values VTEC are found to be smaller, i.e., the minimum VTEC of ∼10 around 8:00 AM and ∼68 around 8:00 PM (see Figure 5). Considering the differences between the regionalized solutions and the GIM/CODE (12 plots on the bottom of Figures 4 and 5), the GIM/CODE products are found to underestimate VTEC variations, mostly when the magnitude of VTEC is higher. The maximum differences are found to be ∼15 TECU on 17 March 2013 and ∼10 TECU on 21 December 2013.
Here, we extend the assessment of the new VTEC estimates for the entire year 2013, by applying the Independent Component Analysis ICA [32,33] on the differences between the two-hourly GIM/CODE products and regionalized results. The first two dominant independent modes are shown in Figure 6 that correspond to 65% of the total variance of VTEC differences. Signals on the oceans are masked to highlight the changes over the land and to provide an indication for possible impacts on positioning applications. In each mode, spatial functions ( Figure 6-left) are anomaly maps in terms of TECU, which can be multiplied by the unit-less time series (Figure 6-right) known as normalized Independent Components (ICs) to derive independent modes of variability. The results indicate that the magnitude of differences in the year 2013 reach up to 6 TECU, and they are dominated by the diurnal and semi-diurnal frequencies. By analysing the temporal ICs, it became clear that the magnitude of differences during May to middle September (DOY ∼120-258) is almost half of the VTEC differences during the rest of 2013 (see IC1 and IC2 of Figure 6). To assess whether there is a relationship between the VTEC differences and geomagnetic activity, the ICs are smoothed by "loess" and "rloess" methods [54], while using 5% of the data and they are compared with the loess-smoothed geomagnetic K p index. The numerical results indicate a considerably high correlation coefficient (0.7) between them during January to middle May and middle September to December, where the magnitude of VTEC changes was higher than rest of the year.
In Figure 7, the amplitude of diurnal and semi-diurnal VTEC differences is shown. These two frequencies are selected because of their dominance as it was shown in Figure 6. The amplitude of diurnal differences reach up to 5 TECU (Figure 7-left), while that of semi-diurnal is found to be 3 TECU (Figure 7-right). These differences represent considerable impact on the accuracy of positioning applications, where, roughly speaking, 1 TECU and corresponds to 16 cm positioning error at the f 1 frequency (1575.420 MHz). The ICA results (first two independent modes) of VTEC differences (GIM/CODE-regionalized estimate) for the entire 2013. The left plots are anomaly maps in terms of TECU, which can be multiplied by the unit-less time series (ICs) and provide statistically independent modes of VTEC differences.
Comparing Regionalized VTEC Maps with the JPL, ESA and IGS Products
In this section, we evaluate the regionalized VTEC estimates by comparing them with those derived from other groups. For this, TEC values along the line of sights between GNSS stations and satellites are computed (Equation (1)). These values are then converted to VTEC by implementing the single layer mapping function (Equation (3)) [20], and the results are compared with other models. As reference stations, we used the dual frequency GPS observations of Bogota (lat = 4.64007 and lon = 285.91906), Unsa (lat = −24.72746 and lon = 294.59236) and Punta Arenas (lat = −53.13695 and lon = 289.12011). These observations were not used in the regionalized VTEC estimates of this study, but they are used for validations (locations of the stations are shown by three black dots in Figure 1). Plots in Figure 8 top-left and top-right show the observed VTEC estimates (from dual frequency GPS observations) of the three stations during 17 March 2013 (DOY 76 with K p = +6) and 21 December 2013 (DOY 355 with K p = −2), respectively. For comparison, the differences between these values and the regionalized VTEC estimates, as well as those of GIM/CODE, JPL, ESA and IGS centers [21] are shown in separate plots. The regionalized VTEC estimates are computed in near-real time mode whenever the observations are available. Therefore, they are generated with the same sampling rats of the GPS observation (i.e., every 30 s), while the official products are provided 2-hourly (120 min) and with the latency of several days. By comparing the results of Figure 8, the estimated VTEC residuals of the regionalized model are found to be smaller than the other products. Table 1
A PPP Assessment of the Regionalized VTEC Estimates
Three GNSS stations (i.e., Bogota, Unsa and Punta Arenas) of Section 4.3 are chosen to assess the impact of VTEC estimates on the positioning accuracy. The dates of assessments are also chosen to be the same as previous section to make the interpretation easier. As a measure of accuracy, the Root Mean Squares Error (RMSE) of the positioning residuals is calculated. To compute an accurate position (to be used as reference for estimating the position accuracy), the Precise Point Positioning (PPP) solution as in [55] is chosen as our computation technique. Ionospheric-free combination is created using GPS observations including both L1 and L2 signals. Based on this, station coordinates, receiver clock error, systems time difference parameters with the GPS system, troposphere parameter and phase ambiguity are estimated during 24 h of 17 March 2013 (DOY 76 with K p = +6) and 21 December 2013 (DOY 355 with K p = −2). Table 2 presents the processing strategy and the error modelling for the performed PPP experiment. It is worth mentioning that the convergence period of the PPP experiments was not considered in the computation of the quality measures. To assess the impact of VTEC modelling on the position accuracy, the above PPP experiment is repeated with the same setup but, for estimating the ionospheric delays, the regionalized VTEC estimates, and those of Klobuchar [63] and GIM/CODE are replaced. The ionosphere-free combination is therefore replaced by the single-frequency PPP [64]. The position estimates of these experiments are compared with those of the reference (computed by the ionosphere-free combination of L1 and L2 measurement as described before). The error plots that correspond to the regionalized VTEC estimates are found to be very smooth similar to those of the ionosphere-free combinations. Figure 11 summarizes the RMSE of the positioning residuals for each station compared with errors from the dual frequency PPP estimates. In comparison with GIM/CODE and Klobuchar models, the use of regionalized VTEC estimates improves the positioning accuracy by 30% and 33% for Bogota, 25% and 38% for Punta Arenas, as well as 24% and 42% for Unsa during 17 March 2013. These values are found to be 27% and 24% for Bogota, 15% and 23% for Punta Arenas, as well as 28% and 29% for Unsa during 21 December 2013, respectively. The differences in the magnitude of improvements are related to the differences in geomagnetic activity of these two days.
Summary and Conclusions
The ionosphere is the major error source that affects the positioning accuracy of GNSS positioning. In this study, a Least Squares (LS) approximation of the Bayesian formulation is introduced to use a priori information from, e.g., already available VTEC maps from the Global Ionosphere Map (GIM) of the Center for Orbit Determination in Europe (CODE). Then, we use TEC estimates from local GNSS networks to update a priori values within the region of interest. The presented VTEC estimation follows a 2-step algorithm, where, in step-1, the GIM/CODE's VTEC values are transformed from the global spherical harmonic coefficients to an optimum band-limited local spherical Slepian coefficients in the region of interest. In step-2, we use new VTEC observations from GNSS observations in a Bayesian equation to update the spherical Slepian VTEC coefficients of step-1. The numerical assessments are performed on a network in South America including 62 stations. Comparisons are performed with the VTEC products of GIM/CODE, and the external data of JPL, ESA and IGS. A Precise Point Positioning (PPP) experiment is implemented to compare the impact of VTEC models in terms of position errors with the position derived from 24-hour double-frequency measurements of three IGS stations. The main findings of this study include:
•
Comparisons with the GIM/CODE confirm that the regionalized model estimates TEC without unexpected oscillations, though the range of variations from the IGS models is found to be underestimated.
•
Comparing the regionalized estimates of this study with the VTEC estimations using the dual-frequency measurements of three GPS stations indicates that the average of absolute differences is less than 2 TECU, which indicates an accepted performance of the presented technique.
•
Performing VTEC analyses for the entire 2013 shows that the presented regionalization technique is appropriate for VTEC modelling under normal and high geomagnetic conditions. • Comparison with various VTEC models, the quality of the new estimates, and hence the ionospheric corrections, is found to be better within near real-time PPP applications.
•
The results showed that the positioning accuracy of single-frequency positioning with the external ionospheric model correction can obtain meter-level accuracy, and the vertical error is found to be relatively larger than the horizontal components.
•
Results indicate that the regionalized model is better suited to correct ionospheric impact of GPS positioning compared with the usage of Klobuchar and GIM/CODE in a precise point positioning setup.
Some ideas for further development and improvement of the results for future work are: • The new European satellite navigation system, Galileo and the restored Russian system, GLONASS, are examples of other constellations that can double the quantity of (V)TEC data. The multi-constellation observations will be used for future studies to improve the quality of TEC observations. • Combining different data sources, e.g., from radio occultation and satellite altimetry, will be considered to improve the spatial coverage of TEC observations. • Further investigations need to be conducted for other GNSS networks at different latitudes with higher or lower reference station density.
•
The LS approximation of the Bayesian update can be replaced by a more efficient Markov Chain Monte Carlo optimization to avoid the assumption of Gaussian distribution for a priori information and observations. | 8,649.2 | 2020-10-29T00:00:00.000 | [
"Engineering",
"Environmental Science"
] |
Monitoring DC Motor Based on LoRa and IOT
— Electrical energy efficiency is a dynamic in itself that continues to be driven by electrical energy providers. In this work, long-range (LoRa) technology is used to monitor DC motors. In the modern world, IoT is becoming increasingly prevalent. Embedded systems are now widely used in daily life. More can be done remotely in terms of control and monitoring. LoRa is a new technology discovered and developing rapidly. LoRa technology addresses the need for battery-operated embedded devices. LoRa technology is a long-range, low-power technology. In this investigation, a LoRa transmitter and a LoRa receiver were employed. This study employed a range of cases to test the LoRa device. In the first instance, there are no barriers, whereas there are in the second instance. The results of the two trials showed that the LoRa transmitter and receiver had successful communication. In this study, the room temperature is used to control DC motors. So that the DC motor's speed adjusts to fluctuations in the room's temperature. Additionally, measuring tools and the sensors utilised in this investigation were contrasted. The encoder sensor and the INA 219 sensor were the two measured sensors employed in this study. According to the findings of the experiment, the tool was functioning properly.
INTRODUCTION
Most tropical and subtropical nations, particularly those with warm climates like India, China, the US, Brazil, etc., use ceiling fans as a prominent domestic appliance.They make a considerable contribution to household electricity usage [1]- [6].For instance, it is estimated that the electrical energy used by just one ceiling fan in India in 2000 accounted for 6% of the energy used by all household equipment.In 2020, this consumption is anticipated to increase by 9%.Singlephase induction motors are preferred over other types of motors for 95% of applications due to their straightforward design and low cost.However, the single phase induction's efficiency is only between 30 and 40%.For instance, to create an airflow, approximately 65-75 Watts of electricity are required from the source.However, the actual mechanical power need is 20 W, and single phase induction motor speed regulation is exceedingly difficult.The greatest energy-efficient machine to solve the aforementioned issue is the DC motor.DC motors are widely used in both industry and domestic settings [7].Electronic systems are supported by DC motors.The benefits of DC motors are their high torque, lack of reactive power losses, and lack of harmonic generation in the electrical power supply from which they draw their power [8]- [14].
The development of low-power wide-area network (LPWAN) technologies has generated a great deal of interest and helped make Internet of Things (IoT)-based applications very popular [15]- [18].Most LPWANs operate in the unlicensed industrial, scientific, and medical (ISM) bands, depending on the deployment area [19], [20].A LoRa-based network link is a wireless communication system that transmits data over a great distance using long-range, lowpower radio frequency transmissions [21]- [23].NB-IoT, Sigfox, and LoRa are some of these LPWAN technologies.The benefits of LoRa in terms of ultralong transmission, high stability, ultralow power, and low cost have been established [24]- [28].IoT devices are used in a variety of applications, such as smart cities, industrial automation, agricultural, and environmental monitoring [29]- [33].LoRa-based network linkages are intended to enable dependable and affordable communication.LoRa-based network links have two main components, i.e., LoRa nodes and LoRa gateways [34]- [40].The LoRa nodes are small, low-power devices that can be embedded in IoT sensors and devices to transmit data wirelessly [41]- [45].The LoRa gateways act as bridges between the LoRa nodes and the Internet, receiving data from the nodes and forwarding them to a cloud-based server or an IoT platform [46]- [50].Several researchers have implemented LoRa such as monitoring health [51]- [54], electrical [55]- [61], water quality [62]- [68] and coal mine [69]- [71].This paper will present monitoring of DC motors using LoRa and IOT.LoRa has the ability to communicate up to a certain distance depending on the surrounding environment.In urban areas, LoRa can communicate up to a distance of 5 km, while in rural areas the distance can be up to 15 km.The contributions of this paper are:
II. METHODS
A. Long-Range (LoRa) LoRa, which stands for "Long Range", is a long-range wireless communications system, promoted by the LoRa Alliance [72]- [74].This system aims at being usable in longlived battery-powered devices, where the energy consumption is of paramount importance.LoRa can commonly refer to two distinct layers: (i) a physical layer using the Chirp Spread Spectrum (CSS).radio modulation technique; and (ii) a MAC layer protocol (LoRaWAN), although the LoRa communications system also implies a specific access network architecture [75], [76].The LoRa physical layer, developed by Semtech, allows for long-range, low-power and low-throughput communications.LoRa uses the frequencies 433 MHz, 915 MHz, and 920 MHz to transmit data.The frequency used in Indonesia and Japan is 920 MHz [77]- [79].The payload of each transmission can range from 2-255 octets, and the data rate can reach up to 50 Kbps when channel aggregation is employed.The modulation technique is a proprietary technology from Semtech.LoRaWAN provides a medium access control mechanism, enabling many end-devices to communicate with a gateway using the LoRa modulation.While the LoRa modulation is proprietary, the LoRaWAN is an open standard being developed by the LoRa Alliance.Table I displays the features of various network technologies utilized for the Internet of Things.
Various end-device kinds are available for LoRa, depending on the communication method used [80].
• Class A: Class A devices are capable of bidirectional communication.The gadget can receive information during the two brief time slots that follow the time slot when it is sending.Devices in this category can only receive data from the server after sending data first.Class A gadgets thus guarantee the highest levels of energy efficiency.One must wait for the next planned slot uplink if they want to transfer data from the server.
• Class B: Devices in this category are capable of bidirectional communication and have an additional time period during which they can receive data.The type B device can use a series of receiving slots initiated by a Beacon-type message delivered by the Gateway, in addition to the random time slots the type A device permits for data reception.A data transfer rate between 300 bps and 5.5 kbps is the result of this restriction.The duty cycle, another factor, should be less than 1%.The transfer rate is further constrained by these factors.Data flow from the sensor to the Gateway module is facilitated for more effective communication.At the physical layer, the LoRa modulation is utilized with the LoRaWAN protocol.The increased sensitivity of the receiver is the key benefit of utilising this modulation.
B. WEMOS D1 R1
WeMos is a business that creates inexpensive, efficient IoT gadgets for various projects and goods.WeMos D1 R1 is designed for simultaneous electronic controlling of electronic projects, wireless connectivity, interfacing and processing of sensor data, and wireless connectivity.The following are the details of the Arduino Wemos D1 R1: -It uses 3.
C. DC Motor
A DC power supply is used to generate torque on DC motors, which have the ability to transform electrical energy into mechanical energy.The two main categories of DC motors are the externally regulated type and the self-exciting kind.An electric motor classified as a DC motor must run on direct current.After that, the generated direct current is transformed into mechanical energy, such as rotation or motion.DC motors are made up of the rotor and stator, among other parts [81], [82].
III. PROPOSED METHODS
This study uses Wemos D1 R1, Arduino UNO, ESP32 and LoRa.Besides that, the use of several software, such as Arduino IDE and IOT OnOff.The control system design developed is shown in Fig. 1.In this design, 2 LoRa modules are used, the first as a LoRa Transmitter and the second as a LoRa Receiver.Before that, it is necessary to know the components contained in the transmitter including Ardunio UNO R3, ESP32, Lora Sx1278 module, L298n driver, DHT11 sensor, INA219 sensor, Encoder sensor and DC motor.Meanwhile, the receiver includes Wemos D1 R1, Buzzer, and Oled.The Detail of hardware LoRa can be seen in Fig. 2.
IV. RESULTS AND DISCUSSION
In Fig. 3 it can be seen that the DC motor control in this study is based on room temperature.Arduino UNO has digital pins that can be applied to the PWM method.In setting the PWM there is a range from 0 to 255 to control the speed of the DC motor based on temperature by determining the duty cycle When the room temperature is below 20 ˚C, the duty cycle value is 0%.Because the Duty Cycle value is 0%, the DC motor speed does not rotate.If the room temperature is between 20 ˚C to 25˚C then the duty cycle value is 25%.If the temperature is between 25 ˚C to 30˚C then the duty cycle value is 50%.If the temperature is between 30 ˚C to 40˚C then the duty cycle value is 75%.If the temperature is above 40˚C then the duty cycle value is 100%.the DC motor runs to its maximum and triggers an alarm.
In addition, the application used will provide a notification.Details regarding the speed of the DC motor can be seen in Table II and the chart of output can be seen in Fig. 4. The display of application can be seen in Fig. 5.When the temperature changes, the data is read by the Dht11 sensor, this causes the voltage, current and speed of the DC motor to change.Changes in temperature will result in changes in voltage, current and speed as shown in Fig. 4. When the temperature exceeds the upper limit, namely 40 ˚C, the application will display a notification as shown in Fig. 5 To ensure accurate data, each sensor used in this study is compared to a measuring instrument.In this study there were 2 sensors to be measured, namely the encoder sensor and the INA 219 sensor.The output of the encoder sensor readings was compared to the tachometer.while the INA 219 sensor is compared to a multimeter.From Table III it can be seen that the largest error value is 0.023%.while the smallest error value is equal to 0.002%.while the average value of the encoder sensor test for a temperature of 20˚C < t < 25˚C is 0.0114.Testing of the encoder sensor for a temperature of 25˚C < t < 30˚C can be seen in Table IV.The average error value in Table III is 0.0338%.Table V is a detail of the encoder sensor test at a temperature of 30˚C < t < 40˚C with an average error value of 0.07%.The final encoder sensor test is for temperatures >40˚C.Table VI is a detail of the test for temperatures > 40˚C with an average error value of 0.0418.The research also conducted an experiment on the distance between the receiver and transmitter using the lineof-sight (LOS) model.Line-of-sight is presented as a testing ground for qualitative reasoning techniques developed in the temporal and spatial domains and also have potential applications in the field of computer vision.The experimental scheme uses 2 variations, namely without obstacles and with obstacles.Without obstacles, namely the condition of the transmitter and receiver are not obstructed by walls or roofs.on the other hand, with obstacles, namely the presence of obstacles, namely walls and roofs.LOS scheme horizontally can be seen in Fig. 6.Based on
V. CONCLUSION
This research presents DC motor monitoring using Long Range (LoRa) technology.The LoRa used in this research consists of a LoRa Transmitter and a LoRa receiver.In this research, LoRa device testing uses various problems.The first problem is the unobstructed condition.while the second condition is with obstacles.From these two experiments it is known that communication between the LoRa Transmitter and LoRa receiver runs well.Apart from that, the sensors used in this research were also compared with measuring instruments.This research used two sensors to be measured, namely the encoder and the INA 219 sensor.From experiments on the encoder sensor, an error was obtained with an average error of 0.03925%.Meanwhile, in the INA 219 sensor experiment, by comparing the voltage, the average error was 0.049%.From the experimental results it is known that the tool works well.Research using LORA needs to sharpen the complexity of the problem, such as distances that approach the LORA distance limit.In addition, it is necessary to study disturbances that have a major contribution.Application in rural or urban areas needs to be deepened.
Application of LORA and IOT as DC motor monitoring − Controlling DC Motor based on Temperature − The achievement of the proposed method is tested with several variations of cases This article has a structure, namely the second section, which discusses the ideas of LoRa, WEMOS, and DC motors.The results and discussion are in the third section.Drawing conclusions from the research is the final step.ISSN: 2715-5072 55 Dimas Ahmad Nur Kholis Suhermanto, Monitoring DC Motor Based on LoRa and IOT 3 V as the operational voltage.-It is includes dedicated pins for i2c, one-wire, PWM, SPI, and interrupt functionalities among its 16 digital IO pins.-It has a single analogue input or ADC pin and is micro-USB-based for programming purposes.-4Mbytes of flash memory.-Timebase: 80 MHz.-IC CH340G is used for serial communication.
Fig. 6 .
Fig. 6.System testing scheme with horizontal LOS case studies (a) Without Obstacle (b)With Obstacle
•
Class C: Devices in this category are capable of bidirectional communication and have set times during which they can continuously receive information.A class C device can only send information; it cannot receive data at any other time.Each module must implement the class A communication method in accordance with the LoRa standard, but the particular features of the other categories are optional.The following are some benefits of LoRaWAN technology: Low-Power Wide-Area Networks) standard known as LoRa proposes the trade-off of a slower data rate for greater communication ranges.The standard is thus appropriate for applications where little data is sent and the data gathered from the sensors does not change significantly over time.The ERP (Effective Radiated Power) of LoRa devices in the 867-869 MHz range of the European frequency band is restricted to 25 mW.The reduction in communication channel bandwidth has led to the imposition of this restriction.
TABLE I .
THE TECHNOLOGIES' FREQUENCY AND TOPOLOGY FOR COMMUNICATION Dimas Ahmad Nur Kholis Suhermanto, Monitoring DC Motor Based on LoRa and IOT
TABLE II .
DC MOTOR TEST DATA
TABLE VI .
ENCODER SENSOR TEST RESULTS WITH TEMPERATURE > 40˚C
Table
VII, the average error percentage of the INA219 sensor voltage obtained at a temperature of 20˚C < t < 25˚C is 0.0994%.Testing at a temperature of 25˚C < t < 30˚C can be seen in TableVIIIwith an average error percentage of 0.0724%.Table IX is a detail of testing the INA219 sensor voltage with a temperature of 30˚C < t < 40˚C with an average error percentage of 0.0154%.Testing the voltage on the INA219 sensor for temperatures > 40˚C can be seen in TableX.
TABLE VII .
VOLTAGE TEST RESULTS ON THE INA219 SENSOR WITH A TEMPERATURE OF 20˚C < T < 25˚C
TABLE VIII .
VOLTAGE TEST RESULTS ON THE INA219 SENSOR WITH A TEMPERATURE OF 25˚C < T < 30˚C Table XI with the same distance every 1 meter with obstacles or no obstacles.LoRa Transmitter and LoRa Receiver can connect and communicate properly.
TABLE XI .
HORIZONTAL LOS (LINE OF SIGHT) TESTING | 3,539.2 | 2024-01-04T00:00:00.000 | [
"Engineering",
"Computer Science"
] |
Rapid and Easy High-Molecular-Weight Glutenin Subunit Identification System by Lab-on-a-Chip in Wheat (Triticum aestivum L.)
Lab-on-a-chip technology is an emerging and convenient system to easily and quickly separate proteins of high molecular weight. The current study established a high-molecular-weight glutenin subunit (HMW-GS) identification system using Lab-on-a-chip for three, six, and three of the allelic variations at the Glu-A1, Glu-B1, and Glu-D1 loci, respectively, which are commonly used in wheat breeding programs. The molecular weight of 1Ax1 and 1Ax2* encoded by Glu-A1 locus were of 200 kDa and 192 kDa and positioned below 1Dx subunits. The HMW-GS encoded by Glu-B1 locus were electrophoresed in the following order below 1Ax1 and 1Ax2*: 1Bx13 ≥ 1Bx7 = 1Bx7OE > 1Bx17 > 1By16 > 1By8 = 1By18 > 1By9. 1Dx2 and Dx5 showed around 4-kDa difference in their molecular weights, with 1Dy10 and 1Dy12 having 11-kDa difference, and were clearly differentiated on Lab-on-a-chip. Additionally, some of the HMW-GS, including 1By8, 1By18, and 1Dy10, having different theoretical molecular weights showed similar electrophoretic mobility patterns on Lab-on-a-chip. The relative protein amount of 1Bx7OE was two-fold higher than that of 1Bx7 or 1Dx5 and, therefore, translated a significant increase in the protein amount in 1Bx7OE. Similarly, the relative protein amounts of 8 & 10 and 10 & 18 were higher than each subunit taken alone. Therefore, this study suggests the established HMW-GS identification system using Lab-on-a-chip as a reliable approach for evaluating HMW-GS for wheat breeding programs.
Introduction
Wheat (Triticum aestivum L.) is an important staple food crop, which provides substantial amounts of various components, such as proteins and vitamins, that are essential for human consumption and health and the industry. In addition to being an important source of energy, wheat serves as an ingredient for diverse foods due to the presence of the seed storage protein gluten [1], which is built up of subunits and imparts elasticity to a dough [2].
Gluten that affects the end-use quality of common wheat, also known as bread wheat, consists of glutenins and gliadins. The glutenins are protein aggregates divided into high-molecular-weight (HMW-GS, 70~140 kDa) and low-molecular-weight (LMW-GS, 30~50 kDa) subunits [3]. HMW-GS represent approximately 10% of the total seed storage proteins and critically determine the strength and elasticity of dough with LMW-GS [4]. An x-type and a y-type subunits of HMW-GS encoded in each Glu-1 locus are located on the long arms of chromosome 1 on the A, B, and D genomes of bread wheat [5]. These genes on the Glu-1 loci are tightly linked, considering that the physical distance between an x-type and a y-type subunits ranges higher than that observed on SDS-PAGE, and the electrophoresis pattern of HMW-GS on Lab-on-a-chip was different from SDS-PAGE. Useful molecular markers, such as Kompetitive Allele-Specific PCR (KASP) and single-nucleotide polymorphism (SNP) markers, were developed to identify the allelic variation for the Glu-1 loci [32,33]. These molecular markers are useful for the high-throughput identification of allelic variations of genetic resources and the screening of breeding lines for quality improvement [33]. However, they do not have the ability to examine all allelic variations at the same time. Therefore, owing to the above, there is a sustaining need for the development of allelic specific markers for HMW-GS identification [33].
Reports indicate that the annual wheat flour consumption per capita in Korea is about 32 kg, the second largest after rice. However, the self-sufficiency rate is less than 2%. Developing new cultivars with a good quality for making noodles and bread are required to improve the self-sufficiency rate in Korea. In this study, we established a numbering system of HMW-GS for Lab-on-a-chip to easily identify the HMW-GS in a relatively short time. The newly developed numbering system was verified to be effective and reliable and could be applied in breeding programs.
Identification of 1Ax1 and 1Ax2* at the Glu-A1 Locus by Lab-on-a-Chip
The current study established a numbering system for HMW-GS identification with Lab-on-a-chip technology parallel to the widely used SDS-PAGE. We included wheat cultivars with known HMW-GS compositions using standard cultivars (Table 1). These cultivars covered three, six, and three of the allelic variations at the Glu-A1, Glu-B1, and Glu-D1 loci, respectively, which are commonly used in wheat breeding programs. When comparing the electrophoresis patterns of HMW-GS by Lab-on-a-chip, 1Dx2.2 was by default set as the upper marker because of its large size compared to the systematic upper marker (240 kDa). Therefore, the upper marker in the standard cultivars harboring 1Dx2.2, such as Uri and Seodun, was adjusted before analyzing the HMW-GS composition ( Figure 1). The recorded molecular weights of HMW-GS on Lab-on-a-chip were above 120 kDa [31]. Table 1. High-molecular-weight glutenin subunit (HMW-GS) composition used as the standard varieties in this study.
Electrophoresis Patterns of HMW-GS Encoded by the Glu-B1 Locus
In order to validate the electrophoresis banding patterns of HMW-GS encoded by the Glu-B1 locus on Lab-on-a-chip, we used six standard wheat cultivars covering six alleles. To determine the molecular weight of 1Bx7 on Lab-on-a-chip, we first examined the HMW-GS of Petrel, in which the 1Bx7 subunit was only translated from the Glu-B1 locus. The protein band size of 177.8 kDa in Petrel was identified as 1Bx7 on Lab-on-a-chip, which was similar with the molecular weight of the 1Bx7 (177 kDa) subunit reported earlier [36]. Additionally, similar protein bands were identified in wheat varieties Seodun and Anbaek, carrying the 1Bx7 subunit. Meanwhile, Seodun and Anbaek harbored different HMW-GS at the Glu-1B locus, 1Bx7 + 1By8 and 1Bx7 + 1By9, respectively. When the electrophoresis patterns of HMW-GS in Seodun and Anbaek were compared to one another, 133.9 kDa and 129.9 kDa of the protein bands were detected ( Figure 1). A protein band of 133.9 kDa was examined in varieties carrying the 1By8 subunit, such as Jokyung and Uri. Owing to the above, these results suggest that the detected protein band size of 177.8 kDa in Petrel indicates 1Bx7, while the one of 133.9 kDa in Seodun and 129.9 kDa in Anbaek are 1By8 and 1By9, respectively ( Figure 1).
The wheat variety Joeun harboring 1Bx13 + 1By16 at the Glu-B1 locus was used to establish the electrophoresis pattern of 1Bx13 + 1By16. The 171.7 kDa and 141.4 kDa protein bands in Joeun were clearly different compared with Seodun and Anbaek, carrying 1Bx7 + 1By8 and 1Bx7 + 1By9, respectively ( Figure 1 and Table S1). The molecular weight of 1Bx13 was higher than that of the 1Bx16 protein observed on SDS-PAGE. Thus, the upper and lower bands in Joeun are considered as 1Bx13 and 1By16, respectively. The molecular weight of 1Bx13 is about 1 kDa larger than that of 1Bx7. The 1Bx13 protein band was located a little more upper than 1Bx7 on Lab-on-a-chip ( Figure 1). Meanwhile, 1By16 was positioned above 1By8 with 5-kDa difference on Lab-on-a-chip and was clearly distinguished from other HMW-GS.
To determine the position of 1Bx17 + 1By18 and 1Bx7 OE + 1By8 on Lab-on-a-chip, we used Joongmo2008 and Vesna possessing 1Bx17 + 1By18 and 1Bx7 OE + 1By8 at the Glu-B1 locus, respectively. The electrophoresis pattern of Joongmo2008 carrying both 1Bx17 + 1By18 was compared with Petrel, which harbors 1Bx7. The molecular weight of the 1Bx17 protein on Lab-on-a-chip was 159.6 kDa and was clearly differentiated from other subunits ( Figure 1). However, we could not distinguish the 1By18 and 1Dy10 protein bands on Lab-on-a-chip. Nevertheless, the 1Bx17 + 1By18 subunit could be identified using the electrophoresis position of the 1Bx17 subunit from other HMW- To validate the numbering system for the Glu-A1 locus on the Lab-on-a-chip system, we initially compared the observed banding patterns in Jokyung, Keumgang, Uri, and Petrel. Wheat varieties Jokyung and Keumgang harbored the same HMW-GS except at the Glu-1 locus, while Uri and Petrel carried the 1Ax null subunit on the Glu-A1 locus. In addition, the gel image of Jokyung and Keumgang, which carry the 1Ax1 and 1Ax2* subunits, respectively, showed that only two protein bands of about 200.6 kDa and 192.4 kDa marked the difference between Jokyung and Keumgang ( Figure 1 and Figure S1, and Table S1). However, these protein bands could not be found in Uri and Petrel harboring the 1Ax null subunit. Therefore, these data revealed that the observed protein bands of 200.6 kDa and 192.4 kDa on Lab-on-a-chip are Glu-1Ax1 and Glu-1Ax2*, respectively.
Electrophoresis Patterns of HMW-GS Encoded by the Glu-B1 Locus
In order to validate the electrophoresis banding patterns of HMW-GS encoded by the Glu-B1 locus on Lab-on-a-chip, we used six standard wheat cultivars covering six alleles. To determine the molecular weight of 1Bx7 on Lab-on-a-chip, we first examined the HMW-GS of Petrel, in which the 1Bx7 subunit was only translated from the Glu-B1 locus. The protein band size of 177.8 kDa in Petrel was identified as 1Bx7 on Lab-on-a-chip, which was similar with the molecular weight of the 1Bx7 (177 kDa) subunit reported earlier [36]. Additionally, similar protein bands were identified in wheat varieties Seodun and Anbaek, carrying the 1Bx7 subunit. Meanwhile, Seodun and Anbaek harbored different HMW-GS at the Glu-1B locus, 1Bx7 + 1By8 and 1Bx7 + 1By9, respectively. When the electrophoresis patterns of HMW-GS in Seodun and Anbaek were compared to one another, 133.9 kDa and 129.9 kDa of the protein bands were detected ( Figure 1). A protein band of 133.9 kDa was examined in varieties carrying the 1By8 subunit, such as Jokyung and Uri. Owing to the above, these results suggest that the detected protein band size of 177.8 kDa in Petrel indicates 1Bx7, while the one of 133.9 kDa in Seodun and 129.9 kDa in Anbaek are 1By8 and 1By9, respectively ( Figure 1).
The wheat variety Joeun harboring 1Bx13 + 1By16 at the Glu-B1 locus was used to establish the electrophoresis pattern of 1Bx13 + 1By16. The 171.7 kDa and 141.4 kDa protein bands in Joeun were clearly different compared with Seodun and Anbaek, carrying 1Bx7 + 1By8 and 1Bx7 + 1By9, respectively ( Figure 1 and Table S1). The molecular weight of 1Bx13 was higher than that of the 1Bx16 protein observed on SDS-PAGE. Thus, the upper and lower bands in Joeun are considered as 1Bx13 and 1By16, respectively. The molecular weight of 1Bx13 is about 1 kDa larger than that of 1Bx7. The 1Bx13 protein band was located a little more upper than 1Bx7 on Lab-on-a-chip ( Figure 1). Meanwhile, 1By16 was positioned above 1By8 with 5-kDa difference on Lab-on-a-chip and was clearly distinguished from other HMW-GS.
To determine the position of 1Bx17 + 1By18 and 1Bx7 OE + 1By8 on Lab-on-a-chip, we used Joongmo2008 and Vesna possessing 1Bx17 + 1By18 and 1Bx7 OE + 1By8 at the Glu-B1 locus, respectively. The electrophoresis pattern of Joongmo2008 carrying both 1Bx17 + 1By18 was compared with Petrel, which harbors 1Bx7. The molecular weight of the 1Bx17 protein on Lab-on-a-chip was 159.6 kDa and was clearly differentiated from other subunits ( Figure 1). However, we could not distinguish the 1By18 and 1Dy10 protein bands on Lab-on-a-chip. Nevertheless, the 1Bx17 + 1By18 subunit could be identified using the electrophoresis position of the 1Bx17 subunit from other HMW-GS. In addition, the gel-like image of Vesna harboring 1Bx7 OE + 1By8 was investigated with the purpose of validating the electrophoresis position of 1Bx7 OE + 1By8 ( Figure 1). Interestingly, the electrophoresis pattern of 1Bx7 OE + 1By8 on Vesna was the same with Jokyung harboring 1Bx7 + 1By8, considering that 1Bx7 OE encoded by Glu-B1al are overexpressed by the insertion of 43 bp in the promoter and/or gene duplication of 1Bx7 [19,20]. We could not separate 1Bx7 OE + 1By8 from 1Bx7 + 1By8 with the gel-like image of Lab-on-a-chip.
Identification of the 1Dx5 + 1Dy10 Subunit from Other Subunits Encoded from the Glu-D1 Locus
To validate the electrophoresis pattern of HMW-GS encoded by the Glu-D1 locus, we compared the three kinds of alleles in the Glu-D1 locus: 1Dx2 + 1Dy12, 1Dx5 + 1Dy10, and 1Dx2.2 + 1Dy12. The molecular weights of 1Dx2 in Anbaek, 1Dx5 in Petrel, and 1Dx2.2 in Seodun were 229.0 kDa, 224.0 kDa, and 281.7 kDa, respectively ( Figure 1 and Table S1). We mentioned earlier that 1Dx2.2 was located above the systemic upper marker (240 kDa). The molecular weight of 1Dy10 in Petrel and 1Dy12 in Seodun were 133.7 kDa and 123.3 kDa, respectively. The data showed that 1Dx subunits encoded by the Glu-D1 locus were positioned above the 1Ax1 and 1Ax2*-encoded Glu-A1 locus, even though the theoretical molecular weight of the 1Dx subunits were smaller than the 1Ax subunits. The electrophoresis of 1Dy12 on Lab-on-a-chip was faster than the other HMW-GS and was placed at the bottom ( Figure 1). When the band sizes of these 1Dx were compared with one another, 1Dx5 was discriminated from 1Dx2. However, it was a bit difficult to distinguish 1Dx2 and 1Dx5, because the molecular weights of these proteins by Lab-on-a-chip were slightly different.
Discrimination of 8 & 10, 10 & 18, and 7 OE Subunit by Analyzing the Electropherogram
HMW-GS were clearly separated by SDS-PAGE. However, some of the HMW-GS were electrophoresed, including the one with the same molecular weight on Lab-on-a-chip: 1By8, 1By18, and 1Dy10. Additionally, the 1Bx7 OE subunit showed the same molecular weight with the 1Bx7 subunit. Therefore, it is difficult to distinguish subunits clearly with a gel-like image or molecular weights of Lab-on-a-chip (Figure 1, Tables S2 and S3). To distinguish the 1By8 + 1Dy10 (here referred to as 8 & 10) subunit from 1By8 and 1Dy10, we used three wheat varieties as the background to perform an electrophoresis by Lab-on-a-chip. The results showed that the gel-like image and electropherogram of the 1By8, 1Dy10, and 8 & 10 subunits had similar protein band positions based on their molecular weights on Lab-on-a-chip ( Figure 2 and Table S2). However, the protein amount indicated by the peak height on the electropherogram for 8 & 10 was about two-fold higher than that of 1By8 and 1Dy10 taken alone, despite the fact that the band peak height of 1Bx7 in Hanbaek carrying 8 & 10 was slightly lower than the one observed in Saekeumgang and Chapingo ( Figure 2). Interestingly, the recorded protein amount indicated by the peak height and relative protein quantity of 8 & 10 was about two-fold higher than that of 1Bx7 (Table 2). Besides, the 1Bx7 OE + 1By8 subunit was identified on Lab-on-a-chip in nine wheat varieties that carry similar HMW-GS, without 1Bx7 or 1Bx7 OE . The results indicated that all the selected varieties exhibited a similar electrophoresis pattern on the gel-like image, but 1Bx7 OE showed a high band density compared to that of the other HMW-GS ( Figure 4). The analysis of the electropherogram of the nine wheat varieties showed that 1Bx7 OE had peak heights of about two-fold higher than those of 1Dx5 or 8&10 ( Figure S2 and Table S4). Despite the fact that 1Bx7 OE and 1Bx7 showed similar protein sizes on Lab-on-a-chip, 1Bx7 OE could be distinguished from 1Bx7 by its protein amount (thick band) on the gel-like image and relative protein amount on the electropherogram in Lab-on-a-chip. Additionally, the discrimination of the 1By18 + 1Dy10 (referred to as 10&18) and 1Dy10 protein bands indicated in the electropherogram of Chapingo and Garnet, which harbor 1Dy10 and 10 & 18, respectively, revealed that the protein amount of 10 & 18 was about two-fold higher than that of 1Dy10 alone (Table 3). Meanwhile, the band peak height of 1Ax2* and 1Dx5 showed similar patterns ( Figure 3). Furthermore, 8 & 10 and 10 & 18 exhibited similar protein amounts ( Figure 3). Thus, it is believed that, when 8 & 10 and 10 & 18 are electrophoresed together on Lab-on-a-chip, these subunits could be distinguished by analyzing the relative protein amounts on an electropherogram ( Figure 3 and Table 3).
Besides, the 1Bx7 OE + 1By8 subunit was identified on Lab-on-a-chip in nine wheat varieties that carry similar HMW-GS, without 1Bx7 or 1Bx7 OE . The results indicated that all the selected varieties exhibited a similar electrophoresis pattern on the gel-like image, but 1Bx7 OE showed a high band density compared to that of the other HMW-GS (Figure 4). The analysis of the electropherogram of the nine wheat varieties showed that 1Bx7 OE had peak heights of about two-fold higher than those of 1Dx5 or 8&10 ( Figure S2 and Table S4). Despite the fact that 1Bx7 OE and 1Bx7 showed similar protein sizes on Lab-on-a-chip, 1Bx7 OE could be distinguished from 1Bx7 by its protein amount (thick band) on the gel-like image and relative protein amount on the electropherogram in Lab-on-a-chip.
HMW-GS Composition Identification of Genetic Resources by Lab-on-a-Chip
To examine whether the established numbering system with Lab-on-a-chip could be effectively used for HMW-GS identification in a wheat breeding program, we tested the HMW-GS of 121 varieties by both the gel-like image and electropherogram of Lab-on-a-chip. The data showed that 1Ax1, 1Ax2*, and 1Ax null at the Glu-A1 locus were found in 51, 56, and 14 verities with their respective molecular weights on Lab-on-a-chip of 201.5 kDa and 191.8 kDa ( Figure 5). Of all the HMW-GS at the Glu-B1 locus, the 1Bx7 + 1By9 or 1Bx17 + 1By18 subunits were detected at a high frequency compared to the composition of the other subunits. In essence, 34 varieties carried the 1Bx7 + 1By9 subunit, while the 1Bx13 + 1By16 and 1Bx7 OE + 1By8 subunits were found in 10 and 18 varieties, respectively. However, no variety carried an independent 1Bx7 subunit. The average molecular weight of these subunits encoded by the Glu-B1 locus ranged from 125.9-168.9 kDa on Lab-on-a-chip (Tables S5 and S6). Whereas 1Bx7 and 1Bx7 OE exhibited an average molecular weight of 166.6 kDa and 166.8 kDa, and 1Bx13 and 1Bx17 showed 168.9 kDa and 153.9 kDa, respectively. Subunits 1By8, 1By9, Plants 2020, 9, 1517 8 of 14 1By16, and 1By18 had 133.7 kDa, 125.9 kDa, 139.3 kDa, and 133.9 kDa, respectively. Moreover, 105, 3, and 13 of varieties harbored the 1Dx5 + 1Dy10, 1Dx2 + 1Dy12, and 1Dx2.2 + 1Dy12 encoded by the Glu-D1 locus. The average molecular weights of 1Dx5, 1Dx2, and 1Dx2.2 on Lab-on-a-chip were 210.8 kDa, 218.0 kDa, and 277.3 kDa, while the ones of 1Dy10 and 1Dy12 were 133.2 kDa and 122.4 kDa, respectively ( Figure 5). Figure 5). Figure 5).
Discussion
Recent decades have been marked by significant progress in the genetic characterization of gluten proteins in wheat using improved procedures or methods of protein fractionation and the higher availability of genetic resource stocks.
The Lab-on-a-chip electrophoresis system is a reliable and efficient technology for fast and easy protein separation and quantification. In addition, Lab-on-a-Chip equipment offers the unique comparative advantage of being deployable beyond the laboratory compared to conventional methods of protein analysis. However, it still needs to be established for the HMW-GS identification for wheat breeding. Earlier, a study conducted by Rhazi et al. [37] investigated the separation and quantification of HMW-GS in wheat using a high-throughput microchip capillary electrophoresis-sodium dodecyl sulfate (microchip CE) platform, the LabChip 90 system. Their study proposed that the microchip CE analysis could provide a comparable resolution and sensitivity to conventional RP-HPLC for the identification of HMW-GS but faster compared to the latter. In the same way, Uthayakumaran and his colleagues [38,39] proposed a similar separation and identification system of HWM-GS with a longer sample processing time using an Agilent 2100 Bioanalyzer with a Protein 200 + LabChip.
In addition, other studies have established a numbering system for HMW-GS identification by SDS-PAGE, based on the molecular weight of HMW-GS [5]. The HMW-GS were named by electrophoresis mobility on SDS-PAGE, and this system has been used for HMW-GS identification for years [2]. However, the HMW-GS identification system by SDS-PAGE was shown to be time-consuming and required a large gel system to clearly separate HMW-GS. Many research groups have tried to apply the Lab-on-a-chip system for HMW-GS identification, due to the fast and effective protein separation and quantification that offers the Lab-on-a-chip electrophoresis system [31,40]. However, the latter system is not commonly used for HMW-GS identification due to particular reasons [31]. On the one hand, the molecular weight of HMW-GS on Lab-on-a-chip was shown to be higher than that observed on SDS-PAGE [31].
In the current study, we distinguished and validated the exact molecular weights and quantified each HMW-GS on Lab-on-a-chip using standard wheat varieties and other genetic resources. On the one hand, the theoretical molecular weight of HMW-GS ranged between 70-150 kDa, but the molecular weights investigated in standard varieties and genetic resources on Lab-on-a-chip ranged between 120-280 kDa ( Figure 5). On the other hand, the electrophoresis patterns of 1Ax1 and 1Ax2* on the Lab-on-a-chip system were found to be different from the one on SDS-PAGE. The 1Ax1 and 1Ax2* subunits were detected above the 1Dx2 and 1Dx5 subunits in SDS-PAGE, considering its theoretical molecular weight. However, these subunits were positioned below the 1Dx2 and 1Dx5 subunits on Lab-on-a-chip ( Figure 1) [2,40]. It has been reported that Tris-acetate acrylamide gel is ideal for separating large-molecular-weight proteins. In our previous study, when HMW-GS proteins were electrophoresed in 3-8% of gradient Tris-acetate acrylamide gel on SDS-PAGE, the 1Ax1 and 1Ax2* proteins were detected at a lower position than Dx2 and Dx5 in this gel condition like Lab-on-a-chip [35]. This phenomenon was also observed when acid polyacrylamide gel (A-PAGE) was applied for HMW-GS separation [30]. In addition, the 1Dy10 subunit was clearly separated from the 1By8 and 1By18 subunits on SDS-PAGE. The application of another Lab-on-a-chip system, the Experion Pro 260 assay kit (Bio-Rad, Hercules, CA, USA), revealed that the electrophoresis patterns of HMW-GS were different from those observed with a Protein 230 assay kit (Agilent Technologies, Palo Alto, CA, USA). The 1Dy10 protein was discriminated from 1Bx8, but 1Bx8 was found to overlap with 1Dy12 in the Experion Pro 260 assay kit [41]. We did not find the main reason for the 1Dy10 subunit to be slowly electrophoresed on Lab-on-a-chip with the Protein 230 assay kit. However, it is thought that it could be due to the buffer and gel system of the Protein 230 assay kit for Lab-on-a-chip. The 240-kDa protein considered as a systemic upper marker is used to determine the protein molecular weight on Lab-on-a-chip. Any protein that could be detected in the sample having a larger size than the upper marker was, by default, set as the systemic upper marker. The molecular weight of 1Dx2.2 on Lab-on-a-chip was higher than the systemic upper marker, which led to the adjustment of the systemic upper marker position prior to analyzing the HMW-GS composition (Figure 1). Jang et al. [5] evaluated the HMW-GS composition wheat varieties, and they used 16 standard wheat varieties for HMW-GS identification and 38 Korean wheat cultivars by RP-HPLC and SDS-PAGE [5]. In our case, to establish the HMW-GS identification in the wheat breeding program on Lab-on-a-chip, we used nine varieties of which the HMW-GS were identified earlier [5]. We then screened Vesna carrying 1Bx7 OE by the Kompetitive allele-specific PCR (KASP) assay with the Bx7 OE _866_SNP marker and later used it for 1Bx7 OE subunit identification [35]. These varieties covered three, six, and three allelic variations at the Glu-A1, Glu-B1, and Glu-D1 loci, respectively (Figure 1). They did not cover all allelic variations of the Glu-1 loci, but we thought that the allelic variations used in this study were enough to be applied in wheat breeding programs. In the present study, four varieties were used to specify the molecular weight of 1Ax1, 1Ax2*, and 1Ax null encoded by the Glu-A1 locus. We first found 1Ax1 of Jokyung and 1Ax2* of Keumgang by excluding the same positional proteins after comparing with Uri and Petrel harboring the 1Ax null. The molecular weights of 1Ax1 and 1Ax2* were lower than 1Dx2 and 1Dx5 on Lab-on-a-chip ( Figure 1) [31,41]. Previously, diverse alleles were reported in the Glu-B1 locus. We determined the molecular weight and relative quantity of six alleles in the Glu-B1 locus on Lab-on-a-chip [42]. The molecular weight of HMW-GS encoded by the Glu-Bl locus was shown in the following order: 1Bx13 ≥ 1Bx7 = 1Bx7 OE > 1Bx17 > 1By16 > 1By8 = 1By18 > 1By9. The molecular weight of 1Bx13 was slightly higher than that of 1Bx7 and 1Bx7 OE , and the electrophoresis mobility of 1By8 was similar with 1By18 on Lab-on-a-chip.
In the case of HMW-GS on the Glu-D1 locus, 1Dx2.2 + 1Dy12 was clearly observed, and this was facilitated by the position of 1Dx2.2 above the systematic upper marker. The molecular weight of 1Dx2 was 7 kDa higher than 1Dx5, though, in similar cases, it was not always easy to distinguish 1Dx2 from 1Dx5 on Lab-on-a-chip. However, 1Dy12 was faster electrophoresed than 1Dy10. Thus, 1Dx2 + 1Dy12 and 1Dx5 + 1Dy10 were identified by examining 1Dy12 on a gel-like image of Lab-on-a-chip ( Figure 1). Subunits 1By8 and 1Dy10 were clearly differentiated by Lab-on-a-chip with the Experion Pro 260 assay kit [41]. Three of the HMW-GS, 1By8, 1By18, and 1Dy10, were electrophoresed as having similar molecular weights on the Lab-on-a-chip with the Protein 230 assay kit (Figures 2 and 3). It is then believed that the difference between the Experion Pro 260 assay kit and Protein 230 assay kit of Agilent could be partly explained by different buffer systems for Lab-on-a-chip. Despite the existing differences in the protein electrophoresis mobility of HMW-GS between manufacturers, the 8 & 10 and 10 & 18 bands could be distinguished from 1By8, 1By18, and 1Dy10 alone by analyzing the relative protein quantity of the electropherogram (Figures 2 and 3). Additionally, no reports could be found elaborating on the electropherogram analysis to identify HMW-GS currently, but the electropherogram analysis was found to be an important way for clear HMW-GS identification, such as 8 & 10.
Besides, the reported molecular markers were useful for high-throughput HMW-GS identification, and KASP assay makers for HMW-GS identification were recently developed for rapid genotyping [33,35]. However, it still requires independent experiments for each HMW-GS. Lab-on-a-chip takes about 25 min to process 10 samples for HMW-GS identification, and approximately 120 samples could be tested in a day [31]. All HMW-GS could be identified and quantified at the same time. Therefore, this study established an HMW-GS identification system for Lab-on-a-chip, with standard varieties covering diverse HMW-GS. The Lab-on-a-chip system is relatively easier and faster than SDS-PAGE and RP-HPLC. Nevertheless, downstream studies are required to validate for minor alleles of HMW-GS. Owing to the above, this study suggests that the Lab-on-a-chip system could be served as a reliable and effective technology to identify and quantify the HMW-GS for the wheat breeding program.
HMW-GS Composition Identification of Genetic Resources by Lab-on-a-Chip
Nine wheat varieties covering diverse HMW-GS were used to develop the numbering system for the HMW-GS identification by Lab-on-a-chip (Table 3). These varieties included three alleles at the Glu-A1 locus, six alleles at the Glu-B1 locus, and three alleles at the Glu-D1 locus. Three, four, and eight varieties were used to evaluate the protein amounts of 8 & 10, 10 & 18, and 7 OE subunits, respectively, on an electropherogram. Another set of 121 wheat varieties were obtained from the National Agrobiodiversity Center, National Institute of Agricultural Science, Rural Development Administration (RDA), Republic of Korea. These varieties were used to examine and verify the efficiency and reliability of the newly developed numbering system for HMW-GS identification.
The glutenin was extracted from wheat flour following the procedure reported by Van Den Broeck et al. [43]. Briefly, a mixture of approximately 100 mg of flour and 1 mL of 50% (v/v) propanol was incubated for 30 min at 65 • C, followed by centrifugation at 10,000× g for 5 min, and the supernatant containing gliadin was discarded. Then, the precipitate was suspended in 0.7-mL 80-mM Tris-HCl (pH 8.0) containing 2% SDS and 1% (w/v) dithiothreitol (DTT) and incubated at 65 • C for 30 min. The mixture was centrifuged at 10,000× g for 5 min, and 0.3 mL of the buffer containing 1% DTT was added and incubated at 65 • C for 15 min. After centrifugation at 10,000× g for 5 min, the supernatant was collected and used for downstream analysis by Lab-on-a-chip.
The Protein 230 assay kit (Agilent Technologies, Palo Alto, CA, USA) was used for HMW-GS identification. A total of 12 µg (3 µg/µL) of extract glutenin proteins were mixed with 2 µL of denaturing solution. After heating at 95 • C for 5 min, 84 µL of deionized water was added to the sample tube. Six microliters of diluted samples were loaded on the chip. Then, the diluted samples were separated on the 2100 Bioanalyzer (Agilent Technologies, Palo Alto, CA, USA), based on gel electrophoresis principles replicated onto a chip format following the manufacturer's instructions [44]. The upper marker (240 kDa) was adjusted before analyzing the HMW-GS for varieties harboring 1Dx2.2. The protein concentration was quantified with electropherogram on the 2100 Expert program (Agilent Technologies, Palo Alto, CA, USA). Samples were collected in triplicate for each wheat variety used in the study. The values of protein amounts obtained from different band picks were compared using the Student's t-test (p < 0.05).
Glutenin Proteins Extraction and HMW-GS Composition Identification of Genetic Resources by SDS-PAGE
Glutenin subunits were also separated by sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) following the procedure described earlier [45]. Briefly, the separation gel contained 1.5-M Tris-HCl (pH 8.8) and 0.27% SDS. Gels were made of 7.5% (w/v) acrylamide and 0.2% (w/v) bis-acrylamide. The stacking gel was made of 0.25-M Tris-HCl (pH 6.8), 0.2% SDS, and 7.5% (w/v) acrylamide and 0.2% (w/v) bis-acrylamide. Wheat flour was suspended in 300-mL 0.25-M Tris-HCl buffer (pH 6.8), containing 2% (w/v) SDS, 10% (v/v) glycerol, and 5% 2-mercaptoethanol, followed by shaking for 2 h at room temperature. Then, the slurry was heated for 3 min at 95 • C, and the supernatant was subjected to SDS-PAGE. The HMW numbering system of glutenin subunit bands and that for the allelic classification at different loci previously proposed by Payne and his colleague [11] were used in the current study. For the determination of the electrophoretic mobility of each HMW glutenin subunit by SDS-PAGE, standard wheat varieties that included the spectra of the subunits expected were used. Thus, the overall quality scores of HMW glutenin subunits for a particular variety could be obtained as the sum of the scores of each individual subunit and compared with the standard bread-making quality of the wheat varieties [46]. | 7,092.4 | 2020-11-01T00:00:00.000 | [
"Agricultural and Food Sciences",
"Engineering"
] |
Ultrafast triggered transient energy storage by atomic layer deposition into porous silicon for integrated transient electronics †
Here we demonstrate the fi rst on-chip silicon-integrated recharge-able transient power source based on atomic layer deposition (ALD) coating of vanadium oxide (VO x ) into porous silicon. A stable speci fi c capacitance above 20 F g − 1 is achieved until the device is triggered with alkaline solutions. Due to the rational design of the active VO x coating enabled by ALD, transience occurs through a rapid disabling step that occurs within seconds, followed by full dissolution of all active materials within 30 minutes of the initial trigger. This work demonstrates how engineered materials for energy storage can provide a basis for next-generation transient systems and highlights porous silicon as a versatile sca ff old to integrate transient energy storage into transient electronics.
Introduction
Transient electronics represents a class of devices where a trigger can be used to dissolve or destroy a device and any information it contains.The premise of a transient system is to exhibit stable and invariant device performance until an external trigger is applied, which can be in the form of pH, light, temperature, or a combination of these stimuli. 1,2The trigger initiates a series of reactions or mechanisms in the device that partially or fully dissolves the device, and renders the device inoperable in a manner that destroys the device and any information it may contain.There are many applications that can benefit from transient devices, ranging from information-sensitive electronic devices to biodegradable medical applications.Such a vast array of application areas also brings the requirement of diverse transient properties, including time of dissolution, controlled toxicity, and the type of external trigger source.For example, biological or medical applications may be more centered on toxicity of dissolution products, whereas surveillance and spy applications require fast disablement and dissolution times.
Recent efforts in the development of transient technology have been concentrated in small electronics [3][4][5] usually centered on silicon materials, silicon based photovoltaics, 1 medical energy harvesters, 6 bioresorbable electronics, 7,8 and biodegradable primary batteries. 9The first rechargeable power source with transient behavior has been described by Fu et al. which demonstrates a rechargeable lithium ion battery that dissolves through a chemical cascade reaction. 10This elegant design involves a power source packaged separately from other electronics or systems that it powers.Building from this work, the intersection of transient energy storage with integrated silicon-based systems could enable facile design of integrated silicon transient electronics and power systems.
2][13][14][15] This presents a challenge for transient behavior since the robust electrochemical stability of carbon leaves it incompatible with triggers used for transience, and incomplete passivation of the surface (that would enable transience) has the adverse effect of compromised function of the device prior to being triggered. 15,160][31] Vanadium oxide has also been shown to dissolve in basic solutions, thus making it suitable for transient applications. 10,32,33herefore, in this report, we demonstrate the first integrated silicon-based on-chip energy storage system that exhibits transient behavior.This builds from vanadium oxide coated onto the interior of porous silicon that is electrochemically etched into a bulk silicon chip.By controlling the ALD deposition profile in a transient scaffold, ultrafast deactivation of the device occurs when exposed to an alkaline trigger solution, and full dissolution of all components occurs within 30 minutes.This identifies a general route of combining the versatility of ALD with porous silicon transient materials to design integrated silicon transient power storage systems.
Results and discussion
Schematic representation of the concept of an integrated device as shown in Fig. 1 gives insight into fabrication and the transient behavior of the fabricated device.The active energy storage material (VO x ) is deposited to play a dual role that both (1) inhibits corrosion-based deactivation of nanoscale silicon in electrochemical environments, and (2) provides active redox storage as an electrochemical capacitor.This architecture is designed so that in a triggering environment, the VO x will dissolve and expose the unstable porous silicon material to corrosive conditions that dissolve and remove all materials except the bulk silicon.However, in the electrolyte environment, the VO x /porous silicon system will exhibit stable, invariant performance.Here, porous silicon plays a role of a tunable, high surface area transient template for the coating of active material, and can be integrated directly into bulk silicon materials using standard semiconductor processing technology.This means that this material can be processed into the backside of silicon electronics, or even coupled with other silicon-based transient electronics, which is schematically illustrated in Fig. 1.A gel based PVA/0.5 M LiClO 4 electrolyte (ESI †) was used as an electrolyte to couple VO x coated porous silicon transient devices.Additionally, for this system to be fully transient, a polyethylene oxide (PEO) based separator (ESI †), which dissolves in the aqueous trigger media, was used.This leads to a device that can be a functional energy storage material until an external trigger (1 M NaOH) disables and disintegrates the device.
A key aspect of this transient device design is the ALD deposition of VO x materials. 34,35To accomplish this, we utilized sequential pulses of VO(OC 3 H 7 ) 3 and H 2 O with 2 second residence times.Due to the diffusion-limited growth of ALD coatings on the interior of high aspect ratio nanoporous materials, longer residence times are required for uniform coatings. 36owever, to design a material with optimized transient performance, we reverted to shorter residence times which yield a thickness gradient from top of the porous material to the bottom.Thickness of the films was measured based on ellipsometry of films coated on planar silicon surfaces (ESI †) and penetration of the VO x coatings was confirmed based on scanning electron microscopy (SEM) characterization.Full details of the ALD chemistry and experimental parameters are detailed in the ESI.† Uncoated porous silicon with 5 µm deep pores is shown in Fig. 2a.VO x deposition on the porous silicon for 100 and 400 ALD cycles and the resulting pore morphology is shown in Fig. 2b and c, respectively.The total areal mass loading of vanadium oxide in these composites is 0.057 mg cm −2 and 0.229 mg cm −2 for 100 and 400 ALD cycles, respectively.Evidence of a thicker VO x coating is visually apparent in the case of 400 ALD cycles when compared to 100 ALD cycles, noting that the underlying porous silicon template is identical between Fig. 2a-c.Energy dispersive X-ray (EDS) on the crosssection of the porous silicon was performed, and line profiles corresponding to EDS scans are shown in Fig. 2d and e.Based on EDS analysis, relative weight and atomic percentages of vanadium and oxygen are 64% vanadium and 36% oxygen (weight) and 36% and 64% (atomic), respectively.To further analyze the state of the VO x material, we annealed ALD deposited VO x in air at 450 °C for 1 hour and characterized the material using Raman spectroscopy (see ESI †) before and after annealing.Due to the highly Raman active modes of crystalline vanadium oxide, Raman analysis demonstrates the transition from a non-crystalline or amorphous state of vanadium oxide to crystalline V 2 O 5 . 35We therefore associate the ALD material to a non-stoichiometric and non-crystalline form of vanadium oxide which we label as VO x .Notably, a greater concentration of VO x is observed at the top of the porous silicon material, and this slightly decreases near the base of the porous silicon.Whereas this effect is due to diffusion-limited ALD growth that can be improved with higher vapor pressure precursors, longer residence times, or higher temperatures, such asymmetric thickness profiles bring a distinct benefit to the function of a transient device.In this design, the thin coating of the VO x active material near the base of the porous silicon will dissolve away more rapidly than the thicker coating near the top of the material, (Fig. 2f ) and this will in-turn expose the base of the porous siliconwhich is highly reactive in aqueous basic solutions.This causes the rapid detachment of the active material from the base (within seconds), which deactivates the energy storage function of the material.There- fore, transience occurs through a two-step process with rapid deactivation on the timescale of a few seconds, and eventual full dissolution on longer scales of minutes.To explicitly demonstrate this mechanism, we produced a control sample where the residence time was increased, leading to a uniform VO x coating in the porous silicon.This system exhibited no rapid deactivation and a 4× longer transience time than the gradient coated samples (Fig. S4 and S5 †).Notably, this highlights the principle that engineered coating processes can enable wide versatility in the utility of transient functions in energy storage systems.
Electrochemical measurements were carried out to assess the energy storage capability of these transient devices.This device was tested in a symmetric two-electrode configuration to assess the storage capability of the VO x , as three-electrode measurements can over exaggerate the measured results. 34,37s the focus of this work is a transient, integrated energy storage platform, further development asymmetric designs can build upon the same general approach and enable a tunable range of operating voltages relative to that measured for VO x .To characterize the electrochemical performance of these devices, we compared the same ALD parameters using 300, 400, and 500 ALD cycles, which corresponds to ∼15, 19, and 24 nm thick coatings based on extrapolated from ellipsometry analysis (ESI †).Cyclic voltammograms (CV) taken at 100 mV s −1 and galvanostatic charge-discharge curves taken at 0.1 mA cm −2 corresponding to VO x coated onto porous silicon templates at these three thicknesses are shown in Fig. 3a and b.Analysis of both CV and charge-discharge curves emphasizes that a 19 nm thick VO x layer corresponding to 400 ALD cycles exhibits the most fully developed stable redox peaks centered around 0 V which is expected for faradaic redox capacitors in symmetric two-electrode configurations.Compared to other thicknesses, this coating thickness also minimizes the resistance polarization in the device that is attributed to power loss on fast charge cycling.This coating thickness also leads to the highest total measured capacitance based on galvanostatic measurements.As the electrolyte used in this system is PVA/ LiClO 4 , the redox behavior of VO x arises from the near-surface redox intercalation of Li + represented by: Two key variables represent the total energy storage capability of these materials: (1) the total mass of VO x coated onto porous silicon that is optimized with thick coatings, and (2) the ability for the electrolyte to penetrate into the pore struc- ture that is inhibited when thick coatings are applied.For the sample with 400 ALD cycles, specific capacitance based on charge-discharge curves is measured as ∼21 F g −1 .Whereas this specific capacitance is lower than that achieved using materials such as RuO 2 or Ni(OH) 2 , the VO x provides a distinct medium between transient behavior, the surface passivation role for electrochemical use of porous silicon, and a versatile ALD chemistry that can enable the ultrafast transience on silicon templates.These thickness studies hence establish an optimized VO x thickness for this system near ∼19 nm, and less than 24 nm.Charge-discharge tests at various current densities were performed on the devices with 19 nm VO x coatings.The discharge curves at various current densities (Fig. 3c) indicate stable near-surface redox intercalation reactions represented by the plateau region.Increasing current densities decreases the plateau region corresponding to decreased charge storage which is a result of the nature of these coated 3D porous electrodes.At higher rates, not all redox active VO x sites on the 3D porous silicon structure are accessed resulting in decreased charge storage.The VO x coatings on the porous silicon substrate were optimally chosen to achieve good transience as well as respectable electrochemical performance for integrated applications.Durability measurements were performed on devices prepared with the optimal coating thickness (Fig. 3d Demonstration of the transient behavior of the VO x based porous silicon system (Fig. 4) highlights the potential application of integrated transient energy storage using this technique.Transient behavior triggered by an alkaline solution (1 M NaOH) disables the VO x -porous silicon electrode in less than 5 seconds due to the electrode design discussed in Fig. 2.This is in part due to the reactive nature of porous silicon in the triggering solution, with the full (uncoated) porous silicon material dissolution occurring in well under 1 min (Fig. 4a).Coating the porous silicon surface with VO x enables surface passivation in the electrolyte environment, but rapid dissolution of the full material when exposed to the trigger solution.The VO x gradient that results in a thinner coating near the bottom of the porous material results in faster dissolution of this bottom section of the material, causing deactivation to occur within 5 seconds due to detachment of this material.Such ultrafast triggering can be highly beneficial for applications requiring immediate transience.A video of this ultrafast deactivation and eventual dissolution is included in the ESI.† Following the dissolution of the porous silicon/bulk silicon interface, the VO x coated layer was observed to fully dissolve in around 30 minutes.In a full device, the penetration of the trigger solution into the device happens due to the initial swelling of the gel electrolyte.The transient behavior of the PEO/LiClO 4 separator is given in Fig. 4c.The polymer separator dissolves within 30 minutes following the initial swelling in the trigger solution.Whereas a 1 M NaOH (pH = 13) triggering solution ensures rapid dissolution, certain applications of transient electronics Whereas this effort so far demonstrates the transience of VO x /porous silicon electrodes, we further performed experiments to demonstrate the direct integration of transient energy storage into microelectronic systems.To accomplish this, we used a commercially obtained integrated silicon microchip made with copper processing.To produce a fully integrated transient energy storage electrode, we etched porous silicon into the backside of the microchip, and coated the porous silicon with 19 nm VO x in a similar manner as described previously (Fig. 5A and B).This leads to a configuration where the energy stored in the backside the microchip can power the front-side components, and operation of the total system can be systematically deactivated based on the transient energy storage material.To demonstrate transient behavior of the integrated electrode, it was exposed to identical triggering environments (1 M NaOH) and after 30 minutes, full dissolution of VO x /porous silicon active material was observed.This demonstrates integration of transient energy storage with silicon electronics for the first time.In addition to fully transient systems, this also provides a route toward integrated transient electronics where the electronic components by themselves may not be transient, but their electronic function can be disabled by the transient behavior of the integrated power source.
Overall, whereas we highlight this route for VO x coated onto porous silicon, we emphasize that porous silicon is a universal template for transient energy storage.The ability to coat other metal oxides or nitrides that exhibit energy storage capability into the interior of transient porous silicon materials opens a full design space to engineer new transient systems for electronics, biomedical applications, or defense applications.This builds on the principle that any system that is designed to dissolve or disappear when triggered still requires a power source to facilitate operation prior to triggeringwith the most elegant design being an integrated and fully transient power system.Here we demonstrate the first such integrated power source into silicon materials, with promise to diverse transient applications.
Conclusions
In summary, we demonstrate the first design for a transient energy storage electrode material that is integrated seamlessly into silicon material that can dually function for either transient or non-transient silicon-based electronics.By combining the native transient properties of porous silicon with the controlled gradient coatings of active materials using atomic layer deposition, our work highlights the capability to achieve stable energy storage (>20 F g −1 ) until a trigger is applied, which deactivates the energy storage function in a matter of seconds, with full dissolution occurring within 30 minutes.We demonstrate this specifically for vanadium oxide (VO x ) coated onto porous silicon, where the VO x plays a role to protect the reactive porous silicon and provide active redox storage until a trigger is applied, which dissolves both the porous silicon and the VO x materials.We further explicitly demonstrate the integration of transient energy storage using this approach into the backside of a silicon integrated circuit, emphasizing the simplicity in transitioning this approach to integrated applications.As silicon is a benchmark material for evolving efforts in transient electronics, this technique is generalizable to a wide range of different coatings that can be coupled with porous silicon using ALD for stable performance and triggered transience.In the circumstance that not all components in a circuit may be transient, the utility of a transient power source is that all on-board components being powered will ultimately be disabled in concert with the integrated power systema feat that we show can be achieved in a matter of seconds using this design.
Fig. 1
Fig. 1 Schematic representation demonstrating the integration of transient energy storage into silicon-based (transient or non-transient) electronics.Triggering the system with 1 M NaOH aqueous solution leads to near-immediate disablement of the device, and full dissolution within 30 minutes.
Fig. 2 (
Fig. 2 (A-C) SEM images of uncoated and coated porous silicon: (A) uncoated porous silicon, (B) 100 ALD cycles (6 nm) of VO x ALD coated on porous silicon, and (C) 400 ALD cycles (19 nm) of VO x ALD coated on porous silicon.(D-E) Cross sectional EDX maps of VO x coated porous silicon showing: (D) silicon and (E) vanadium down the cross-section of the porous silicon.(F) Scheme showing how the thinner coating at the porous silicon base leads to ultrafast disablement.
Fig. 3 (
Fig. 3 (A) Cyclic voltammograms (100 mV s −1 ) of VO x coated porous silicon at different coating thicknesses compared to uncoated porous silicon reference, (B) galvanostatic charge-discharge curves showing the 5 th cycle for each coating thickness, and (C) galvanostatic rate study showing discharge performance as a function of discharge current, and (D) voltammetric cycling performance for transient energy storage device with 19 nm VO x coating on porous silicon.
) for over 100 CV cycles, showing stable cycling behavior.Based on the electrochemical tests, 19 nm thick VO x coated porous silicon was used to demonstrate the transient behavior of the device.Electrochemical Impedance Spectroscopy (EIS) measurements were further performed in a symmetric two electrode setup with 19 nm VO x coatings and the results indicate low equivalent series resistance and charge transfer resistance indicating stable charge storage capability (see ESI †).
Fig. 4
Fig. 4 Transient dissolution of device components including (A) uncoated porous silicon (control), (B) 19 nm VO x gradient coating on porous silicon with bulk silicon substrate (left behind after 30 min), and (C) PEO/LiClO 4 separator.Dissolution tests with uniformly coated VO x (ESI †) support the mechanism of the gradient coating in rapid deactivation of the energy storage material that is illustrated in these dissolution studies.
Fig. 5
Fig. 5 Transient behavior of an integrated circuit microchip where the backside is directly etched and coated with VO x to provide on-board integrated storage.(A) Front side image of the integrated circuit microchip, (B) SEM image showing the interface between the silicon material in the microchip and the on-board transient energy storage material, and (C) image showing the backside of the microchip before and after triggering, where the transient energy storage material is visually fully dissolved.This opens a new class of transient electronics where onboard transient power sources that power integrated electronics can enable transient operation even when the electronic components themselves do not exhibit transience.
1 M and 0.01 M NaOH trigger solutions, EIS analysis for VO x coated devices, and EDS compositional analysis of VO x .(ii) Video showing transient behavior of integrated VO x /porous silicon scaffolds.See DOI: 10.1039/c5nr09095d ‡ Equal contribution first author.
thank Adam P. Cohn, Landon Oakes, Andrew Westover, and Mengya Li for useful discussions regarding this work, and Dr Rizia Bardhan for the use of lab facilities and insights regarding material fabrication and characterization.This work was supported by National Science Foundation grant CMMI 1334269.A.D. and K.S. are supported in part by the National Science Foundation Graduate Research Fellowship under Grant No. 1445197. | 4,704 | 2016-03-31T00:00:00.000 | [
"Engineering",
"Physics"
] |
Fault detection and diagnosis in refrigeration systems using machine learning algorithms
The functionality of industrial refrigeration systems is important for environment-friendly companies and organizations, since faulty systems can impact human health by lowering food quality, cause pollution
Introduction
Machine Learning (ML) is a common term for many processing methods used for data-driven tasks.The main intention of ML is to enable computers to learn, predict, or decide on an unseen data without human assistance (Saravanan and Sujatha, 2018).In the 2010s, rapid development of processors, IoT, and an increasing amount of generated data paved the way for large improvements in ML capabilities.Thus, the popularity of ML increased exponentially in many industries.Machine learning is used in various contexts, such as computer vision, text classification, fault detection, language processing, image recognition, and so forth.
The idea of using ML for fault detection and diagnosis dates back to the 1980s where the existing ML methods were not as efficient as specialized experts.However, the technologies have been improved, and as of today, the availability of powerful programming tools and algorithms for self-learning allow computers to make strategic decisions and even diagnose new events (Gauglitz, 2019).
In particular, ML-based methods have been studied for fault detection and diagnosis (FDD) in different fields with promising results.For instance, ML is used for fault detection in brushless synchronous generators in Rahnama et al. (2019), in water distribution network (Quiñones-Grueiro et al., 2021), in age intelligence systems (Liu et al., 2021), and in high-temperature super conducting DC power cables (Choi et al., 2021).In Hajji et al. (2021), several supervised ML algorithms are compared for FDD in photovoltaic systems.In Hajji et al. (2021), data from non-faulty condition and five different faulty conditions are used both for training and test; and the results confirm that supervised learning algorithms can be used for fault detection and ease the FDD procedure.Moreover, machine learning models are compared for sensor fault detection in Sana Ullah et al. (2021), in which five types of sensor faults are emulated, namely, drift, bias, precision degradation, spike, and stuck faults.
For fault detection in office building systems, various data mining methods, in particular, Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), Kernelized Discriminant Analysis (KDA), semi-supervised LDA, and semi-supervised KDA have been compared in Shioya et al. (2015).In Choudhary et al. (2021), different component faults in a rotating machine are classified using a Convolutional Neural Network (CNN) algorithm.According to Lo et al. (2019), in many industrial applications, good system models are difficult or even impossible to obtain due to the system's complexity or large numbers of configurations involved in the production process.The refrigeration industry is not an exception, as the system configuration varies based on different owners' demands.Hence, model based FDD is often sensitive to model parameters in such a way that small changes in the system may lead to a poor fault detection response.In such cases, ML can be a viable approach to handling unseen situations when well trained.
In Soltani et al. (2020), a CNN model is used for evaporator fan fault detection in supermarket refrigeration systems.The same system configuration and information are used in Soltani et al. (2021) to classify the same fault and investigate the robustness of the fault detection model.However, instead of CNN, shallow learning Support Vector Machines SVM and PCA-SVM classifiers are used.In Han et al. (2010), SVM and PCA-SVM are studied for the detection of 8 type of faults in a simulated vapour-compression refrigeration system in which PCA-SVM achieved a better result compared to SVM and back-propagation neural network.
In the refrigeration industry, good performance of a fault detection algorithm can be defined as high classification accuracy, low computation time, and low false positive rate.High classification accuracy ensures an accurate fault description for the technicians for quick troubleshooting, while low computation time is important because it lowers the detection time and the hardware cost.A low false positive rate increases the reliability of the fault detection model and results in lower expenses regarding service call rate.Therefore, it is essential to evaluate the FDD algorithms based on these factors.
Because of increasing usage of digitalization in refrigeration systems (RS), many companies aim for improving existing FDD performance by utilising various data.As mentioned above, FDD algorithms perform satisfactorily in many other applications; thus, data driven FDD algorithms are selected and evaluated in this work.That is, we evaluate and optimize various FDD algorithms for the purpose of selecting the best classifier for use in RS industry applications.
The main contributions of this study is summarized below: • The best approach from an industrial perspective is proposed to detect a faulty system and localize the fault.
In this study, all sensor faults and some component faults are simulated using a high fidelity RS model.The model is already in use at Bitzer Electronics to develop and verify control algorithms.Notice that we will restrict our attention to steady state operating conditions, which are commonly encountered in industrial application such as reefer containers, cold storage houses and so on.It is acknowledged that transient operation is important in many applications as well, e.g., in supermarket refrigeration systems.However, transient behavior presents its own set of unique challenges, and is considered out of scope of this work.
The faults include positive and negative offsets in sensors as well as specific component faults; the faults are detailed in section 2. Three classifiers, namely CNN, SVM, and LDA, are compared to diagnose every selected fault.For pre-processing of the input data LDA and PCA are compared.
The results indicate that the SVM classifier is the superior method, being able to diagnose all classes with 100% classification accuracy except non-faulty and malfunctioning of expansion valve conditions which are diagnosed with 98% and 96% classification accuracy, respectively.The LDA and LDA-SVM classifiers are capable of detecting the faulty condition with 100% classification accuracy.However, these models have poor performance regarding robustness as a significant drop in classification accuracy is observed.Finally, CNN and PCA-SVM show a general lack in performance.
The remainder of this paper are structured as follows.First, refrigeration systems background and specification, as well as data acquisition and its specification, are introduced in section 2.Then, in section 3, the mathematical approaches of the classifiers mentioned above are explained.Afterwards, the specification of each model and the result of the classification is presented in section 4. Finally, the work is concluded in section 5.
Background
In general, RS are used to cool down the goods inside of an insulated room, which is called a cold room, by transferring the heat to the environment.Fig. 1 illustrates a RS in which the refrigerant runs through the pipes.In each refrigeration cycle, heat is absorbed and dissipated.The compressor receives low pressure, low temperature refrigerant gas and releases high pressure, high temperature gas to the inlet of the condenser.The condenser is responsible for dissipating the refrigerant heat to the ambient environment, and finally gives out liquid refrigerant at high pressure while the temperature decreases.Afterwards, an expansion valve decreases the pressure of the refrigerant.Low pressure, low temperature refrigerant enters the evaporator pipes in order to absorb the heat from the cold room environment.Thus, the refrigerant changes phase from liquid to gas before reaching the compressor.
Defective components or sensors in RS lead to high power consumption, air pollution, wear and tear of the components, and/or food waste.RS have the best efficiency when everything is nominal.Thus, when faults occur, the system might deviate from the peak efficiency point.By some of the fault, the system runs outside of its permitted envelope, some of the faults lead to wear and tear of the components due to high temperature, too little lubrication, and too high pressure on the components.Late fault detection may cause the temperature of the refrigerated goods to exceed the permitted limits.Therefore, early fault detection in RS ensures maintaining the required quality of refrigerated goods such as food products or medicine, and preventing excessive maintenance and spoilage cost.
The high fidelity model used by Bitzer Electronics is presented in Fig. 2. In this model, a two-stage semi-hermetic reciprocating compressor is simulated with operating speed in the range 25-87 Hz.
Here, compressor cooling capacity (V cpr ) is defined as compressor operating speed in percentage.Therefore, compressor speed under 25 Hz and full speed operation of 87 Hz are defined as 0% and 100% compressor cooling capacity, respectively.The refrigerant type is R134a, and an electrical expansion valve is simulated.Maximum cooling capacity of the cold room is 17 kW at 10 ∘ C ambient temperature (T amb ) and 5 ∘ C cold room temperature (T room ).The controller is designed so as it controls over opening degree of expansion valve (vexp) using superheat temperature (T sh ) measurements as an input.T sh is the difference between the refrigerant evaporation temperature (T 0 ) and suction gas temperature (T suc ).In addition, V cpr , evaporator fan speed (V evap ), condenser fan speed (V cond ), are controlled using the mentioned controller inputs in Fig. 2. In this paper the supply temperature (T sup ) is the same as cold room temperature (T room ) and used as set point in the simulation model.Thus, set point is the temperature of the air after transferring heat to the refrigerant.Z. Soltani et al.In Fig. 2, the main components of the model are presented with grey blocks.The red blocks indicate some of the fault inputs which are added to the corresponding parameters.Twenty types of faults are simulated, including positive and negative offsets in sensors as well as a number of component faults; the faults are described in Table 1.When collecting a data set, the model is first run with no effect of the red blocks, thus producing non-faulty data.After logging sufficient non-faulty samples, one fault is applied to the model and data collection continues.Simulation of some of the faults such as pressure sensors offset and T dis sensor offset, are not visible in Fig. 2, since they are simulated inside of the relevant block diagrams.
Data acquisition
Machine learning models learn based on input information.Thus, the quality of training data is an essential factor.The training data should contain sufficient information to have a generic algorithm to make a correct decision when receiving a new observation.Using simulated data for training phase can, in fact, improve the verification result since it firstly allows data collection in different operating conditions, and secondly, data of specific faults can be correctly labeled, and finally, we ensure that the training data is not taken from an already faulty system with unwanted or unknown fault.
To prevent overfitting the model, the input data needs to be taken from various operating conditions in an acceptable range and under the same operation conditions for each fault.That is, the model has to be able to deal with operational variations.Generally, in RS, operations vary based on several factors, such as required temperature set point, compressor cooling capacity or heat load, compressor type, ambient temperature, etc.In this work, various data sets from different operating conditions are taken as training data.As shown in Fig. 3, the set point is changed in the range 0 to 15 ∘ C, and the heat load in the cooling room varies in the range 3 to 20 kW to obtain compressor speeds variation.Another data set is taken in which, besides set point and heat load, the T amb is varied; therefore, the data is referred to as having large operation condition range.In this data set, T amb is varied in the range 10 to 30 ∘ C to investigate how the classification accuracy differs if training data includes more variations.Then, the verification data set is collected using different operating conditions from the training conditions to investigate how the model performs classification in an unseen operation condition, see the blue block in Fig. 3.
Each fault in the system is considered a class.As introduced in Table 1, twenty faults are taken into account in this work which are all observed in the real systems.Therefore, twenty-one classes are studied, including non-faulty condition.In particular, the expansion valve faults are modeled as wrong valve positions compared to the command signal.In fault 8, the actual valve position is 120 % of the command signal, while in fault 18, the valve opens 80% of the command signal.Changes in condensing temperature (T C ) is compensated by condenser fan work, because there is a feedback control on condenser fan to keep constant pressure relative to T amb and the controller controls V cpr based on T sup .Thus, it is hard to observe any visual changes in the data characteristics during steady state response.However, in some other cases, the fault affects the controller response immediately, and the changes can be observed in the data easily.For example, fault 6, which is shown in Fig. 4, clearly gives rise to variations in T dis , and V cpr .The compressor works based on the controller command.In the case of fault 6, ρ and/or P suc which are fed into the controller are measurements of the faulty sensor.Therefore, the compressor behavior is based on the faulty sensor measurement.However, as the real P suc less than required, it causes drop in mass flow rate.In Fig. 2, the P suc offset is applied only to the sensor reading.The controller controls both expansion valve opening degree and compressor speed to reach a desired pressure, and when the reading is positively offset the controller must lower the actual suction pressure to reach the desired reading.
Data specification and dimensionality reduction
The idea behind dimensionality reduction techniques is to remove dependent and redundant features from original data by projecting data to a lower-dimensional space, which holds only essential information.These approaches deal with noisy data and reduce the computation load for classification purposes (Soltani et al., 2021).In this work, the input data has 14 feature vectors or dimensions, including sensor signals, and some of the variables from RS controller, including superheat temperature, saturated evaporation temperature, compressor cooling capacity/speed, condenser fan speed, and vapour density.Statistical approaches such as PCA and LDA are used to reduce the input data dimensions before passing them through the classifiers.In this paper, all transient part of the data is removed, both for training and validation data.The 14-dimensional data is reduced to 2-dimensional data using PCA as the input to the SVM.LDA is also used for dimensionality reduction and transfers the data into a 6-dimensional data set before sending the data into the SVM classifier.Moreover, CNN and SVM are also applied to the 14-dimensional data set.For SVM and LDA classification, each class of data contains 1200 samples, and for the CNN classifier, 18000 samples with a sample rate of 1 Hz.Remark that LDA and SVM are shallow learning neural networks which, as an advantage, do not require as many samples as CNN.Too many samples result in too high computation load and low classification accuracy.As described in 2.1, the training data of each class contains various RS operating conditions.These varieties prevent overfitting and increase the model's capability for the classification of unseen operating conditions.
Methods
SVM, LDA, and CNN are all supervised learning methods which are sub-fields of the linear classifiers (Saravanan and Sujatha, 2018).Supervised ML classifiers categorize a new data set using a pre-trained model.Thus, the model is first trained using input data and defined labels.
CNN is a deep learning classifier commonly used for image processing purposes.A CNN is comprised of two phases of feature extraction and classification.The input data consists of feature vectors χ φ ∈ R n×1 , φ = 1, ⋯, c which are gathered in data matrices X κ ∈ R n×c , one for each class κ, κ = 1, …, ν.The numbers n = n κ and c quantify the number of samples in each class and the number of features, respectively.For convenience, it is assumed that all the data matrices have the same dimensions, although this is not a strict requirement.
In the feature extraction phase, the CNN makes use of so-called neurons which take data matrices X κ as input and return (neuron) where S is the number of neurons (see Fig. 5).Each neuron has a weight matrix W k ∈ R n×c and a bias matrix b k ∈ R n×c associated with it.For each κ, X κ is partitioned into n Then the neuron output y k κ is a matrix whose entries are defined as: where ⊙ denote element-wise multiplication of matrices, 1 denotes a vector of ones, and f : R→R is an activation function.
It is noted that the size n × c and number S of W k 's are hyperparameters, which can be tuned during the design of the CNN model to optimally filter different information of the input.
As illustrated in Fig. 6, the output of the feature extraction phase contains the essential information of the input data.This output is then before being used as input to the classification phase, which is a fully connected Multi-layer Perceptron, see Geidarov (2017), and Bishop (2006) with N MLP fully connected layers.The output vector of each MLP layer Y l ∈ R n l ×1 is computed recursively as where W l ∈ R n l ×n l− 1 is a layer weight matrix, b l ∈ R n l ×1 is a bias vector, f : R n l →R n l is the l'th layer's neuron activation function, and n NMLP = ν.
The output ŷ ∈ R ν of the CNN is generated by the so-called Softmax activation function where the κth coordinate of ŷ is given by: with Y NMLP κ being the κth coordinate of Y NMLP .Here, it is noted that since the CNN output is normalized ( ∑ ν κ=1 ŷκ = 1), ŷκ may be considered as the probability of a new input X belonging to class κ.
During the training process, the estimation of the classes are compared with the true labels y κ using a loss function.The loss function is also a hyper parameter that needs to be determined for the model; a common loss function is cross entropy: The training process aims at adjusting the weights in such a way that better prediction of the correct class is achieved.In other words, the minimum loss is obtained.
Minimization of the loss function can be done using different optimization techniques; the most common being Backpropagation (Bishop, 2006), which is a variant of gradient descent.Once the weights have been adjusted to yield the optimal output for a validation data set, this model can be used to classify unlabeled, new data.
LDA classifier
Linear discriminant analysis (LDA) can be used both for dimensionality reduction and classification purposes.In LDA, as it is depicted in Fig. 7, linear separation of classes is done after projecting data onto another space.LDA seeks a large separation between transformed classes compared to the original one after the dimension of the transformed data is reduced.A transformation matrix is obtained by use of the between-classes variance and the variance within each class (Bishop, 2006).
The variance between classes S B ∈ R c×c is calculated as follows: where μ κ ∈ R 1×c is the mean value of class κ, and μ ∈ R 1×c is mean of all μ κ .Afterwards, the within-class variance S s ∈ R c×c is calculated by where (X κ ) j is the jth row (or sample) in X κ .S s and S B are used to find the transformation matrix Ω ∈ R c×c defined Fig. 5.A feature extraction layer of CNN, a sub-matrix (x κ ) ij is convolved with each weight matrix W k , resulting in a number of matrices as the output of the layer.
Z. Soltani et al. as Afterwards, this transformation matrix is used to generate data in another space in which the classes are linearly separable.In order to reduce the dimensions of the data in the new space, eigenvectors and eigenvalues of Ω are obtained.The eigenvectors with higher eigenvalues carry more information of the data distribution (Tharwat et al., 2017).
Order the eigenvalues of Ω in decreasing order the first α ≤ c corresponding eigenvectors and organize them in a new The lower-dimensional samples r j ∈ R 1×α , j = 1, ⋯, n in class κ are then the rows of the matrix product X κ V.
SVM classifier
Support vector machine (SVM) is a supervised machine learning method and linear classifier which classifies data into two or more classes.In the sequel we focus on the case of two classes.
Consider the two classes X κ , κ = 1, 2 containing the samples as rows and set y j = − 1 or y j = 1 if x j ∈ R 1×c is a row in X 1 or a row in X 2 , respectively.Assume that the two classes are linearly separable, that is, the samples of each class can be separated by a (linear) hyper plane.Then there exists a hyper plane with weight w ∈ R 1×c and bias b ∈ R such that 1/ ‖ w ‖ is the distance from H to the nearest sample in class 1 and class 2. These nearest samples are usually called support vectors (see Fig. 8).Moreover, w and b may be found as the solution to the optimisation problem The optimal (or hard) margin (that is, 1/ ‖ w ‖ with w the solution to (8)) may not always lead to the best result when feeding unseen data to the model.The optimal margin might result in overfitting or margin violations.In particular, outliers can fall into the wrong class and be misclassified (Murty and Raghava, 2016).In practice, the classifier is allowed to do small misclassifications during the training, which is called soft margin (shown in Fig. 8).To do so, a slack variable ζ is added to the optimization problems: where C is a hyper parameter that determines the size of the allowed misclassification.The size of the parameter C is tuned by software such that the classification accuracy of unseen data is high.In many classification problems a linear classification is not possible.The kernel trick is a method for dealing with this case.It yields a transformation of the input space, that is the space which the samples belong to, into another higher dimensional space, in which the samples are linearly separable (Murty and Raghava, 2016).This new space is typically called the feature space.The kernel trick relies on the use of kernel function.In this work we consider a special class of kernel function, called the Gaussian Radial Basis Functions (GRBF) given by The hyper parameter γ > 0 determines the influence of each sample on selecting the hyper plane during training.It should be noted that choosing γ too big results in overfitting and choosing γ too small leads to under-fitting of the model (Bishop, 2006).
Multi-class classification
In the case of more than two classes, the problem can be solved using two approaches.The first one is to consider each class against the rest of the classes and is called One Versus the Rest (OVR).For the model training using OVR, one binary classifier is used for each class against all the other classes as the second category.Therefore, for a data set including ν classes, ν binary classifiers are created.For unseen data classification, each classifier is tested to determine to which class the new sample belongs.However, in many cases, the result of OVR is inconsistent as the sample can belong to either more than one class or none of them, illustrated as the gray stars in Fig. 9. Since, OVR picks one class against all other classes together, the number of samples in the corresponding class is typically a lot fewer than the rest of the classes.Therefore, the big difference between the number of samples often impacts the decision boundary (Bishop, 2006).
The second multi-class classification approach takes each class versus another and is called the One Versus One (OVO) approach.Thus, for each pair of classes, one classifier is trained.Finally, ν(ν− 1) 2 classifiers determine each class boundaries as shown in Fig. 9.
The OVO approach is not as computationally effective as OVR due to using more classifiers.Moreover, the OVO approach has a tendency to overfit (Platt et al., 2000).However, in the end, a certain amount of trial and error is unavoidable in selecting a multi-class SVM classifier, as it depends on the input data and feature space.
Experiments
In this work, PCA and LDA are built in Python for dimensionality reduction purposes.It is advantageous to use lower-dimensional input data if it reduces the computation time of the classification and/or increases accuracy by removing redundant information in the data set such as noise, etc.This work tests and compares PCA-SVM and LDA-SVM models to the SVM classifier with full-dimensional data.The algorithms are built using the scikit-learn library in python which provides many efficient algorithms in ML, dimensionality reduction and classification.In Aurélien (2019), the ways of implementing aforementioned ML techniques in the scikit-learn library are described.In this work, the label -1 is assigned to non-faulty data, while other labels are specified in Table 1.Moreover, the classifiers are fed with two sets of training data which are described in section 2, in order to evaluate the qualification of the training data.
Full-dimensional classifiers
The input data used for the SVM model includes n = 1200 samples of 14 feature vectors for each class.In addition, the input data contains samples from different system configurations.Each sample is labelled with one of the labels in Table 1.The SVM classifier performs OVO classification using C = 1000, and γ = 0.01 (see section 3.2); the hyperparameters were found by trial-and-error.The result of classification is represented in Fig. 10.True labels are the labels assigned to each class during the training phase, while Predicted labels refers to the prediction of the classifier during the training process.Thus, the diagonal values represent correct classifications.In this test, 250 samples with 1 Hz sample rate are selected for prediction.
The SVM result shows high classification accuracy for most of the classes, and there are no false positives.At 93% accuracy, the broken compressor with label 17 in Table 1 is the only fault that is misclassified.
As mentioned in section 3, LDA can be used both for dimensionality reduction and classification purposes.Here, LDA is used to classify all 21 classes of data while reducing the dimensions of the input data from 14 to 5. As shown in Fig. 10, the response of the LDA classifier is very similar to SVM classification, exhibiting 100% classification accuracy for most of the classes and no false positives.The only misclassification of about 3% is the broken compressor, which is mistaken for either P suc sensor negative offset or broken evaporator fan.
CNN is a deep learning model and needs more samples compared to LDA or SVM.In the CNN model experiment, the data set for each class contains 12000 samples of all 14 feature vectors.The classification response of the training is represented in Fig. 11.The CNN classifier obtained a total accuracy of 94% and could classify most of the faults with 100% accuracy.The noticeable drawback is the false positive rate of 58%.The non-faulty condition was misclassified as classes with labels 8 and 18, which are both expansion valve malfunctions.
Reduced-dimension classifiers
In this part, PCA and LDA are used to reduce the input dimensionality.These approaches are investigated to see whether PCA or LDA can improve classification results.In addition, it is vital to study whether low dimensional inputs can reduce training computation time in the case of PCA and LDA.
After feeding data into PCA and transforming to the new space, it appears that the first two dimensions of the transformed data contain more than 80% of the variations in the new space, as seen in Fig. 12.Therefore, the first two principal components are used as the inputs to the SVM instead of 14-dimensional data.Fig. 13 shows the response of the PCA-SVM classifier with C = 1000, γ = 0.01, and OVO decision function.
The result of PCA-SVM shows misclassification of most of the classes.PCA causes classes to overlap as the most uncorrelated information is squeezed into the first two principal components.The result of PCA-SVM classification is not satisfactory for the multi-class classification even though it represents satisfactory results for binary classification in Soltani et al. (2021).
LDA is already used for classification, as shown in Fig. 13.However, it can also be used only for dimensionality reduction; then, the transformed lower dimensional data is used in a classification algorithm such as SVM.The first five eigenvectors corresponding to the first highest eigenvalues indicate that LDA reduces the input dimensions from eleven to five.A LDA-SVM classifier is built using C = 1000,γ = 0.01, and OVO decision function for the SVM part.The LDA-SVM classifier performs satisfactorily for many of the classes shown in Fig. 13.However, the As seen in Table 2, SVM and LDA achieved the best results, with high accuracy and no false positives.However, the prediction time is relatively low for the LDA classifier compared to SVM, PCA-SVM, and LDA-SVM.On the other hand, the CNN classifier has the lowest prediction time, but the false positive rate is unacceptable.Therefore, LDA is found as the best model for multi-fault classification.Afterwards, more investigation is done on SVM, LDA and LDA-SVM, which perform better during the training phase.
The classifiers verification
In this part, the validation data is specified with a set point, heat load and ambient temperature which is different from what are used for the training set.In this data set, T set is 4 ∘ C, heat load is 13 kW and ambient temperature is 17 ∘ C. Fig. 14 shows the response of SVM, LDA, and LDA-SVM classifier trained with the first training data set, with variations in set point and heat load.The overview of the results in Table 3 indicates that even though the classifiers did a good job during the training and test, they can not deal with the new data which are taken from a system in a new operating condition.Therefore, the classification results are not satisfactory, especially when looking at the false positive rate.
Effect of data variation
To deal with the challenge of misclassification of unseen data, a new training data set is fed into the same model, which contains more excitation by varying the RS operation around ambient temperature from 10 to 30 ∘ C, set point from 0 to 12 ∘ C, and heat load from 3 to 18 kW.In addition, to obtain better results, all 14 feature vectors are tested to see if one can affect misclassification.Thus, three features of input data, namely, P suc , compressor power consumption and density that were already used, are removed from the training and validation data set as they adversely affect the classification accuracy.The results are depicted in Fig. 15.
The overview of the results in Table 4 shows that the SVM classifier obtains more accurate results after training with more excited training data and removing the three mentioned feature vectors.However, for the LDA-SVM and LDA classifiers, the most accurate results are obtained when just the power consumption of the compressor and density are removed.Using this adjustment, the false positive percentage is improved a lot and SVM stands alone regarding the diagnosis of all faults simultaneously with high accuracy.It is seen that SVM has the highest classification accuracy of 95% with a 4% false positive rate.The only class which SVM does not diagnose is the blocked expansion valve, which is misclassified with the loose expansion valve.Therefore, even though this fault is misclassified, we can still trust that the malfunctioning valve needs to be checked by the technicians.
Conclusion
From an industrial point of view, it is very beneficial to have one classifier that can diagnose twenty one classes.Moreover, the classifiers considered in this work can be trained off-line.Off-line training may have two advantages.First, It is possible to train the classifier with simulation data and use the trained classifier for classification of real data to ensure that we do not train the classifier with the real data which are wrongly labeled.Second, the trained classifier would be computationally lighter compared if the training process were to be executed on embedded software as well.This is an advantage when the capacity of the processor of typical refrigeration systems is considered.The SVM model obtained the best classification accuracy at the algorithms tested.If a lower false positive percentage is considered, LDA can be used with a 0% false positive rate only for distinguishing the non-faulty class from the other faulty classes.Therefore, the system could benefit from having two classifiers, to make the diagnosis result more reliable.Before implementation of the classifier on real refrigeration systems, verification of the trained classifier by using real data from the field will be done in the future work.
Declaration of Competing Interest
The authors declare the following financial interests/personal relationships which may be considered as potential competing interests: Zahra Soltani reports financial support was provided by Innovation
Fig. 4 represents four examples of data sets taken from the same model and under the same conditions.These examples represent a non-faulty condition, a suction pressure sensor fault with 0.2 bar positive offset indicating fault 6, a loose expansion valve fault where it reacts 20%
Fig. 2 .
Fig. 2. The grey blocks indicate the main components of the RS.The red blocks are the faults or offsets that can be applied to each variable.
more than the commanded value from the controller indicating fault 8, and a blocked expansion valve that reacts 20% less than the commanded value.During data acquisition, the model is run in non-faulty condition until sample 6000.Then, each fault is introduced from sample 6001 to 12000 as seen in Fig.4.It is observed that in some cases, such as fault 8, the data looks very similar to some of the other faulty or non-faulty data.
Fig. 3 .
Fig. 3.An overview of data collection and ML setup.The red section indicates the training phase where data is collected and used for training of the ML model.The blue section shows verification data specification and classification.
Fig. 4 .
Fig. 4. Four examples of data set from different classes which have the same system configuration.The set point to T sup = T room is set to 7 ∘ C, heat load in the cooling room is 13 kW at the ambient temperature of 25 ∘ C.
Fig. 9 .
Fig. 9. multi-class data classification using OVR at the left and OVO at the right.
Fig. 12 .
Fig. 12.The first two principle components contain the most variation among all 14 principle components.
Fig. 14 .
Fig. 14.Three classification responses of validation data with different system operating condition.
Fund
Denmark and Bitzer electronics A/S. 15.Higher classification accuracy after training with new training data for all three classifiers comparing to Fig. 14.
• A deep learning and several shallow learning classifiers are proposed for detecting and diagnosing twenty types of faults in RS. • Importance of training data qualification regarding data variation and features selection is illustrated.• All of the proposed classifiers are compared regarding classification accuracy, computation time and false positive rate.
Table 1
fault types and descriptions.
Table 2
Comparison of different classifiers.
Table 3
Robustness of classifiers against different operating conditions.
Table 4
Robustness of classifiers after using qualified training data. | 8,205.4 | 2022-08-01T00:00:00.000 | [
"Engineering",
"Environmental Science",
"Computer Science"
] |
Review of the Magnetohydrodynamic Waves and Their Stability in Solar Spicules and X-Ray Jets
One of the most enduring mysteries in solar physics is why the Sun’s outer atmosphere, or corona, is millions of kelvins hotter than its surface. Among suggested theories for coronal heating is one that considers the role of spicules – narrow jets of plasma shooting up from just above the Sun’s surface – in that process (Athay & Holzer, 1982; Athay, 2000). For decades, it was thought that spicules might be sending heat into the corona. However, following observational research in the 1980s, it was found that spicule plasma did not reach coronal temperatures, and so this line of study largely fell out of vogue. Kukhianidze et al. (Kukhianidze et al., 2006) were first to report the observation of kink waves in solar spicules – the wavelength was found to be ∼3500 km, and the period of waves has been estimated to be in the range of 35–70 s. The authors argue that these waves may carry photospheric energy into the corona and therefore can be of importance in coronal heating. Zaqarashvili et al. (Zaqarashvili et al., 2007) analyzed consecutive height series of Hα spectra in solar limb spicules at the heights of 3800–8700 km above the photosphere and detected Doppler-shift oscillations with periods of 20–25 and 75–110 s. According to authors, the oscillations can be caused by waves’ propagation in thin magnetic flux tubes anchored in the photosphere. Moreover, observed waves can be used as a tool for spicule seismology, and the magnetic filed induction in spicules at the height of ∼6000 km above the photosphere is estimated as 12–15 G. De Pontieu et al. (De Pontieu et al., 2007) identified a new class of spicules (see Fig. 1) that moved much faster and were shorter lived than the traditional spicules, which have speeds of between 20 and 40 kms−1 and lifespans of 3 to 7 minutes. These Type II spicules, observed in Ca II 854.2 nm and Hα lines (Sterling et al., 2010), are much more dynamic: they form rapidly (in ∼10 s), are very thin ( 200 km wide), have lifetimes of 10 to 150 s (at any one height), and shoot upwards at high speeds, often in excess of 100–150 kms−1, before disappearing. The rapid disappearance of these jets had suggested that the plasma they carried might get very hot, but direct observational evidence of this process was missing. Both types of spicules are observed to carry Alfven waves with significant amplitudes of order 20 kms−1. In a recent paper, De Pontieu et al. (De Pontieu et al., 2011) used new observations from the Atmospheric Imaging Assembly on NASA’s recently launched Solar Dynamics Observatory and its Focal Plane Package for the Solar Optical Telescope (SOT) on the Japanese Hinode satellite. Their observations reveal “a ubiquitous coronal mass supply in which chromospheric plasma in fountainlike jets or spicules (see Fig. 2) is accelerated upward into the corona, with much of the plasma heated to temperatures between ∼0.02 and 0.1 million kelvin (MK) and a small but sufficient fraction to temperatures above 1 MK. These observations provide constraints 6
Introduction
One of the most enduring mysteries in solar physics is why the Sun's outer atmosphere, or corona, is millions of kelvins hotter than its surface.Among suggested theories for coronal heating is one that considers the role of spicules -narrow jets of plasma shooting up from just above the Sun's surface -in that process (Athay & Holzer, 1982;Athay, 2000).For decades, it was thought that spicules might be sending heat into the corona.However, following observational research in the 1980s, it was found that spicule plasma did not reach coronal temperatures, and so this line of study largely fell out of vogue.Kukhianidze et al. (Kukhianidze et al., 2006) were first to report the observation of kink waves in solar spicules -the wavelength was found to be ∼ 3500 km, and the period of waves has been estimated to be in the range of 35-70 s.The authors argue that these waves may carry photospheric energy into the corona and therefore can be of importance in coronal heating.Zaqarashvili et al. (Zaqarashvili et al., 2007) analyzed consecutive height series of Hα spectra in solar limb spicules at the heights of 3800-8700 km above the photosphere and detected Doppler-shift oscillations with periods of 20-25 and 75-110 s.According to authors, the oscillations can be caused by waves' propagation in thin magnetic flux tubes anchored in the photosphere.Moreover, observed waves can be used as a tool for spicule seismology, and the magnetic filed induction in spicules at the height of ∼6000 km above the photosphere is estimated as 12-15 G. De Pontieu et al. (De Pontieu et al., 2007) identified a new class of spicules (see Fig. 1) that moved much faster and were shorter lived than the traditional spicules, which have speeds of between 20 and 40 km s −1 and lifespans of 3 to 7 minutes.These Type II spicules, observed in Ca II 854.2 nm and Hα lines (Sterling et al., 2010), are much more dynamic: they form rapidly (in ∼10 s), are very thin ( 200 km wide), have lifetimes of 10 to 150 s (at any one height), and shoot upwards at high speeds, often in excess of 100-150 km s −1 , before disappearing.The rapid disappearance of these jets had suggested that the plasma they carried might get very hot, but direct observational evidence of this process was missing.Both types of spicules are observed to carry Alfvén waves with significant amplitudes of order 20 km s −1 .In a recent paper, De Pontieu et al. (De Pontieu et al., 2011) used new observations from the Atmospheric Imaging Assembly on NASA's recently launched Solar Dynamics Observatory and its Focal Plane Package for the Solar Optical Telescope (SOT) on the Japanese Hinode satellite.Their observations reveal "a ubiquitous coronal mass supply in which chromospheric plasma in fountainlike jets or spicules (see Fig. 2) is accelerated upward into the corona, with much of the plasma heated to temperatures between ∼0.02 and 0.1 million kelvin (MK) and a small but sufficient fraction to temperatures above 1 MK.These observations provide constraints on the coronal heating mechanism(s) and highlight the importance of the interface region between photosphere and corona."Nevertheless, Moore et al. (Moore et al., 2011) from Hinode observations of solar X-ray jets, Type II spicules, and granule-size emerging bipolar magnetic fields in quiet regions and coronal holes, advocate a scenario for powering coronal heating and the solar wind.In this scenario, Type II spicules and Alfvén waves are generated by the granule-size emerging bipoles in the manner of the generation of X-ray jets by larger magnetic bipoles.From observations and this scenario, the authors estimate that Type II spicules and their co-generated Alfvén waves carry into the corona an area-average flux of mechanical energy of ∼7 × 10 5 erg s −1 cm −2 .This is enough to power the corona and solar wind in quiet regions and coronal holes, hence indicates that the granule-size emerging bipoles are the main engines that generate and sustain the entire heliosphere.The upward propagation of highand low-frequency Alfvén waves along spicules detected from SOT's observations on Hinode was also reported by He et al. (He et al, 1999) and Tavabi et al. (Tavabi et al., 2011).He et al. found in four cases that the spicules are modulated by high-frequency ( 0.02 Hz) transverse fluctuations.These fluctuations are suggested to be Alfvén waves that propagate upwards along the spicules with phase speed ranges from 50 to 150 km s −1 .Three of the modulated spicules show clear wave-like shapes with short wavelengths less than 8 Mm.We note that at the same time, Kudoh & Shibata (Kudoh & Shibata, 1999) presented a torsional Alfvén-wave model of spicules (actually the classical Type I spicules) and discussed the possibility for wave coronal heating -the energy flux transported into corona was estimated to be of about 3 × 10 5 erg s −1 cm −2 , i.e., roughly half of the flux carried by the Alfvén waves running on Type II spicules (Moore et al., 2011).Tavabi et al. (Tavabi et al., 2011), performed a statistical analysis of the SOT/Hinode observations of solar spicules and their wave-like behavior, and argued that there is a possible upward propagation of Alfvén waves inside a doublet spicule with a typical wave's period of 110 s.
No less effective in coronal heating are the so called X-ray jets.We recall, however, that whilst the classical spicules were first discovered in 1870's by the Jesuit astronomer Pietro Angelo Secchi (Secchi, 1877) and named as "spicules" by Roberts (Roberts, 1945), the X-ray jets are relatively a new discovered phenomenon.They, the jets, were extensively observed with the Soft X-ray Telescope on Yohkoh (Shibata et al., 1992;Shimojo et al., 1996), and their structure and dynamics have been better resolved by the X-Ray Telescope (XRT) on Hinode, in movies having 1 arc sec pixels and ∼1-minute cadence (Cirtain et al., 2007) -see Fig. to Cirtain et al. (Cirtain et al., 2007), "coronal magnetic fields are dynamic, and field lines may misalign, reassemble, and release energy by means of magnetic reconnection.Giant releases may generate solar flares and coronal mass ejections and, on a smaller scale, produce X-ray jets.Hinode observations of polar coronal holes reveal that X-ray jets have two distinct velocities: one near the Alfvén speed (∼800 kilometers per second) and another near the sound speed (200 kilometers per second).The X-ray jets are from 2 × 10 3 to 2 × 10 4 kilometers wide and 1 × 10 5 kilometers long and last from 100 to 2500 seconds.The large number of events, coupled with the high velocities of the apparent outflows, indicates that the jets may contribute to the high-speed solar wind."The more recent observations (Madjarska, 2011;Shimojo & Shibata, 2000) yield that the temperature of X-ray jets is from 1.3 to 12 MK (i.e., the jets are hotter than the ambient corona) and the electron/ion number density is of about (0.7-4) × 10 9 cm −3 with average of 1.7 × 10 9 cm −3 .The X-ray jets can have velocities above 10 3 km s −1 , reach heights of a solar radius or more, and have kinetic energies of the order of 10 29 erg.
Since both spicules and X-ray jets support Alfvén (or more generally magnetohydrodynamic) waves' propagation it is of great importance to determine their dispersion characteristics 137 Review of the Magnetohydrodynamic Waves and Their Stability in Solar Spicules and X-Ray Jets www.intechopen.comand more specifically their stability/instability status.If while propagating along the jets MHD waves become unstable and the expected instability is of the Kelvin-Helmholtz type, that instability can trigger the onset of wave turbulence leading to an effective plasma jet heating and the acceleration of the charged particles.We note that the Alfvénic turbulence is considered to be the most promising source of heating in the chromosphere and extended corona (van Ballegooijen et al., 2011).In this study, we investigate these travelling wave properties for a realistic, cylindrical geometry of the spicules and X-ray jets considering appropriate values for the basic plasma jet parameters (mass density, magnetic fields, sound, Alfvén, and jet speeds), as well as those of the surrounding medium.For detailed reviews of the oscillations and waves in magnetically structured solar spicules we refer the reader to (Zaqarashvili & Erdélyi, 2009) and (Zaqarashvili, 2011).Our research concerns the dispersion curves of kink and sausage modes for the MHD waves travelling primarily along the Type II spicules and X-ray jets for various values of the jet speed.In studying wave propagation characteristics, we assume that the axial wave number k z ( ẑ is the direction of the embedded constant magnetic fields in the two media) is real, while the angular wave frequency, ω,i s complex.The imaginary part of that complex frequency is the wave growth rate when a given mode becomes unstable.All of our analysis is based on a linearized set of equations for the adopted form of magnetohydrodynamics.We show that the stability/instability status of the travelling waves depends entirely on the magnitudes of the flow velocities and the values of two important control parameters, namely the so-called density contrast (the ratio of the mass density inside to that outside the flux tube) and the ratio of the background magnetic field of the environment to that of the spicules and X-ray jets.
Geometry and basic magnetohydrodynamic equations
The simplest model of spicules is a straight vertical cylinder (see Fig. 4) with radius a filled with ideal compressible plasma of density ρ i ∼ 3 × 10 −13 gcm −3 (Sterling, 2000) and immersed in a constant magnetic field B i directed along the z axis.Such a cylinder is usually termed magnetic flux tube or simply 'flux tube.'The most natural discontinuity, which occurs at the surface binding the cylinder, is the tangential one because it is the discontinuity that ensures an equilibrium total pressure balance.Moreover, it is worth noting that the jet is non-rotating and without twist -otherwise the centrifugal and the magnetic tension forces should be taken into account.Due to the specific form of the real flux tube which models a spicule, that part of the whole flux tube having a constant radius actually starts at the height of 2 Mm from the tube footpoint.The flow velocity, U i , like the ambient magnetic field, is directed along the z axis.The mass density of the environment, ρ e , is much, say 50-100 times, less than that of the spicule, while the magnetic field induction B e might be of the order or less than B i ∼ 10-15 G.Both the magnetic field, B e , and flow velocity, U e (if any), are also in the ẑ-direction.We note that while the parameters of classical Type I spicules are well-documented (Beckers, 1968;1972) those of Type II spicules are generally disputed; Centeno et al. (Centeno et al., 2010), for example, on using a novel inversion code for Stokes profiles caused by the joint action of atomic level polarization and the Hanle and Zeeman effects to interpret the observations, claim that magnetic fields as strong as ∼50 G were detected in a very localized area of the slit, which might represent a lower field strength of organized network spicules.
The flux tube modelling of the X-ray jets is actually the same as that for spicules, however, with different magnitudes of the mass densities, flow velocities, and background magnetic fields.When studying waves' propagation and their stability/instability status for a given solar structure (spicule or X-ray jet), the values of the basic parameters will be additionally specified.Now let us see what are the basic magnetohydrodynamic equations governing the motions in a flowing solar plasma.
Basic equations of ideal magnetohydrodynamics
Magnetohydrodynamics (MHD) studies the dynamics of electrically conducting fluids.Examples of such fluids include plasmas and liquid metals.The field of MHD was initiated in 1942 by the Swedish physicist Hannes Alfvén (1908Alfvén ( -1995)), who received the Nobel Prize in Physics (1970) for "fundamental work and discoveries in magnetohydrodynamics with fruitful applications in different parts of plasma physics."The fundamental concept behind MHD is that magnetic fields can induce currents in a moving conductive fluid, which in turn creates forces on the fluid and also changes the magnetic field itself.The set of equations, which describe MHD are a combination of the equations of motion of fluid dynamics (Navier-Stokes equations) and Maxwell's equations of electromagnetism.These partial differential equations have to be solved simultaneously, either analytically or numerically.
Magnetohydrodynamics is a macroscopic theory.Its equations can in principle be derived from the kinetic Boltzmann's equation assuming space and time scales to be larger than all inherent scale-lengths such as the Debye length or the gyro-radii of the charged particles (Chen, 1995).It is, however, more convenient to obtain the MHD equations in a phenomenological way as an electromagnetic extension of the hydrodynamic equations of ordinary fluids, where the main approximation is to neglect the displacement current ∝ ∂E/∂t in Ampère's law.
In the standard nonrelativistic form the MHD equations consist of the basic conservation laws of mass, momentum, and energy together with the induction equation for the magnetic field.Thus, the MHD equations of our magnetized quasineutral plasma with singly charged ions (and electrons) are where ρ is the mass density and v is the bulk fluid velocity.Equation ( 1) is the so called continuity equation in our basis set of equations.
The momentum equation is where j × B (with j being the current density and B magnetic field induction) is the Lorentz force term, −∇p is the pressure-gradient term, and ρg is the gravity force.
Faraday's law reads where E is the electric field.The ideal Ohm's law for a plasma, which yields a useful relation between electric and magnetic fields, is The low-frequency Ampère's law, which neglects the displacement current, is given by where µ 0 is the permeability of free space.
The magnetic divergency constraint is By determining the current density j from Ampère's Eq. ( 5), the expression of the Lorentz force can be presented in the form where the first term on the right hand side is the magnetic tension force and the second term is the magnetic pressure force.Thus, momentum Eq. ( 2) can be rewritten in a more convenient form, notably On the other hand, on using Ohm's law (4) the Faraday's law (or induction equation) takes the form 140
Topics in Magnetohydrodynamics
Finally, the equation of the thermal energy is given by d dt where γ = 5/3 is the ratio of specific heats for an adiabatic equation of state.This equation usually is written as an equation for the pressure, p, Equation ( 9) implies that the equation of state of the ideal fully ionized gas has the form where T is the temperature, m i the ion mass, k B is the Boltzmann constant, and the factor 2 arises because ions and electrons contribute equally.
In total the ideal MHD equations thus consist of two vector equations, ( 7) and ( 8), and two scalar equations, (1) and ( 9), respectively.Occasionally, when studying wave propagation in magnetized plasmas, one might also be necessary to use Eq. ( 6).We note that the basic variables of the ideal MHD are the mass density, ρ, the fluid bulk velocity, v, the pressure, p, and the magnetic induction, B; the electric field, E, has been excluded via Ohm's law.
In MHD there is a few dimensionless numbers, which are widely used in studying various phenomena in magnetized plasmas.Such an important dimensionless number in MHD theory is the plasma beta, β, defined as the ratio of gas pressure, p, to the magnetic pressure, When the magnetic field dominates in the fluid, β ≪ 1, the fluid is forced to move along with the field.In the opposite case, when the field is weak, β ≫ 1, the field is swirled along by the fluid.
We finish our short introduction to MHD recalling that in ideal MHD Lenz's law dictates that the fluid is in a sense tied to the magnetic field lines, or, equivalently, magnetic filed lines are frozen into the fluid.To explain, in ideal MHD a small rope-like volume of fluid surrounding a field line will continue to lie along a magnetic field line, even as it is twisted and distorted by fluid flows in the system.The connection between magnetic field lines and fluid in ideal MHD fixes the topology of the magnetic field in the fluid.
Wave dispersion relations
It is well-known that in infinite magnetized plasmas there exist three types of MHD waves (Chen, 1995), namely the Alfvén wave and the fast and slow magnetoacoustic waves.Alfvén wave (Alfvén, 1942;Gekelman et al., 2011), is a transverse wave propagating at speed , where B 0 and ρ 0 are the equilibrium (not perturbed) magnetic field and mass density, respectively.The propagation characteristics of magnetoacoustic waves depend upon their plasma beta environment.In particular, in high-beta plasmas (β ≫ 1) the fast magnetoacoustic wave behaves like a sound wave travelling at sound speed c s = (γp 0 /ρ 0 ) 1/2 , while in low-beta plasmas (β ≪ 1) it propagates roughly isotropically and across the magnetic field lines at Alfvén speed, v A .The slow magnetoacoustic wave in high-beta plasmas is guided along the magnetic field B 0 at Alfvén speed, v A -in the opposite case of low-beta plasmas it is a longitudinally propagating along B 0 wave at sound speed, c s .A question that immediately raises is how these waves will change when the magnetized plasma is spatially bounded (or magnetically structured) as in our case of spicules or X-ray jets.The answer to that question is not trivial -we actually have to derive the normal modes supported by the flux tube, which models the jets.
As we will study a linear wave propagation, the basic MHD variables can be presented in the form where ρ 0 , p 0 , and B 0 are the equilibrium values in either medium, U i and U e are the flow velocities inside and outside the flux tube, δρ, δp, δv, and δB being the small perturbations of the basic MHD variables.For convenience, we chose the frame of reference to be attached to the ambient medium.In that case is the relative flow velocity whose magnitude is a non-zero number inside the jet, and zero in the surrounding medium.For spicules, U e ≈ 0; which is why the relative flow velocity is indeed the jet velocity, which we later denote as simply U.
With the above assumptions, the basic set of MHD equations for the perturbations of the mass density, pressure, fluid velocity, and magnetic field become We note that the gravity force term in momentum Eq. ( 11) has been omitted because one assumes that the mass density of the jet does not change appreciably in the limits of the spicule's length of order 10-11 Mm.
From Eq. ( 10) we obtain that Inserting this expression into Eq.( 13) we get which means that the pressure's and density's perturbations are related via the expression 142
Topics in Magnetohydrodynamics
Assuming that each perturbation is presented as a plain wave g(r) exp [i (−ωt + mϕ + k z z)] with its amplitude g(r) being just a function of r, and that in cylindrical coordinates the nabla operator has the form Eq. ( 11) reads Accordingly Eq. ( 13) yields Induction Eq. ( 12) gives Finally Eq. ( 14) yields From Eq. ( 19) we obtain while Eq. ( 20) gives which means that After some rearranging this expression can be rewritten in the form Let us now differentiate Eq. ( 17) with respect to r: 143 Review of the Magnetohydrodynamic Waves and Their Stability in Solar Spicules and X-Ray Jets www.intechopen.com But according to Eqs. ( 26) and ( 24) Then Eq. ( 27) becomes In order to simplify notation we introduce a new variable, namely the perturbation of the total pressure, δp tot = δp + 1 µ 0 B 0 δB z .From Eqs. ( 17) to ( 19) one can get that Inserting these expressions into Eq.( 28), we obtain Bearing in mind that according to Eq. ( 15) from Eq. ( 23) we get On using Eq. ( 16) we express δρ in the above equation as δp/c 2 s , multiply it by Topics in Magnetohydrodynamics www.intechopen.comwhere, we remember, v A = B 0 /(µ 0 ρ 0 ) 1/2 is the Alfvén speed.After inserting in the above equation δp expressed in terms of δv z -see Eq. ( 25) -and performing some straightforward algebra we obtain that Next step is to insert above expression of δv z into Eq. ( 29 and combine the −k 2 z δp-term with the last member in the same equation to get a new form of Eq. ( 29), notably Here, κ 2 is given by the expression where is the so-called tube velocity (Edwin & Roberts, 1983).It is important to notice that both κ 2 (respectively κ) and the tube velocity, c T , have different values inside and outside the jet due to the different sound and Alfvén speeds, which characterize correspondingly the jet and its surrounding medium.
As can be seen, Eq. ( 30) is the equation for the modified Bessel functions I m and K m and, accordingly, its solutions in both media (the jet and its environment) are: δp tot (r)= A i I m (κ i r) for r a, A e K m (κ e r) for r a.
From Eq. ( 17) one can obtain an expression of δv r and inserting it in the expression of δB r deduced from Eq. ( 21) one gets a formula relating δv r with the first derivative (with respect to r)ofδp tot It is clear that we have two different expressions of δv r , which, bearing in mind the solutions to the ordinary second order differential Eq. ( 30), read respectively.Now it is time to apply some boundary conditions, which link the solutions of total pressure and fluid velocity perturbations at the interface r = a.The appropriate boundary conditions are: • δp tot has to be continuous across the interface, • the perturbed interface, , has also to be continuous (Chandrasekhar, 1961).
After applying the boundary conditions (we recall that for the ambient medium U = 0) finally we arrive at the required dispersion relation of the normal MHD modes propagating along the jet (Nakariakov, 2007;Terra-Homen et al., 2003) For the azimuthal mode number m = 0 the above equation describes the propagation of so called sausage waves, while with m = 1 it governs the propagation of the kink waves (Edwin & Roberts, 1983).As we have already seen, the wave frequency, ω, is Doppler-shifted inside the jet.The two quantities κ i and κ e , whose squared magnitudes are given by Eq. ( 31) are termed wave attenuation coefficients.They characterize how quickly the wave amplitude having its maximal value at the interface, r = a, decreases as we go away in both directions.Depending on the specific sound and Alfvén speeds in a given medium, as well as on the density contrast, η = ρ e /ρ e , and the ratio of the embedded magnetic fields, b = B e /B e , the attenuation coefficients can be real or imaginary quantities.In the case when both κ i and κ e are real, we have a pure surface wave.The case κ i imaginary and κ e real corresponds to pseudosurface waves (or body waves according to Edwin & Roberts terminology (Edwin & Roberts, 1983)).In that case the modified Bessel function inside the jet, I 0 , becomes the spatially periodic Bessel function J 0 .In the opposite situation the wave energy is carried away from the flux tubethen the wave is called leaky wave (Cally, 1986).The waves, which propagate in spicules and X-ray jets, are generally pseudosurface waves, that can however, at some flow speeds become pure surface modes.
For the kink waves one defines the kink speed (Edwin & Roberts, 1983) which is independent of sound speeds and characterizes the propagation of transverse perturbations.
Our study of the dispersion characteristics of kink and sausage waves, as well as their stability status will be performed in two steps.First, at given sound and Alfvén speeds inside the jet and its environment and a fixed flow speed U, we solve the transcendental dispersion Eq. ( 34) assuming that the wave angular frequency, ω, and the wave number, k z , are real quantities.
In the next step, when studying their stability/instability status, we assume that the wave frequency and correspondingly the wave phase velocity, v ph = ω/k z , become complex.Then, as the imaginary part of the complex frequency/phase velocity at a given wave number, k z , and a critical jet speed, U crt , has some non-zero positive value, one says that the wave becomes unstable -its amplitude begins to grow with time.In this case, the linear theory is no longer applicable and one ought to investigate the further wave propagation by means of a nonlinear theory.Our linear approach can determine just the instability threshold only.
In the next two section we numerically derive the dispersion curves of kink and sausage waves running along spicules and X-ray jets, respectively.
146
Topics in Magnetohydrodynamics
Dispersion diagrams of MHD surface waves in spicules
Before starting solving the wave dispersion relation (34), we have to specify some input parameters, characterizing both media (the jet and its surrounding).Bearing in mind, as we have already mention in the beginning of Sec. 2, the mass density of the environment is much less (50-100 times); thus we take the density contrast -the ratio of equilibrium plasma density outside to that inside of spicule -to be η = 0.02.Our choice of the sound and Alfvén speeds in the jet is c si = 10 km s −1 and v Ai = 80 km s −1 , respectively, while those speeds in the environment are correspondingly c se ∼ = 488 km s −1 and v Ae = 200 km s −1 .All these values are in agreement with the condition for the balance of total pressures at the flux tube interface -that condition can be expressed in the form which yields (Edwin & Roberts (1983) The two tube speeds (look at Eq. ( 32)) are c Ti = 9.9 km s −1 and c Te = 185 km s −1 .The kink speed, associated with the kink waves, in our case (see Eq. ( 35)) is 84 km s −1 .
It is obvious that dispersion Eq. ( 34) of either mode can be solved only numerically.Before starting that job, we normalize all velocities to the Alfvén speed v Ai inside the jet thus defining the dimensionless phase velocity V ph = v ph /v Ai and the Alfvén-Mach number M A = U/v Ai .
The wavelength is normalized to the tube radius a, which means that the dimensionless wave number is K = k z a.The calculation of wave attenuation coefficients requires the introduction of three numbers, notably the two ratios β = c 2 s /v 2 A correspondingly in the jet and its environment, and the ratio of the background magnetic field outside to that inside the flow, b = B e /B i , in addition to the density contrast, η.We recall that the two βs are 1.2 times smaller than the corresponding plasma betas in both media -the latter are given by the expressions β i,e = 2 βi,e /γ.
The value of the Alfvén-Mach number, M A , naturally depends on the value of the streaming velocity, U. Our choice of this value is 100 km s −1 that yields M A = 1.25.With these input values, we calculate the dispersion curves of first kink waves and then sausage ones.
Kink waves in spicules
We start by calculating the dispersion curves of kink waves assuming that the angular wave frequency, ω, is real.As a reference, we first assume that the plasma in the flux tube is static, i.e., M A = 0.The dispersion curves, which present the dependence of the normalized wave phase velocity on the normalized wave number, are in this case shown in Fig. 5.One can recognize three types of waves: a sub-Alfvénic slow magnetoacoustic wave (in k , as well as high-harmonic super-Alfvénic waves.Second, the slow magnetoacoustic wave (c Ti in Fig. 5) is replaced by two, now, super-Alfvénic waves, whose dispersion curves (in orange and cyan colours) are collectively labelled c Ti .These two waves have practically constant normalized phase velocities equal to 1.126 and 1.374, respectively, which are the (M A ∓ c 0 Ti )-values, where c 0 Ti is the normalized magnitude of the slow magnetoacoustic wave at M A = 0. Unsurprisingly, one gets a c l k -labelled curve, which is the mirror image of the c h k -labelled curve.That is why this curve is plotted in green and, as can be seen, it is now 148 Topics in Magnetohydrodynamics a forward propagating wave that has, however, a lower normalized phase velocity than that of its sister c h k -labelled dispersion curve.Moreover, there appears to be a family of generally backward propagating waves (below the c l k -labelled curve) plotted in blue colour that can similarly be considered as a mirror image of the high-harmonic super-Alfvénic waves.
The most interesting waves especially for the Type II spicules seems to be the waves labelled c k .It would be interesting to see whether these modes can become unstable at some, say critical, value of the Alfvén-Mach number, M A .To study this, we have to assume that the wave frequency is complex, i.e., ω → ω + i γ, where γ is the expected instability growth rate.Thus, the dispersion equation becomes complex (complex wave phase velocity and real wave number) and the solving a transcendental complex equation is generally a difficult task (Acton, 1990).
Before starting to derive a numerical solution to the complex version of Eq. ( 34), we can simplify that equation.Bearing in mind that the plasma beta inside the jet is very small (β i ∼ = 0.02) and that of the surrounding medium quite high (of order 7), we can treat the jet as a cool plasma and the environment as a hot incompressible fluid.We point out that according to the numerical simulation of spicules by Matsumoto & Shibata (Matsumoto & Shibata, 2010) the plasma beta at heights greater than 2 Mm is of that order (0.03-0.04) -look at Fig. 4 in their paper.For cool plasma, c s → 0; hence the normalized wave attenuation K, while for the incompressible environment c s → ∞ and the corresponding attenuation coefficient is simply equal to k z , i.e., κ e a = K.Under these circumstances the simplified dispersion equation of kink waves takes the form where, we recall, that κ i a = 1 − (V ph − M A ) 2 1/2 K, and the normalized wave phase velocity, V ph , is a complex number.We note that this simplified version of the dispersion equation of the kink waves closely reproduces the dispersion curves labelled c k in Figs. 5 and 6.
To investigate the stability/instability status of kink waves we numerically solve Eq. (37) using the Müller method (Muller, 1956) for finding the complex roots at fixed input parameters η = 0.02 and b = 0.35 and varying the Alfvén-Mach number, M A , from zero to some reasonable numbers.Before starting any numerical procedure for solving the aforementioned dispersion equation, we note that for each input value of M A one can get two c k -dispersion curves one of which (for relatively small magnitudes of M A ) has normalized phase velocity roughly equal to M A − 1 and a second dispersion curve associated with dimensionless phase velocity equal to M A + 1.These curves are similar to the dispersion curves labelled c l k and c h k in Fig. 6.The results of the numerical solving Eq. ( 37) are shown in Fig. 7.For M A = 0, except for the dispersion curve with normalized phase velocity approximately equal to 1, one can find a dispersion curve with normalized phase velocity close to −1 -that curve is not plotted in Fig. 7. Similarly, for M A = 2 one obtains a curve at V ph = 1 and another at V ph = 3, and so on.With increasing the magnitude of the Alfvén-Mach number kink waves change their structure -for small numbers being pseudosurface (body) waves and for M A 4 becoming pure surface modes.Another effect associated with the increase in M A , is that, for instance at M A 6, the shapes of pairs of dispersion curves begin to visibly change as can be seen in Figs.7 and 8.The most interesting observation is that for M A 8 both curves begin to merge and at M A = 8.5 they form a closed dispersion curve.The ever increasing of M A yields yet smaller closed dispersion curves -the two non-labelled ones depicted in Fig. 8 correspond to M A = 8.8 and 8.85, respectively.All these dispersion curves present stable propagation of the kink waves.However, for M A 8.9 we obtain a new family of wave dispersion curves that correspond to an unstable wave propagation.We plot in Fig. 8 four curves of that kind that have been calculated for M A = 8.9, 8.95, 9, and 9.05, respectively.The growth rates of the unstable waves are shown in Fig. 9.The instability that arises is of the Kelvin-Helmholtz type.We recall that the Kelvin-Helmholtz instability, which is named after Lord Kelvin and Hermann von Helmholtz, can occur when velocity shear is present within a continuous fluid, or when there is a sufficient velocity difference across the interface between two fluids (Chandrasekhar, 1961).In our case, we have the second option and the relative jet Fig. 9. Growth rates of unstable kink waves propagating along the flux tube at values of M A equal to 8.9, 8.95, 9, and 9.05, respectively.velocity, U, plays the role of the necessary velocity difference across the interface between the spicule and its environment.
The big question that immediately springs to mind is whether one can really observe such an instability in spicules.The answer to that question is obviously negative -to register the onset of a Kelvin-Helmholtz instability of kink waves travelling on a Type II spicule one would need to observe jet velocities of the order of or higher than 712 km s −1 !If we assume that the density contrast, η, possesses the greater value of 0.01 (which means that the jet mass density is 100 times larger than that of the ambient medium) and the ratio of the background magnetic fields, b, is equal to 0.36 (which may be obtained from a slightly different set of characteristic sound and Alfvén speeds in both media), the critical Alfvén-Mach number at which the instability starts is even much higher (equal to 12.6) -in that case the corresponding jet speed is U crt = 882 km s −1 -too high to be registered in a spicule.The value of 882 was computed under the assumption that the Alfvén speed inside the jet is 70 km s −1 .
We note that very similar dispersion curves and growth rates of unstable kink waves like those shown in Figs. 8 and 9 were obtained for cylindrical jets when both media were treated as incompressible fluids.In that case, dispersion Eq. ( 37) becomes a quadratic equation that provides solutions for the real and imaginary parts of the normalized wave phase velocity in closed forms, notably (Zhelyazkov, 2010; 2011) where and the discriminant D is Obviously, if D 0, then We note that our choice of the sign of √ D in the expression of Im(V ph ) is plus although, in principal, it might also be minus -in that case, due to the arising instability, the wave's energy is transferred to the jet.
It is interesting to note that for our jet with b = 0.35 and η = 0.02 the quadratic dispersion Eq. ( 38) yields a critical Alfvén-Mach number for the onset of a Kelvin-Helmholtz instability equal to 8.87, which is lower than its magnitude obtained from Eq. ( 37).With this new critical Alfvén-Mach number, the required jet speed for the instability onset is ∼ =710 km s −1 .The most astonishing result, however, is the observation that the dispersion curves and the corresponding growth rates, when kink waves become unstable, -look at Figs. 10 and 11 -are very similar to those shown in Figs. 8 and 9.It is worth mentioning that for the Fig. 10.Dispersion curves of kink waves derived from Eq. ( 38) for relatively large values of M A .same η = 0.02, but for b = 1 (equal background magnetic fields), the quadratic equation yields a much higher critical Alfvén-Mach number (=11.09), which means that the critical jet speed grows up to 887 km s −1 .This consideration shows that both the density contrast, η, and the ratio of the constant magnetic fields, b, are equally important in determining the critical Alfvén-Mach number.Moreover, since Eq. ( 37) and its simplified form as quadratic Eq. ( 38) yield almost similar results (both for dispersion curves and growth rates when kink waves become unstable) firmly corroborates the correctness of the numerical solutions to the complex dispersion Eq. (37).Fig. 11.Growth rates of unstable kink waves calculated from Eq. ( 38) at values of M A equal to 8.87, 8.9, 8.95, and 9, respectively.
Sausage waves in spicules
The dispersion curves of sausage waves both in a static and in a flowing plasma shown in Figs. 12 and 13 are very similar to those of kink waves (compare with Figs. 5 and 6).The latter curves were calculated from dispersion Eq. ( 34) with azimuthal mode number m = 0 for the same input parameters as in the case of kink waves.The main difference is that the c k -labelled green dispersion curve is replaced by a curve corresponding to the Alfvén wave inside the jet.We note that the dispersion curve in Fig. 13 corresponding to a normalized phase velocity 0.25 is labelled v l Ai because it can be considered as the one dispersion curve of the (1.25 ∓ 1)-curves that can be derived from the dispersion equation.As in the case of kink waves, the dispersion curve corresponding to the higher speed has the label v h Ai .Here we also get the two almost dispersionless curves collectively labelled c Ti (in the same colours, orange and cyan, as in Fig. 6) with normalized wave phase velocities equal to 1.126 and 1.374.When examining the stability properties of sausage waves as a function of the Alfvén-Mach number, M A , we use the same Eq.( 37) while changing the order of the modified Bessel functions from 1 to 0. As in the case of kink waves, we are interested primarily in the behaviour of the waves whose phase velocities are multiples of the Alfvén speed.The results of numerical calculations of the complex dispersion equation are shown in Fig. 14.It turns out that for all reasonable Alfvénic Mach numbers the waves are stable.This is unsurprising because the same conclusion was drawn by solving precisely the complex dispersion equation governing the propagation of sausage waves in incompressible flowing cylindrical plasmas (Zhelyazkov, 2010;2011).In Fig. 14 almost all dispersion curves have two labels: one for the (M A − 1)-labelled curve at given M A (the label is below the curve), and second for the (M ′ A + 1)-labelled curve associated with the corresponding (M ′ A = M A − 2)-value (the label is above the curve).This labelling is quite complex because for all M A we find dispersion curves that overlap: for instance, the higher-speed dispersion curve (i.e., that associated with 154 Topics in Magnetohydrodynamics the (M A + 1)-value) for M A = 0 coincides with the lower-speed dispersion curve (i.e., that associated with the (M A − 1)-value) for M A = 2.In contrast to the kink waves, which for M A 4 are pure surface modes, the sausage waves can be both pseudosurface and pure surface modes, or one of the pair can be a surface mode while the other is a pseudosurface one.For example, all dispersion curves for M A = 0 and 8 correspond to the pseudosurface waves while the curves' pair associated with M A = 4 describes the dispersion properties of pure surface waves.For the other Alfvén-Mach numbers, one of the wave is a pseudosurface and the other is a pure surface.However, there is a 'rule': if, for instance, the higher-speed wave with M A = 10 is a pseudosurface mode, the lower-speed wave for M A = 12 is a pure surface wave.We finish the discussion of sausage waves with the following conclusion: with increasing the Alfvén-Mach number M A the initially independent high-harmonic waves and their mirroring counterparts begin to merge -this is clearly seen in Fig. 14 for M A = 12 -the resulting dispersion curve is in red colour.A similar dispersion curve can be obtained, for example, for M A = 10; the merging point of the corresponding two high-harmonic dispersion curves moves, however, to the right -it lies at k z a = 1.943.It is also evident that in the long wavelength limit the bottom part of the red-coloured dispersion curve describes a backward propagating sausage pseudosurface wave.Another peculiarity of the same dispersion curve is the circumstance that for the range of dimensionless wave numbers between 0.7 and 1.23, one can have two different wave phase velocities.Which one is detected, the theory cannot predict.
Dispersion diagrams of MHD surface waves in soft X-ray jets
The geometry model of solar X-ray jets is the same as for the spicules -straight cylinder with radius a.Before starting the numerical calculations, we have to specify, as before, the input parameters.The sound and Alfvén speed that are typical for X-ray jets and their environment are correspondingly c si = 200 km s −1 , v Ai = 800 km s −1 , c se = 120 km s −1 , and v Ae = 2300 km s −1 .With these speeds the density contrast is η = 0.13.The same η (calculated from a slightly different set of sound and Alfvén speeds) Vasheghani Farahani et al. (Vasheghani Farahani et al., 2009) used in studying the propagation of transfer waves in soft X-ray coronal jets.Their analysis, however, is restricted to the long-wavelength limit, |k|a ≪ 1 in their notation, while our approach considers the solving the exact dispersion relation without any limitations for the wavelength -such a treating is necessary bearing in mind that the wavelengths of the propagating along the jets fast magnetoacoustic waves might be of the order of X-ray jets radii.We remember that the soft X-ray coronal jets are much ticker than the Type II spicules.
With our choice of sound and Alfvén speeds, the tube velocities in both media (look at Eq. ( 32)), respectively, are c Ti ∼ = 194 km s −1 and c Te = 119.8km s −1 .The kink speed (see Eq. ( 35)) turns out to be rather high, namely ∼ =1078 km s −1 .To compare our result of the critical jet speed for triggering the Kelvin-Helmholtz instability with that found by Vasheghani Farahani et al. (Vasheghani Farahani et al., 2009), we take the same jet speed as theirs, notably U = 580 km s −1 , which yields Alfvén-Mach number equal to 0.725.(For simplicity we assume that the ambient medium is static, i.e., U e = 0.) Thus, our input parameters for the numerical computations are η = 0.13, βi ∼ = 0.06, βe ∼ = 0.003, b = 1.035, and M A = 0.725.
We note that b = 1.035 means that the equilibrium magnetic fields inside and outside the X-ray coronal jet are almost identical.Moreover, due to the relatively small plasma betas, β e = 0.0033 and β i = 0.075, respectively, the magnetic pressure dominates the gas one in both media and the propagating waves along X-ray jets should accordingly be predominantly transverse.
Kink waves in soft X-ray coronal jets
The dispersion diagrams of kink waves propagating along a static-plasma (U = 0) flux tube are shown in Fig. 15.They, the dispersion curves, have been obtained by numerically finding the solutions to dispersion Eq. ( 34) with mode number m = 1 and input data listed in the introductory part of this section with M A = 0.The dispersion curves are very similar to those for spicules (look at Fig. 5).Here, there is, however, one distinctive difference: the c Te -labelled dispersion curve (blue color) lies below the curve corresponding to the tube velocity inside the jet (magenta coloured line labelled c Ti ).The dispersion curves of the high-harmonic super-Alfvénic waves (red colour) lye, as usual, above the green curve associated with the kink speed.What actually does the flow change when is taken into account?The answer to this question is given in Fig. 16.As in the case with spicules, the flow duplicates the c Ti -labelled dispersion curve in Fig. 15.The two, again collectively labelled c Ti dispersion curves, are sub-Alfvénic waves having normalized phase velocities equal to 0.482 and 0.968 in correspondence to the (M A ∓ c 0 Ti )-rule.All the rest curves have the same behaviour and notation as in Fig. 6.The only difference here is the circumstance that the lower-speed c k -curve lies below the zero line, i.e., it describes a backward propagating kink pseudosurface wave.This is because the Alfvén-Mach number now is less than one.We note also that the c Te -labelled dispersion curve is unaffected by the presence of flow.
The most intriguing question is whether the c k -labelled wave can become unstable at any reasonable flow velocity.Before answering that question, we have, as before, to simplify dispersion Eq. (34).Since the two plasma betas, as we have already mentioned, are much less that one, we can treat both media (the X-ray jet and its environment) as cool plasmas.In this case, the simplified dispersion equation of kink waves (in complex variables!)takes the form where We numerically solve this equation by varying the magnitude of the Alfvén-Mach number, M A , using as before the Müller method and the dispersion curves of both stable and unstable kink waves are shown in Fig. 17.In this figure, we display only the most interesting, upper, part of the dispersion diagram, where one can observe the changes in the shape of the dispersion curves related to the corresponding c k -speeds.First and foremost, the shape of the merging dispersion curves (labelled 4, 4.1, 4.2, and 4.23 in Fig. 17) is distinctly different from that of the similar curves in Fig. 8. Here, the curves, which are close to the dispersion curves corresponding to an unstable wave propagation (the first one is with label 4.25)a r e semi-closed in contrast to the closed curves in Fig. 8.The wave growth rates corresponding to 4.3,4.35,and 4.4 are shown in Fig. 18.As can be seen, the shape of those curves is completely different to that of the wave growth rates shown in Figs. 9 and 11.We note that all dispersion curves for M A 4 correspond to pure surface kink waves.
It is clear from Figs. 17 and 18 that the critical Alfvén-Mach number, which determines the onset of a Kelvin-Helmholtz instability of the kink waves, is equal to 4.25 -the corresponding flow speed is 3400 km s −1 , that is much higher than the value we have used for calculating the dispersion curves in Fig. 16.The critical Alfvén-Mach number evaluated by Vasheghani Farahani et al. (Vasheghani Farahani et al., 2009), is 4.47, that means the corresponding flow speed must be at least 3576 km s −1 .If we use our Eq.( 39) with the same ρ e /ρ i = 0.13, but Fig. 17.Dispersion curves of kink waves propagating along a flux tube modelling X-ray jets for relatively large values of M A .
Fig. 18.Growth rates of unstable kink waves propagating along a flux tube modelling X-ray jets at values of M A equal to 4. 25, 4.3, 4.35, and 4.4, respectively.with a little bit higher B e /B i = 1.1132, we get that the critical jet speed for triggering the Kelvin-Helmholtz instability in a soft X-ray coronal get would be 4.41v Ai = 3528 km s −1 .It is necessary, however, to point out that the correct density contrast that can be calculated from Eq. ( 36) with c si = 360 km s −1 , v Ai = 800 km s −1 , c se = 120 km s −1 , and v Ae = 2400 km s −1 (the basic speeds in Vasheghani Farahani et al. paper) is ρ e /ρ i = 0.137698, which is closer to 0.14 rather than to 0.13.The solving Eq. ( 39) with the exact value of the density contrast (=0.1377) and the same B e /B i as before (=1.1132) yields a critical flow speed equal to 4.31v Ai = 3448 km s −1 .All these calculations show that even small variations in the two ratios ρ e /ρ i and B e /B i lead to visibly different critical Alfvén-Mach numbers -our choice of the sound and Alfvén speeds gives the smallest value of the critical M A .According to the more recent observations (Madjarska, 2011;Shimojo & Shibata, 2000), the soft X-ray coronal jets can have velocities above 10 3 km s −1 and it remains to be seen whether a speed of 3400 km s −1 can 158 Topics in Magnetohydrodynamics trigger the onset of a Kelvin-Helmholtz instability of the kink surface waves running along the jets.
Sausage waves in soft X-ray coronal jets
The dispersion diagram of sausage waves in a static-plasma flux tube should be more or less similar to that of the kink waves under the same circumstances.Here, however, the green curve in Fig. 15, associated with the kink speed c k , is now replaced by a dispersionless line related to the Alfvén speed -see the green curve in Fig. 19.Another difference is the number of the red-coloured high-harmonic super-Alfvénic waves -here it is 3 against 2 in Fig. 15.The dispersion diagram of the same mode in a flow with M A = 0.725 (U = 580 km s −1 ) is also predictable -the presence of the flow is the reason for splitting the green v Ai -labelled curve in Fig. 19 into two sister curves labelled, respectively, v l Ai and v h Ai -look at Fig. 20.Observe that the normalized speeds of those two waves are, as expected, equal to M A ∓ 1 -in our case the lower-speed Alfvén wave is a backward propagating one.The two sub-Alfvénic waves, whose dispersion curves are in orange and cyan colours and collectively labelled c Ti have practically the same normalized phase velocities as the corresponding curves in Fig. 16.We note that one of the aforementioned curve is slightly decreasing (the orange curve) whilst the other, cyan-coloured, curve is slightly increasing when the normalized wave number k z a becomes larger -the same holds for the analogous waves in spicules.One can see in Fig. 20 a symmetry between the upper and bottom parts of the dispersion diagram -the 'mirror line' lies somewhere between the orange and cyan dispersion curves.
The 'evolution' of the green v Ai -labelled curves in Fig. 20 with the increase in Alfvén-Mach number is illustrated in Fig. 21.It is unsurprising that the sausage surface waves in soft X-ray coronal jets (like in spicules) are unaffected by the Kelvin-Helmholtz instability.Similarly as in Fig. 14, we have an overlapping of the dispersion curves associated with different Alfvén-Mach numbers.The labelling of dispersion curves in Fig. 21 is according to the previously discussed in Sec.4.2 rule, namely each horizontal dispersion curve possesses two labels: one for the (M A − 1)-curve at given M A (the label is below the curve), and second for the (M ′ A + 1)-curve associated with the corresponding (M ′ A = M A − 2)-value (the label is above the curve).Interestingly, even for the relatively low M A = 1 both the lower-and the high-speed curves describe pure surface sausage waves.The same is also valid for the dispersion lines corresponding to Alfvén-Mach numbers equal to 4 and 5.The lower-speed Alfvénic curve at M A = 6 is related to a pseudosurface sausage wave while the higher-speed one (with normalized phase velocity equal to 7) corresponds to a pure surface mode.(With M A = 2 we have just the opposite situation.)At M A 7 all waves are pseudosurface ones.Each choice of the Alfvén-Mach number indeed requires separate studying of the wave's proper mode.Apart from Alfvénic waves and the pair of sub-Alfvénic modes (orange and cyan curves in Fig. 20), there appear to be families of high-speed harmonic waves (with red and blue colours of their dispersion curves), which also change with the increase of M A .Initially being independent, with the growing of the Alfvén-Mach number, they change their shapes and one may occur to observe the merging of, for instance, the first curves of each family as this has been shown in Fig. 14 (see the red curve there).Here, however, the situation is über-complicated -instead of merging we encounter a new phenomenon, notably the touching of two dispersion curves -see the green and red curves in Fig. 21 (calculated for M A = 5), and with more details in Fig. 22.The tip of the horizontal spike lies at k z a = 1.395.Another peculiarity of this complex curve is the inverted-s shape of the red curve between the dimensionless wave numbers 1.28 and 1.29.Across that range, at a fixed k z a, one can 'detect' four different normalized wave phase velocities.Similar sophisticated dispersion curves might also be obtained for M A = 4orM A = 6.Maybe nowadays the sausage mode is not too interesting for the spacecrafts' observers but, who knows, it can sometime become important in interpretation observational data.
Conclusion
We now summarize the main findings of our chapter.We have studied the dispersion properties and the stability of the MHD normal modes running along the length of Type II spicules and soft X-ray coronal jets.Both have been modelled as straight cylindrical jets of ideal cool plasma surrounded by a warm/hot fully ionized medium (for spicules) or as flux tubes of almost cool plasma surrounded by a cool medium (for the X-ray jets).The wave propagation has been investigated in the context of standard magnetohydrodynamics by using linearized equations for the perturbations of the basic quantities: mass density, pressure, fluid velocity, and wave magnetic field.The derived dispersion equations describe the well-known kink and sausage mode influenced by the presence of spicules' or X-ray jets' moving plasma.The streaming plasma is characterized by its velocity U, which is directed along the background magnetic fields B i and B e inside the jet and in its environment.An alternative and more convenient way of specifying the jet is by defining the Alfvén-Mach number: the ratio of jet speed to the Alfvén speed inside the jet, M A = U/v Ai .The key parameters controlling the dispersion properties of the waves are the so-called density contrast, η = ρ e /ρ i , the ratio of the two background magnetic fields, b = B e /B i , and the two ratios of the squared sound and Alfvén speeds, βe = c 2 se /v 2 Ae and βi = c 2 si /v 2 Ai .How does the jet change the dispersion curves of both modes (kink and sausage waves) in a static-plasma flux tube?The answers to that question are as follows: • The flow shifts upwards the specific dispersion curves, the kink-speed curve for kink waves and Alfvén-speed curve for sausage waves, as well as the high-harmonic fast waves of both modes.The sub-Alfvénic tube speed inside the jet, c Ti , belongs to two waves with normalized phase velocities equal to M A ∓ c Ti /v Ai .One also observes such a duplication of the c k -o rv Ai -speed curve of kink or sausage waves.Below the lower-speed c k -o r v Ai -curve there appears to be a set of dispersion curves, which are a mirror image of the high-harmonic fast waves.We note that the flow does not affect the c Te -speed dispersion curve associated with the tube velocity in the environment.
• For a typical set of characteristic sound and Alfvén speeds in both media (the jet and its environment) at relatively small Alfvén-Mach numbers both modes are pseudosurface waves.With increasing M A , some of them become pure surface waves.For kink waves, this finding is valid for M A 4.
• The kink waves running along the jet can become unstable when the Alfvén-Mach number, M A , exceeds some critical value.That critical value depends upon the two input parameters, η and b; the increase in the density contrast, ρ e /ρ i , decreases the magnitude of the critical Alfvén-Mach number, whilst the increase in the background magnetic fields ratio, B e /B i , leads to an increase in the critical M A .For our choice of parameters for Type II spicules (η = 0.02 and b = 0.35) the value of the critical M A is 8.9.This means that the speed of the jet must be at least 712 km s −1 for the onset of the Kelvin-Helmholtz instability of the propagating kink waves.Such high speeds of Type II spicules have not yet been detected.For the soft X-ray coronal jets, due to the greater density contrast (η = 0.13) and almost equal background magnetic fields (b = 1.035), the critical Alfvén-Mach number is approximately twice smaller (=4.25), but since the jet Alfvén speed is 10 times larger than that of spicules, the critical flow speed, U crt , is much higher, namely 3400 km s −1 .Such high jet speeds can be in principal registered in soft X-ray coronal jets.
A rough criterion for the appearance of the Kelvin-Helmholtz instability of kink waves is the satisfaction of an inequality suggested by Andries & Goossens (Andies & Goossens, 2001), which in our notation reads This criterion provides more reliable predictions for the critical M A when b ≈ 1 (Zhelyazkov, 2010).In particular, for a X-ray jet with η = 0.13 and b = 1.035 the above criterion yields M A > 3.87, which is lower than the numerically found value of 4.25.
• The onset of the Kelvin-Helmholtz instability for kink surface waves running along a cylindrical jet, modelling a Type II spicule, is preceded by a substantial reorganization of wave dispersion curves.As we increase the Alfvén-Mach number, the pairs of highand low-speed curves (look at Fig. 8) begin to merge transforming into closed dispersion curves.After a further increase in M A , these closed dispersion curves become smallerthis is an indication that we have reached the critical M A at which the kink waves are subjected to the Kelvin-Helmholtz instability -the unstable waves propagate across the entire k z a-range having growth rates depending upon the value of the current M A .W e note that this behaviour has been observed for kink waves travelling on flowing solar-wind plasma (Zhelyazkov, 2010;2011).
Topics in Magnetohydrodynamics
For the X-ray jets, the dispersion curves' reorganization, because the environment has been considered as a cool plasma, is different -now, at high enough flow speeds, the merging lower-and higher-speed c k -dispersion curves take the form of semi-closed loops (see Fig. 17).As we increase the flow speed (or equivalently the Alfvén-Mach number), the semi-closed loops shrink and at some critical flow speed the kink wave becomes unstable and the instability is of the Kelvin-Helmholtz type.We note that the shapes of the waves' growth rates of kink waves in spicules and soft X-ray coronal jets are distinctly differentcompare Figs. 9 and 18.
• We have found that the sausage waves are unaffected by the Kelvin-Helmholtz instability.This conclusion was also previously drawn for the sausage modes in flowing solar-wind plasma (Zhelyazkov, 2010;2011).
As we have seen, very high jet speeds are required to ensure that the Kelvin-Helmholtz instability occurs for kink waves propagating in Type II spicules associated with a subsequent triggering of Alfvén-wave turbulence, hence the possibility that this mechanism is responsible for chromospheric/coronal heating has to be excluded.However, a twist in the magnetic field of the flux tube or its environment may have the effect of lowering the instability threshold (Bennett et al., 1999;Zaqarashvili et al., 2010) and eventually lead to the triggering of the Kelvin-Helmholtz instability.According to Antolin & Shibata (Antolin & Shibata, 2010), a promising way to ensure spicules'/coronal heating is by means of the mode conversion and parametric decay of Alfvén waves generated by magnetic reconnection or driven by the magneto-convection at the photosphere.However, spicules can be considered as Alfvén wave resonant cavities (Holweg, 1981;Leroy, 1981) and as Matsumoto & Shibata (Matsumoto & Shibata, 2010) claim, the waves of the period around 100-500 s can transport a large amount of wave energy to the corona.Zahariev & Mishonov (Zahariev & Mishonov, 2011) state that the corona may be heated through a self-induced opacity of high-frequency Alfvén waves propagating in the transition region between the chromosphere and the corona owing to a considerable spectral density of the Alfvén waves in the photosphere.Another trend in explaining the mechanism of coronal heating is the dissipation of Alfvén waves' energy by strong wave damping due to the collisions between ions and neutrals (Song & Vasyli ūnas, 2011;Tsap et al., 2011).In particular, Song & Vasyli ūnas, by analytically solving a self-consistent one-dimensional model of the plasma-neutral-electromagnetic system, show that the damping is extremely strong for weaker magnetic field and less strong for strong field.Under either condition, the high-frequency portion of the source power spectrum is strongly damped at the lower altitudes, depositing heat there, whereas the lower-frequency perturbations are nearly undamped and can be observed in the corona and above when the field is strong.
The idea that Alfvén waves propagating in the transition region can contribute to the coronal heating was firmly supported by the observational data recorded on April 25, 2010 by NASA's Solar Dynamics Observatory (see Fig. 2).As McIntosh et al. (McIntosh et al., 2011) claim, "SDO has amazing resolution, so you can actually see individual waves.Previous observations of Alfvénic waves in the corona revealed amplitudes far too small (0.5 km s −1 ) to supply the energy flux (100-200 W m −2 ) required to drive the fast solar wind or balance the radiative losses of the quiet corona.Here we report observations of the transition region (between the chromosphere and the corona) and of the corona that reveal how Alfvénic motions permeate the dynamic and finely structured outer solar atmosphere.The ubiquitous outward-propagating Alfvénic motions observed have amplitudes of the order of 20 km s −1 and periods of the order of 100-500 s throughout the quiescent atmosphere (compatible with recent investigations), and are energetic enough to accelerate the fast solar wind and heat the quiet corona." Notwithstanding, as we have already mentioned in the end of Sec.5.1, the possibility for the onset of a Kelvin-Helmholtz instability of kink waves running along soft X-ray coronal jets should not be excluded -at high enough flow speeds, which in principal are reachable, one can expect a dramatic change in the waves' behaviour associated with an emerging instability, and subsequently, with an Alfvén-wave-turbulence heating.
In all cases, the question of whether large coronal spicules can reach coronal temperatures remains open -for a discussion from an observational point of view we refer to the paper by Madjarska et al. (Madjarska et al., 2011).
Fig. 4 .
Fig. 4. Geometry of a spicule flux tube containing flowing plasma with velocity U.
Fig. 5 .
Fig. 5. Dispersion curves of kink waves propagating along the flux tube at M A = 0.magenta colour) labelled c Ti (which is actually the normalized value of c Ti to v Ai ), an almost Alfvén wave labelled c k (the green curve), and a family of super-Alfvénic waves (the red dispersion curves).We note that one can get by numerically solving Eq. (34) the mirror images (with respect to the zeroth line) of the c k -labelled dispersion curve, as well as of the fast super-Alfvénic waves -both being backward propagating modes that are not plotted in Fig.5.The next Fig.6shows how all these dispersion curves change when the plasma inside the
Fig. 7 .
Fig. 7. Dispersion curves of kink waves propagating along the flux tube at various values of M A .
Fig. 12 .
Fig. 12. Dispersion curves of sausage waves propagating along the flux tube at M A = 0.
Fig. 13 .
Fig. 13.Dispersion curves of sausage waves propagating along the flux tube at M A = 1.25.
Fig. 14 .
Fig. 14.Dispersion curves of sausage waves propagating along the flux tube at various values of M A .
Fig. 15 .
Fig. 15.Dispersion curves of kink waves propagating along a flux tube modelling X-ray jet at M A = 0.
Fig. 16.Dispersion curves of kink waves propagating along a flux tube modelling X-ray jet at M A = 0.725.
Fig. 19 .
Fig. 19.Dispersion curves of sausage waves propagating along a flux tube modelling X-ray jet at M A = 0.
Fig. 20 .
Fig. 20.Dispersion curves of sausage waves propagating along a flux tube modelling X-ray jet at M A = 0.725.
Fig. 21 .
Fig. 21.Dispersion curves of sausage waves propagating along a flux tube modelling X-ray jets at various values of M A .
Fig. 22 .
Fig. 22. Zoomed part of the dispersion diagram in Fig. 21 where two dispersion curves of super-Alfvénic sausage waves (at M A = 5) are touching each other. | 16,111.2 | 2012-03-09T00:00:00.000 | [
"Physics",
"Environmental Science"
] |
An Inter- and Intra-Subject Transfer Calibration Scheme for Improving Feedback Performance of Sensorimotor Rhythm-Based BCI Rehabilitation
The Brain Computer Interface (BCI) system is a typical neurophysiological application which helps paralyzed patients with human-machine communication. Stroke patients with motor disabilities are able to perform BCI tasks for clinical rehabilitation. This paper proposes an effective scheme of transfer calibration for BCI rehabilitation. The inter- and intra-subject transfer learning approaches can improve the low-precision classification performance for experimental feedback. The results imply that the systematical scheme is positive in increasing the confidence of voluntary training for stroke patients. In addition, it also reduces the time consumption of classifier calibration.
INTRODUCTION
Brain-computer interface (BCI) is developed as an extrinsic pathway for human-machine interaction in a reliable way (Birbaumer, 2006). It is effective for the disabled to control external devices by neural activities (Buch et al., 2008). Stroke patients with motor disabilities in particular, are able to perform BCI tasks for clinical rehabilitation (Meng et al., 2016). In this treatment, sensorimotor rhythm changes are used as neurological modulation for active intervention (Mane et al., 2019).
During rehabilitation, the patients are requested to attempt or to imagine performing a movement. Then, motor attempt (MA) or motor imagery (MI)-BCI systems will output a synchronized sensory biofeedback (e.g., robotic arm recovery) by a trained classifier based on a prior dataset (Pillette et al., 2020). In the intervention, the functional motor is significantly enabled by neurophysiological activity (Xu et al., 2014). This is an ongoing process of brain plasticity and functional recovery (Remsik et al., 2019). Recent studies have reported the improvement of limb movement for stroke patients using long-term sensorimotor rhythm (SMR)-BCI interventions (Ramos-Murguialday et al., 2013;Pichiorri et al., 2015;Bundy et al., 2017).
Nevertheless, BCI rehabilitation is limited by poor-efficiency recognition algorithms and model-personalized variability (Grosse-Wentrup et al., 2011). Relevant work has proved that BCI decoding accuracy was insufficient for rehabilitation outcomes (Mane et al., 2020). Moreover, the failures of BCI feedback also reduce the confidence of trainees subtly (Foong et al., 2019). Hence, various improvements of pattern recognition and model calibration should be made to enhance SMR-BCI performance in the clinical application.
Conventionally, SMR features are effectively extracted by a time-frequency analysis for the healthy (Pfurtscheller and Da Silva, 1999). For instance, the event-related desynchronization (ERD) amplitudes oscillated in the µ rhythm was detected during the motor imagery task for pattern recognition (Huang et al., 2012;Saha et al., 2017). Furthermore, the common spatial pattern (CSP) algorithm was proposed for feature extracting in the spatial domain (Wang et al., 2006;Arvaneh et al., 2013). It was efficient for mining the significant difference of two-type motor tasks. However, the degeneration of neural activation due to post stroke, has a negative impact on BCI performance (Shu et al., 2018). Thus, the signal characteristics are much lower than those of healthy individuals during motor tasks (De Vries et al., 2013;Caria et al., 2020). Therefore, increasing the precision of SMR-BCI using a mathematical methods is meaningful for BCI intervention.
To solve this problem, transfer learning (TL), which applies the dataset in source domains for compensating insufficient labeled data in a target domain, has been proposed for MI-BCIs (Samek et al., 2013b;Azab et al., 2018). This technology is developed in several ways, such as instance selection (Wu, 2016;Hossain et al., 2018), feature calibration (Samek et al., 2013a;Zhao et al., 2019) and classification domains (Vidaurre et al., 2010;He and Wu, 2019). For instance, for selection, active learning is typically presented for selecting training data from intra-or inter-subject labeled trials (Hossain et al., 2018). The target of this approach is to increase the informative trials of the new subject by adding sufficient existing labeled trials that were close to prior dataset. In the feature domain, transfer calibration approaches mainly concentrate on regulating the covariance matrix estimation and optimization function for improving the performance of CSP models. For example, researchers regularized the CSP filter by the average of the common feature space from other subjects (Kang et al., 2009). Moreover, the efficiency of domain adaptation has been verified for MI-BCI in the classification domain (Vidaurre et al., 2010). Kobler et al. constructed a Restricted Boltzmann Machine (RBM) based on public baseline data and applied it for the MI-BCI task (Kobler and Scherer, 2016). Recently, multi-task learning has been presented in the relevant experiment (Jayaram et al., 2016;Gao et al., 2019), where the weight parameters of intersubjective classifiers were learned jointly for minimizing the dissimilarities between these existing classification models and the target model. However, BCIs controlled by these approaches have been proven to be only valid for healthy individuals. None of them are experimentally evaluated for BCI rehabilitation.
This paper first proposes a transfer calibration scheme to improve the rehabilitation outcomes of SMR-BCI. First, we utilize a transfer learning algorithm, whose effects have been verified in the task of MI-BCI (Azab et al., 2019), to validate the reliability for stroke patients. Then, we discuss the respective applicability between intra-and inter-subjective conditions. The results show that our proposed approaches improved the lowprecision classification performance. Accordingly, we generalize the scheme of transfer calibration for SMR-BCI intervention. This methodology applies for other transfer learning algorithms to increase the precision of BCI feedback.
Experimental Paradigm and Subjects
Seven stroke patients aged between 30 and 65, recruited from the Department of Rehabilitation Medicine of Huashan Hospital participated in our experiments ( Table 1). All of them were naive to BCI and provided consent to be involved in the study. The inclusion criteria for this study were as follows: (1) unilateral motor dysfunction diagnosed by computer tomography or magnetic resonance imaging (MRI); (2) first onset stroke patient; (3) the time since stroke onset was more than 4 weeks and less than 6 months; (4) the assessment of cognitive functions: Mini-Mental State Examination score > 25. The exclusion criteria were listed below: (1) unstable medical conditions; (2) severe vision problems; (3) the intervention treatment by other brain stimulations during the study period.
In the experiment, BCI intervention was performed in three sessions a week for each patient. And it lasted 1 month, with a total of 12 sessions. One session contained three runs, each run had 30 trials for each mental task (motor attempt or idle state) performed in a random order. Subjects underwent two experimental tasks. In the task of motor attempt (MA), patients were required to attempt motion of wrist extension with affected hands continually, but not to have compensatory movements. In the other task of idle state (IS), they needed to do nothing but rest ( Figure 1A).
The experimental paradigm is shown in Figure 1B. In one trial, the duration time was about 11 s. The patient was asked to sit in front of a computer screen, with arms resting on a desk. A white arrow presented on the center of the screen from 0 to 3 s. The patient was instructed to keep still and rest. Then, an alternative indicator (a red square or a red rectangle) was displayed in the center of the screen, representing a task (MA or IS). After the command disappeared, the subject was required to perform the corresponding task for 5 s until the white cross disappeared. Finally, the rest interval was adopted randomly to relax.
Evaluation of BCI Performance
BCI performance was evaluated by classification accuracy for two mental tasks. As shown in the figure of experimental paradigm, the subject conducted the mental task from 4.5 s to 9.5 s. Hence, EEG signals were extracted from [4.5 9.5] s of each single trial. And the features were filtered with the common spatial pattern (CSP) method, whose log-variance of the first and last three components were selected as output vectors. Then, a transfer learning scheme was used to improve the classification efficiency. The Logistic Regression (LR) -based classifier was used as the baseline approach. Its classification parameters w t were calibrated using the following function, where H, f t , l t and . 2 denote the cross-entropy, feature vectors extracted from the EEG dataset, label vector, and 2norm functions, respectively. Conventionally, the parameters were supposed be trained by large prior data. Transfer learning algorithms had been proposed for improving the efficiency of parameter calibration of new subjects without subject-specific data (Hossain et al., 2018). In the framework of logistic regression -based transfer learning (LRTL), a regularization term penalizing dissimilarities R t was used for transferring the prior distribution of the existing classification parameters into the calibration of the present training of new target subjects or sessions. In this opinion, the classification parameters were calculated as follows, and the R t was decided by estimating the similarity between feature distribution of existing models and that of current few training data, where µ and t were, respectively, obtained as Here, T denoted transpose of the matrix, the functions of diag and trace were defined as the diagonal elements and the sum of the diagonal elements of a matrix. Furthermore, Kullback-Leibler (KL) divergence was added to solve the problem of weight distribution between existing models and the target model. It was supposed to give larger weights to more similar distributions and smaller weights to less similar distributions. KL divergence of two EEG sets (E 0 , E 1 ) were represented as the following form, The accuracies of transfer learning approaches were higher than that of baseline approach. The accuracies of transfer learning approaches were higher than that of baseline approach.
where det and K denoted the determinant function and the dimension of the data, respectively. Thus, the t was updated as below, where α and µ was computed as Here, the divergence was calculated by averaging the KL divergences calculated for each class separately. The details of weighted LRTL (wLRTL) approach and the above other algorithms can be reviewed in Azab et al. (2019).
In this study, we discussed the transfer scheme for several subjective conditions of prior knowledge. Inter-and Intra-subject transfer learning approaches were both used to evaluate the methodological effectiveness. Inter-subject transfer calibration trained this classifier with prior experimental trials from other subjects while intra-subject transfer calibration performed this work using its own existing dataset. The target of our research was to find which kind of transfer strategies could be made to improve the online single-trial accuracy of BCI rehabilitation. The right bio-feedback (e.g., robotic arm) was able to raise the subjective confidence and patience to improve the therapeutic effect.
In this experiment, the performances of first sessions were unavailable for intra-subject transfer learning algorithms on account of no prior dataset. We used the sequential collecting of source data for transfer calibration. That meant that the target session (e.g., Session 5) would be trained by all prior collections (e.g., Session 1, Session 2, Session 3, Session 4). It was consistent with the real condition of model training. Meanwhile, the collection of source data was picked from all other subjects under the inter-subject condition. Specifically, the dataset of each subject which obtained the best performance in all sessions was used for transfer learning. Moreover, 5-fold cross validation was conducted for each approach. All 31 channels of the EEG data were selected for pattern classification. The 45 trials of MA and IS tasks (45 trials per class) were randomly divided into five sets. Four sets were used to train the classifier and the other set was tested to evaluate the performance.
The Experimental Performance of Intra-Subject Based Transfer Learning Approach
In our study, the precision of pattern recognition was considered as the most important index for BCI rehabilitation. Figure 2 lists the classification results of the above intra-subject transfer learning approaches (LRTL, wLRTL), as well as the baseline approach (LR). The average classification accuracies of all patients were higher than 60% for three different algorithms, except for P6. And it was indicated that the pattern of motor attempt could be distinctive from that of an idle state without motor attempt. However, a paired t-test with Bonferroni correction showed that no discriminatory differences were presented between transfer learning approaches and the baseline approach (LR vs. LRTL: p = 0.0277; LR vs. wLRTL: p = 0.0613; LRTL vs. wLRTL: p = 0.6085). This result suggests that transfer calibration did not significantly improve the BCI performance.
Furthermore, we analyzed the performance of low-precision (≤ 60%) sessions for all patients ( Table 2). Paired t-test with Bonferroni correction showed that the classification results of transfer learning approaches were significantly greater than that of the baseline approach (LR vs. LRTL: p = 0.0001; LR vs. wLRTL: p = 0.0001; LRTL vs. wLRTL: p = 0.0563). It was meaningfully revealed that transfer calibration could improve the BCI performance induced by poor model training.
Inter-Subject Based Transfer Learning Approach for MI-Based BCI Rehabilitation
Similarly, Table 3 lists the classification results of all three algorithms under the inter-subject condition. Paired t-test with Bonferroni correction showed that no discriminative differences were presented between transfer learning approaches and the baseline approach (LR vs. LRTL: p = 0.0488; LR vs. wLRTL: p = 0.1207; LRTL vs. wLRTL: p = 0.1744). This result suggested that the non-significance was consistent with those under the intra-subject condition.
Additionally, low-precision sessions were extracted for further analysis (Figure 3). In this case, the statistical analysis indicated that the accuracies of transfer learning approaches were significantly higher than that of the baseline approach (LR vs. LRTL: p = 0.0001; LR vs. wLRTL: p = 0.0001; LRTL vs. wLRTL: p = 0.0210). It was revealed that transfer calibration improved the BCI performance under the inter-subject condition.
DISCUSSION
This study proposed a novel transfer calibration scheme to improve low-precision performance for BCI rehabilitation. This scheme of transfer learning could be used for new subjects without the training dataset, as well as replacing the poor training model-whose accuracy was close to the chance level. Cerebral activities were also observed to clarify the benefit of transfer calibration for feature selection.
Improvement of Low-Precision Performance for BCI Rehabilitation
As we know, the critical issue of BCI rehabilitation revolves around how to promote the biofeedback effect for active intervention (Ko et al., 2019). It was positive for rehabilitation outcomes, enhancing cortical activity for neural recovery, and increasing confidences of voluntary training (Zhang et al., 2020). Hence, the biofeedback of low-precision trials was negative for patients. Furthermore, most of the sessions used by the baseline classifier achieved the effective performance (> 60%) for each patient, except for S6. It was indicated that very few non-effective results of experimental sessions were unreliable for evaluating this neural treatment. Improved performance of transfer calibration could eliminate the confusion caused by the precision fluctuation.
Moreover, we presented the comparison of transfer learning algorithms between inter-subject (IERS) and intra-subject (IRAS) conditions (Figure 4). Statistical analysis was used to compare the classification accuracies among transfer conditions (LRTL: IERS vs. IRAS: p = 0.3138; wLRTL: IERS vs. IRAS: p = 0.3501;). This result suggested that both of them were reliable for improving low-precision performance of SMR-BCI tasks. However, the advantage of inter-subject transfer calibration was employed for new subjects without training.
The fluctuation of classification for LR resulted from the differences of brain activity between consecutive sessions. It was deduced that changes of cerebral activities were caused by neural self-rehabilitation for patients. As a result, BCI-based intervention was effective for stroke rehabilitation to some extent. Nevertheless, the methodology needs to be further clarified for high-efficiency treatment.
The efficiency of transfer learning has been verified by motor imagery -based BCI tasks for healthy people. However, cerebral impairment of stroke patients would influence the training effect due to the weak neural activities. Therefore, our classification result was lower than those of the above state-of-art BCI systems (Azab et al., 2019(Azab et al., , 2020He and Wu, 2019). Nevertheless, the improvement of low-precision performance was conductive to treatment for the impatient. Compared to the feedback of the random level, the subject was subjectively motivated by positive feedback of right detection. It is meaningful for long-term continuous rehabilitation.
Transfer Calibration Scheme for SMR-Based BCI Intervention
In our study, an available scheme of transfer calibration was proposed for model selection in the online task of SMR-BCI rehabilitation (Figure 5). We summarized several rules as stated below: (1) Instant self-training was necessary for new subjects. The classification model based on current dataset was reliable for BCI tasks. (2) If the patient was frustrated by tedious training, intra-or inter-subject transfer calibration could be used to reduce the calibration time.
(3) Furthermore, we could train another model when sufficient trials were finished in the task. If the precision of current model was superior to prior transfer model, the alternative could be automatically performed by our control system. (4) If the model based on the current training dataset FIGURE 5 | The transfer calibration scheme of classifier model selection for SMR-BCI rehabilitation. For a new subject, intra-or inter-subject transfer calibration could be used for reducing the calibration time. Meanwhile, we could train another model when sufficient trials were finished in the task. If the precision of current using model was superior to that of the spare model, the alternative could be automatically performed by control system. performed poorly on the experiment, intra-or inter-subject transfer calibration was worth trying in order to replace the under-performing model. (5) For transfer calibration, the volume of EEG data was a crucial factor for model selection between intra-subject and inter-subject conditions. Sufficient training data was an essential precondition for transfer calibration. Specifically, the only option for new subjects without prior experience was inter-subject transfer calibration.
This scheme of transfer calibration was feasible for improving the poor performance of SMR-BCI recognition. And it also reduced the time consumption of model calibration. It was inferred that this scheme was suitable for other transfer learning algorithms.
Limitation of Current Work
In this study, some issues should be noted and considered in our future work. First, the scheme of transfer calibration needed to be verified for a large amount of stroke patients. It will be addressed in future studies. Second, these patients performed these experiments for 3 months. BCI training over longer time periods should be observed to evaluate performances of the patients in different stages of post-stroke time. Moreover, the number of electrodes was supposed to be reduced by data analysis. It could reduce the time consumption of BCI rehabilitation. Thus, future studies should be conducted to solve these problems to improve the performance of online SMR-BCI rehabilitation.
CONCLUSIONS
This paper proposed an effective scheme of transfer calibration for SMR-BCI rehabilitation. The inter-and intra-subject transfer learning approaches could improve the low-precision classification model for BCI feedback. The results imply that this systematical scheme is positive in increasing confidence of voluntary training for stroke patients. It also reduced the time consumption of model calibration.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by Ethics Committee of Huashan Hospital. The patients/participants provided their written informed consent to participate in this study.
AUTHOR CONTRIBUTIONS
LC and HW designed the methodology of data process and performed the data analysis. LC and SC organized the data and wrote the manuscript. LC, ZX, CF, and JJ reviewed and edited the manuscript. All authors read and approved the submitted manuscript. | 4,487 | 2021-01-28T00:00:00.000 | [
"Medicine",
"Computer Science"
] |
Iterated nonexpansive mappings in Hilbert spaces
In [T. Dominguez Benavides and E. Llorens-Fuster, Iterated nonexpansive mappings, J. Fixed Point Theory Appl. 20 (2018), no. 3, Paper No. 104, 18 pp.], the authors raised the question about the existence of a fixed point free continuous INEA mapping T defined on a closed convex and bounded subset (or on a weakly compact convex subset) of a Banach space with normal structure. Our main goal is to give the affirmative answer to this problem in the very special case of a Hilbert space.
Introduction and preliminaries
Let C be a weakly compact convex subset of a Banach space. In [1], the following concept of iterated nonexpansive mappings (INE in short) was stated: for all x ∈ C.
It is not surprising that an INE mapping does not have to have fixed points even if it is defined on a subset of a finite-dimensional Hilbert space (see, for instance, [2, Example 1.1]). Thus, it seems natural to raise the question of whether the same mapping must have a fixed point provided it is continuous. Clearly, according to the Klee result (see [3]), in this case we are considering a noncompact domain C, so the space must have infinite dimension (but still being a Hilbert space or a Banach one with normal structure). A Banach space is said to have normal structure if each convex subset C which contains more than one point has a point x ∈ C which is not a diametral one of C, i.e., the following condition holds: sup{ x − y : y ∈ C} < diam C (see, for instance, [4]).
The negative answer to this question was given in [2,5], where the authors presented an example of a class of fixed point free mappings which are INE and continuous on any closed convex bounded but noncompact subset of a Banach space. The same is true when the set is convex and weakly compact (and noncompact with respect to the norm topology). The common denominator of these mappings was the fact that all of them satisfy therefore, they were not asymptotically regular. Let us remind that the selfmapping T : C → C is asymptotically regular if for each x ∈ C the sequence T n x − T n+1 x tends to 0. This condition can be generalized to the case of mappings which have the so-called almost fixed point sequence. A sequence (x n ) in C is called an almost fixed point sequence (a.f.p.s. for short) for the mapping T on C whenever x n −T (x n ) → 0 . It is well known that if the selfmapping T : C → C is nonexpansive then T has an a.f.p.s. in C. Combining this fact with the normal structure of a space leads to the existence of fixed points (see, for instance, [6, Theorem 4.1] and [7, Theorem 2.7]). Iterated nonexpansive mappings which have a.f.p.s. are called INEA for short. Since the assumption of the existence of a.f.p.s. seems to play a crucial role, one may ask whether there is any fixed point free continuous INEA self-mapping of a closed convex bounded (or weakly-compact convex) subset C of a Banach space into C. Here we suppose additionally that the Banach space is a Hilbert one or Banach with normal structure.
As it was mentioned before, our main goal is to give an example of a continuous and INEA mapping T defined on a closed convex bounded subset (more precisely, on a closed unit ball) of the Hilbert space into itself for which the set of fixed points is empty. To do it, let us take the Hilbert space l 2 and let B be its closed unit ball. Further, we will apply two kinds of geometry. The first one is Cartesian geometry based on the standard base of l 2 denoted by {e n : n ∈ N}. Then let ·, · mean the inner product in l 2 . Moreover, we denote the unit sphere by S. This set will be very often considered with spherical geometry based on the spherical metric ρ, i.e.,
ρ(A, B) = arccos A, B
for a pair of two elements A, B ∈ S. By the angle between two curves c andc (c(0) =c(0)) on the sphere with respect to spherical geometry we mean the Alexandrov angle, defined by lim s,t→0 + ∠ c(0) (c(s),c(t)).
This limit always exists (see, for instance, [8, p. 16]). More details about spherical geometry can be found in [8,9]). Further, we will use the same denotation conv S (A) for any nonempty set A ⊂ S. Now, still with respect to spherical geometry on conv S ({e 1 , e 2 , e 3 }), let us notice that a = 3 −1/2 (e 1 + e 2 + e 3 ) ∈ conv S ({e 1 , e 2 , e 3 }). Moreover, if we join points a and t 1 and points t 1 and e i , i = 1, 2, with the geodesic segments, then the angle between the segments is equal to π/2. Notice that the same is true for t 2 and e i , i = 2, 3; hence, we can join points t 1 and t 2 with the curve
Example
where S(a, r) is a sphere in l 2 with the radius with respect to the norm. The angle between the curve and the segments [a, t 1 ] or [a, t 2 ] is also equal to π/2, so the curve is tangent to [e 1 , e 2 ] and [e 2 , e 3 ] at the points t 1 and t 2 , respectively ( Fig. 1). Now, we repeat our construction on each three-dimensional set conv S ({e n , e n+1 , e n+2 }) and we obtain a smooth curve of infinite length joining all points t n , n = 1, 2, . . .. Let us notice that points of the curve between t n and t n+1 can be treated as points of the circle centered at a pointã = √ 2 3 √ 3 (e n + e n+1 + e n+2 ) with a radius of r = 11 − 4 √ 3 9 (now with respect to Cartesian geometry). Then, the angle between radiuses [ã, t n ] and [ã, t n+1 ] is independent of n and smaller than π. Let us denote this angle by α and for all t ∈ [0, ∞) we define ϕ(t) as a point of the curve. If t = n α + τ then ϕ(t) is located between t n and t n+1 in such a way that ∠ã(t n , ϕ(t)) = τ .
Step 2-definition of the map on the curve Figure 2. The construction of the map Now, we would like to define the map T : ϕ → ϕ. Let us divide the curve between points t 1 and t 2 into 128 equal parts and let α 0 = α/128. Then, for points of the form ϕ(t), t ∈ [0, α − α 0 ], we take T (ϕ(t)) = ϕ(t + α 0 ). So far, the map T is an isometry.
To extend the map T on the whole curve, we need to make some calculations because our map must be INEA. To do it, first, let us choose α 1 in such a way that Next we will show that so far Simultaneously, where s ∈ [0, 1). To prove (1), it is sufficient to notice that On the other hand, Therefore, Now, using the angle α 1 we define Let us take the same point x as above (see Fig. 2) and notice that We would like to show that 2r sin Let us denote ∠ t2 (T (x), T (T (x))) = β. In the sequel, we will show that Let us notice that, on account of the cosine law, we get Simultaneously, So, it is sufficient to prove the following inequality We know that (see (2) and (3)) sin The angle between the arcs t 1 t 2 and t 2 t 3 is equal to π. Since these two arcs are the subsets of two different two-dimensional spaces, the angle between segments [T (x), t 2 ] and [t 2 , T (T (x))] (with respect to Cartesian geometry) is greater than in the case when all points are located on one two-dimensional space. The same situation can be observed in Fig. 3, where Next, if all points are on one two-dimensional space, then the angle between the metric segments is greater than the angle between two metric segments joining points on the same circle and having the same length 2r sin α 0 2 greater than T (x) − t 2 and t 2 − T (T (x)) . See also points a, a , b and b (Fig. 3). Thus, the angle ∠ t2 (T (x), T (T (x))) is not smaller than β . To estimate β , let us consider the triangle of the sides of length equal to 2r sin α 0 2 , 2r sin α 0 2 and 2r sin 2α 0 2 (all vertices are located on a circle with radius r). Therefore, which completes the proof of (4).
To define the map on {ϕ(t) : t ∈ (2α − α 1 , 2α)}, let us choose Since α 1 < α 0 one may repeat considerations from the previous part to show that T is still INEA. We can also define the map on {ϕ(t) : t ∈ (α, 3α − α 2 )} as a movement along the curve. Repeating all steps with α n+1 = α n · α 1 α 0 , n∈ N one may extend the map T on the whole curve γ. Let us notice that T is also continuous.
Step 3-neighborhood of the curve In this step, we consider the neighborhood of the curve. Mainly, let us consider the set We will see that this set is closed. Indeed, let us take any Cauchy sequence (x n ), x n ∈ U . Let x n →x ∈ B. Since t n − t m ≥ 1 >> α 0 for n = m, without loss of generality we may assume that there is a sequence (τ n ) such that τ n ∈ [mα, (m +1)α] and x n − ϕ(τ n ) ≤ α m . Then, there is a convergent subsequence (denoting again by (τ n )) such that τ n →τ . Ifτ ∈ (mα, (m+1)α], then the same holds for almost all τ n , so x − ϕ(τ ) as the limit is also not greater than α m . Ifτ = mα, then andx is also an element of U .
Step 4-definition of the hyperplanes Now, for each point x ∈ ϕ there is a unique hyperplane We will show that two hyperplanes do not intersect inside U as long as they are determined by points which are not located too far from each other. Let us fix a point x = ϕ(t 0 ) and let x = ϕ(t 0 + τ ), where τ ∈ (0, 9α 0 ). Let us also assume that t 0 ∈ [mα, (m + 1)α). Claim: For all possible positions of x, x , T (x) and T (x ), the angle between vectors x T (x) and x T (x ) is not greater than τ .
• Case 1. First, we assume that all points are located on the curve between t m and t m+1 . So, the aforementioned vectors x T (x) and x T (x ) span one two-dimensional space and it is sufficient to consider only points on this space. Clearly, the intersection of H x (or H x ) with this space is a line-see Fig. 4.
where p is the projection of x onto the set of common points of H x and H x . Clearly, p also belongs to the same two-dimensional space.
Since ∠ x (p, T (x)) = ∠ x (p, T (x )) = π/2, we get that the angle between vectors x T (x) and x T (x ) is equal to the angle ∠ã(x, x ), i.e., is equal to τ . • Case 2. Now, we assume that the three points x, x , T (x) are located on the curve between t m and t m+1 and T (x ) is between t m+1 and t m+2see Fig. 5. Without loss of generality we may assume that τ < α m . Letã be the center of the circle containing t m and t m+1 whileb is the center of the circle containing t m+1 and t m+2 . Then, there is a number s ∈ (0, 1) such that ∠ã(x , t m+1 ) = (1 − s)α m and ∠b(t m+1 , T (x )) = s α m+1 .
Let us choose T on the same circle as t m and t m+1 (Fig. 5) for which Since the curve is smooth and T (x ) does not belong to the same twodimensional space as the rest of the points, the following inequalities hold Clearly, the angle between vectors x T (x) and x T is equal to the sum of angles ∠ T (x) (x, x ) and ∠ x (T (x), T ). Moreover, this sum is smaller than τ , because However, from (6) and the fact that T (x ) does not belong on the same space as the rest of points it follows that the angle between vectors x T (x) and x T (x ) is smaller than the angle between vectors x T (x) and x T and so smaller than τ . The proofs for the cases where τ ≥ α m or three points x , T (x) and T (x ) are between t m+1 and t m+2 go with the same patterns. • Case 3. Now we assume that x and x are located between t m and t m+1 while T (x) and T (x ) are between points t m+1 and t m+2 -see Fig. 6 Let us fix T and T on the circle containing x and x in such a way that and whereã andb are defined in the same way as in Case 2.
We want to show that To do it, let us notice that Hence, the inequality (7) follows directly from the sine law.
In a similar way one may see that Moreover, since all four points do not belong to the same two-dimensional space, the angle between vectors x T (x) and x T (x ) is smaller than the sum of ∠ T (x) (x, x ) and ∠ x (T (x), T (x )). This completes the proof for Case 3. The case where two points x and T (x) are between t m and t m+1 is slightly easier. Now, we consider the projection of x onto H x . First, we want to estimate the angle ∠ x (x , T (x)). We may consider three cases as it was done in the previous part when we studied the angle between vectors but here we do not need to make the estimation so precise; therefore, only notice that this angle is smaller than the sum of the angle between the vector x x and the curve at the point x and the angle between the vector x T (x) and the curve at the same point x.
In both cases, the angles are of the largest measure if all points x, x and T (x) are located between points t m and t m+1 . Hence, using denotations from Fig. 7, we get Since ∠ã(x, T (x)) ≤ α m and ∠ã(x, x ) = τ ≤ 9α 0 , it follows finally that Now we find the projection of x onto the hyperplane H x and denote this by p x . Since H x is determined by the vector x T (x) and we obtain two estimations: Let p be the projection of p x onto the set H x ∩ H x . Clearly, this set is closed and convex, so the projection is a single, i.e., is well defined. Since vectors x T (x) and p x x are parallel, we can calculate the measure of the angle ∠ x (p x , T (x )) in the following way: where γ denotes the angle between vectors x T (x) and x T (x ) (Fig. 8).
Next, we will show that almost all points of the set U (more precisely, all points from V ) satisfy the following condition: Let us consider the Cauchy sequence of points (x n ) such that x n ∈ U and all of them satisfy this property. As it was shown the limit point x 0 = lim x n belongs to U and we will prove that x 0 also satisfies (P ).
Since (x n ) is a Cauchy sequence, we may take a subsequence with x n − x m ≤ α 0 . Then there must be | 4,072.6 | 2021-09-16T00:00:00.000 | [
"Mathematics"
] |
The Impact of Phospholipid-Based Liquid Crystals’ Microstructure on Stability and Release Profile of Ascorbyl Palmitate and Skin Performance
The drug delivery potential of liquid crystals (LCs) for ascorbyl palmitate (AP) was assessed, with the emphasis on the AP stability and release profile linked to microstructural rearrangement taking place along the dilution line being investigated by a set of complementary techniques. With high AP degradation observed after 56 days, two stabilization approaches, i.e., the addition of vitamin C or increasing AP concentration, were proposed. As a rule, LC samples with the lowest water content resulted in better AP stability (up to 52% of nondegraded AP in LC1 after 28 days) and faster API release (~18% in 8 h) as compared to the most diluted sample (29% of nondegraded AP in LC8 after 28 days, and up to 12% of AP released in 8 h). In addition, LCs exhibited a skin barrier-strengthening effect with up to 1.2-fold lower transepidermal water loss (TEWL) and 1.9-fold higher skin hydration observed in vitro on the porcine skin model. Although the latter cannot be linked to LCs’ composition or specific microstructure, the obtained insight into LCs’ microstructure contributed greatly to our understanding of AP positioning inside the system and its release profile, also influencing the overall LCs’ performance after dermal application.
Introduction
Long-term exposure of mammalian skin to ultraviolet radiation (UVR) induces oxidative stress by generating reactive oxygen species (ROS), acknowledged as the single major factor responsible for (premature) skin photoaging and various skin disorders and diseases, including skin cancer [1,2].Excessive sun exposure compromises the integrity of various skin layers and induces a proinflammatory state in addition to DNA damage caused either by direct absorption of UVB by DNA skin cells or indirectly by the generation of ROS induced by UVA [3,4].For preventing photoaging and skin cancer, it is, therefore, most important to reduce sun exposure and support the endogenous network of antioxidants offering skin protection against free radical damage [5,6].Several studies have supported the benefit of the daily use of sunscreens to prevent solar skin damage, as reviewed by A. Jesus et al. [7].In addition, dermal application of antioxidants to neutralize free radicals is an effective shielding strategy against the adverse effects of UVR [8,9].
Ascorbyl palmitate (AP) is an amphiphilic derivative of ascorbic acid widely used as an antioxidant-active substance in pharmaceutical and cosmetic formulations to combat skin photoaging, as reviewed previously [10].Antioxidants formulated in classical skin care products have some common problems in their efficacy relating to their physicochemical and biopharmaceutical properties, e.g., low solubility, poor permeability, and instability [11].Therefore, the implementation of novel carrier systems for efficient delivery and protection of antioxidants that can maximize their potential role in prophylaxis and therapy has been extensively investigated [12][13][14][15].While novel drug delivery systems were initially developed to enhance the stability and solubility of incorporated active ingredients, nowadays, a key focus is on controlled drug release, enabling maintenance of therapeutic drug concentrations for a prolonged period of time and increased penetration of actives into the skin to reach cutaneous cells [16].Various dermal delivery systems have previously been studied for providing stability and effectiveness of AP and other antioxidants, i.e., microemulsion and nanoemulsion, solid lipid nanoparticles and nanostructured lipid carriers, and liposomes [17][18][19][20][21].The stability of AP was reported to depend on loading concentration, its location in the microemulsion in addition to the oxygen dissolved in the system, and light exposure [22].Liquid crystals (LCs) with a lamellar structure seem to be among the most promising systems for dermal delivery of antioxidants to reduce the burden of UVR-induced skin disorders.They offer ideal consistency and thermodynamic stability in addition to great solubilization and sufficient stabilization potential while enabling modulation of their release and absorption characteristics [23,24].The pronounced similarity of lamellar LCs with the intercellular lipid matrix of the stratum corneum and their skin-hydrating properties originating from interlamellar water that is less prone to evaporation when applied to the skin present additional benefits compared to other carriers [25,26].The advantages presented that are linked to the wide drug delivery potential of lamellar LCs are fundamentally related to their microstructure.The latter is a result of the specific arrangement of hydrates and solvates of surfactants in the presence of water and/or oil phases, i.e., altering polar and nonpolar layers in the case of lamellar LCs [15,27,28].In addition to lamellar, hexagonal and cubic LC structures can also be formed upon hydration or solvation of surfactants [27,[29][30][31].In keeping with this, lyotropic LCs may show diverse behavior regarding the stability and release of the solubilized drug; therefore, an insight into the structural characteristics of lyotropic LCs (with and without the incorporated drug) is of crucial importance to assess their drug delivery potential [32][33][34].
Clinical relevance for the treatment of aged skin was previously identified for lamellar LCs composed of lecithin and Tween 80 [35].With diverse research fields involving a combination of the two amphiphiles, their intermediate structures and structural pathways remain a subject of research interest [36].In addition, cubic and two lamellar mesophases were identified in the AP/water binary system depending on concentration and temperature [37], which increases the chance for microstructural changes of lamellar LCs upon incorporation of amphiphilic AP.Moreover, LCs are exposed to (moderate) temperature changes and some dilution upon application, which are both possible drives for phase conversions taking place.According to the literature, pH, light, magnetic field, additives, and the type of amphiphilic are also recognized as stimuli leading to the microstructural rearrangement of LCs [38][39][40].Phase transitions can influence the drug diffusion and release profile as well as the overall skin performance of LCs, so the knowledge of microstructural rearrangements driven by temperature, water content, and drug loading is important for the development of dermally applicable drug delivery systems from this point of view [40].In contrast to the simplicity of their preparation, the characterization of the lyotropic LC microstructure is far from trivial and requires a combination of several techniques.
The aim of the present study is to evaluate the lamellar LCs as a dermal delivery system for AP.Physiologically compliant lamellar LCs for dermal delivery of AP composed of isopropyl myristate (IPM)/Tween 80/lecithin/water were previously developed by our group [13] and studied for phase behavior and structural features as a function of temperature and water content in lyotropic LCs, positioned on the same dilution line as the pseudoternary phase diagram [41].As an extension of previous work, the drug delivery potential of developed lamellar LCs for AP was assessed in the present study.Systems located along the dilution line were thus investigated regarding their ability to stabilize AP and drug release characteristics in addition to the pig's ear skin performance.As the amphiphilic moiety AP was expected to contribute to microstructural transitions of LCs along the dilution line, structural alterations possibly taking place due to its incorporation were investigated by polarization microscopy, small-angle X-ray scattering (SAXS), differential dynamic calorimetry (DSC), and rheology analysis.Obtaining insight into structural transitions is expected to contribute greatly to our understanding of drug positioning inside the system and its release profile, influencing overall skin performance.
Results and Discussion
While dermal formulations intended for active skin care and/or therapy usually comprise active ingredients incorporated in conventional, either semi-solid or liquid formulations, an innovative approach is to formulate an advanced delivery system in order to utilize its unique advantages that would support drug action and together facilitate patient-friendly treatment and improved therapeutic outcome(s).As lyotropic LCs and, in particular, lamellar lyotropic LCs are considered the most suitable system, their drug delivery potential for AP (i.e., stability, release profile, skin performance) was explored in relation to their microstructure.The formation of lamellar phases is governed by suitable self-assembly of hydrated or solvated amphiphiles; therefore, all constituents must be carefully selected.In our case, the formation of lipid bilayers is favored by lecithin's critical packing parameter, i.e., 0.5 to 1, while assembly into spherical micelles is characteristic of Tween 80 due to its considerably lower critical packing parameter being 0.07.While the microstructure of unloaded LCs positioned on the same dilution line was studied in our previous research [41], the incorporation of AP is also expected to influence their arrangements due to its amphiphilic character.To assess phase transitions possibly relevant for the drug delivery potential of AP, a combination of complementary characterization techniques was used.
Ascorbyl Palmitate Stability
Although AP is widely used in topical formulations as a more stable oil-soluble derivative of vitamin C, it was reported that the molecule is still susceptible to hydrolysis taking place in finished products as solutions and emulsions, even when employed in suitable gel-like emulsions with high viscoelastic properties that may improve its chemical stability [42].Lamellar LCs have previously been suggested as suitable carriers for dermal delivery of AP, also from a stability point of view [23,43].
To accurately assess the stability of AP in tested formulations, we first optimized and validated the high-performance liquid chromatography (HPLC) method, initially developed in our previous study [43].Prior to injection, tested AP-loaded LCs were diluted with methanol to obtain an AP concentration of approximately 40 mg/L.The data on the repeatability of the AUC for AP obtained upon two subsequent injections of the same sample revealed the need for stabilizing the AP in prepared methanol solutions.Its stability was improved by the addition of ascorbic acid at a concentration of 200 mg/L.With 96.2% of nondegraded AP upon 24 h of storage (compared to 65.9% in reference methanol solution), ascorbic acid was confirmed as the most efficient for stabilizing AP in HPLC samples among tested antioxidants and solvents (i.e., ascorbic acid, EDTA, BHT, methanol, and acetonitrile) (data are presented in Supplementary Materials).The HPLC method was then validated to confirm that it is accurate, reproducible, and sensitive within the specified analysis range.The specificity of the developed procedure was confirmed, as no other component of the tested samples or solvents had the same retention time as AP (i.e., approximately 4.8 min).The RSD of the AUC for tested samples prepared in five parallels was below 1%, thereby confirming the repeatability of the method, whereas its accuracy tested in three parallels was confirmed to be 100.3%, with a low RSD.The standard curve was linear with a correlation coefficient (r 2 ) of 0.9996, over the range of 5-500 mg/L for AP.The HPLC method validation data are presented in Supplementary Materials.
The stability of AP in prepared LC formulations was tested during 8 weeks of storage under controlled conditions.As evident from Table 1, AP stability is affected by water content in LCs.AP is more stable in samples with the lowest water ratio (LC1-AP-LC5-AP) than in more diluted samples (LC6-AP-LC8-AP).The AP stability decreased with increasing water content over the 28 days of the study.After 4 weeks, the amount of nondegraded AP in the sample LC1-AP (with the lowest water content) was 52%, while in the most diluted LC8-AP with the highest water content, only 29% of AP was detected.A possible explanation is a higher amount of interlamellar water, allowing the dissolution of more oxygen, which leads to the degradation of AP positioned at the interface between the polar hydrophilic heads of lecithin and Tween 80 and the interlamellar space.The stability of AP is further affected by the viscosity of the tested formulations.Although LC1-AP and LC2-AP contain a higher amount of water as self-microemulsifying drug delivery systems (SMEDDSs, i.e., the anhydrous system used for comparison), AP was more stable in LCs due to their higher viscosity (approximately 47 (LC1-AP) and 19 (LC2-AP) Pa*s compared to 8 Pa*s (SMEDDS) at 25 • C, respectively), limiting the diffusion of oxygen and thus AP oxidative exposure and degradation [43].Likewise, AP was less stable in water-in-oil microemulsions (W/O MEs), being less viscous as compared to LCs.The trend was similar after 56 days of storage.The AP degradation during the first 28 days followed the first-order kinetic (the Pearson's coefficients value was above 0.98 with the exception of LC1-AP (0.959) and LC2-AP (0.974), and the degradation rate constant increased with higher water content (Figure 1).Although the stability of AP in LCs is higher, as reported by Špiclin et al. [22], who determined 19% of remaining AP in oil-in-water microemulsions and ~13% in water-in-oil microemulsions, both loaded with 1% AP and stored at 22 ± 1 • C for 28 days, it is inferior to the study of Üner et al. [21].They reported that 48% (nanostructured lipid carriers), 59% (solid lipid nanoparticles), and 50% (nanoemulsion) of nondegraded AP were detected in samples loaded with 1% AP after 3 months of storage at 40 • C. As for our study, below 10% of AP was nondegraded after 56 days, yet again with the exception of LC1-AP and LC2-AP, with 14.8% and 10.6% of AP nondegraded, respectively.Therefore, aiming to improve the AP stability in LCs, they were co-loaded with 1% AP and 1% vitamin C. The results of the preliminary study indicate that for LC1-AP, the percentage of nondegraded AP increased from 13 to 36% after 56 days of storage as a result of the stabilizing effect of vitamin C. As LC1 with a lower water content shows a high solubilization capacity for AP, it could be loaded in higher concentrations, which was identified as beneficial as well (after 56 days of storage of LC1-AP loaded with 5% AP, the amount of nondegraded AP was ~33%).Both approaches tested were proven promising to further address AP (in)stability in LCs' as well as in other (phospho)lipid-based formulations.
In Vitro Release Profile of Ascorbyl Palmitate
Due to specific microstructure and specific rheological characteristics, lamellar LCs are recognized as better alternatives to conventional emulsion systems, not only in terms of stability but also in terms of controlled release and moisturizing ability.Alternation of LC microstructure can occur due to dilution with physiological fluids, i.e., with water on the skin surface in cases of dermal application, which consequently implies different release rates.Information on the diffusion of an active ingredient from the vehicle can be provided by in vitro release studies through the artificial hydrophobic membrane, which depends on the physical-chemical properties of components, the internal structure of the vehicle, and the interaction between the drug and the vehicle [44][45][46].To assess the drug release profile, testing conditions were optimized with regard to the pore size of the acetate cellulose membrane (i.e., 0.2 µm vs. 0.45 µm) and composition of the release medium (i.e., methanol/ultrapure water ratio of 85/15, 70/30, and 50/50 with ascorbic acid added as a stabilizer in 200 mg/L concentration).The release profiles of AP from LCs 1-8 through the artificial membrane with higher pore size into the medium with the highest methanol content are shown in Figure 2. AP release was characterized by two parameters: the amount released after 8 h and the rate of drug release.The highest amount of AP was released from LC1-AP and LC2-AP (~18% after 8 h).As they differ in microstructure, as confirmed by structural characterization, this indicates the importance of water content between layers.Namely, water content determines the state of interlamellar water as a result of diverse interactions with amphiphilic molecules, i.e., lecithin and AP, in the case of AP-loaded LC.The similarity of both profiles was also proposed by the values of dissolution profile difference factor f 1 (i.e., 8) and similarity factor f 2 (i.e., 89).For two dissolution profiles to be considered similar, f 1 should be between 0 and 15, whereas f 2 should be between 50 and 100 [47].
The intermediate amount of AP was released from samples LC4-AP, LC7-AP, and LC3-AP.For samples LC3-AP and LC4-AP, the most distinct Maltese crosses typical of lamellar mesophases were observed by polarized light microscopy in addition to LC5-AP.So, parallel movements of layers that ease the AP diffusion most likely ease the AP release from samples LC3-AP and LC4-AP.
The lowest amount of AP was released from most diluted samples, LC5-AP, LC6-AP, and LC8-AP, with the exception of LC7-AP.This effect can be attributable to the pronounced swelling of lecithin, resulting in increased viscosity and increased interlayer spacing due to higher water content (also indicated by a reduction in Maltese crosses), altogether hindering the AP diffusing from the system.AP release profiles for LC5-AP and LC6-AP are also most similar to LC8-AP, having the lowest values of difference factor f 1 (6 and 5, respectively) and highest values of similarity factor f 2 (98 for both pairs) among all profiles.
The AP release kinetics were analyzed using zero-and first-order kinetics as well as the Korsmeyer-Peppas, Higuchi, and Hixon-Crowell models (Table 2).The calculated Pearson's coefficients (in the range of 0.9640-0.9939)indicate the best fit for the Higuchi model, suggesting AP release by diffusion from all samples, with the exception of LC3-AP and LC7-AP.This agrees with Martiel et al. reporting caffeine release profiles from cubic and lamellar LCs fitting to the Higuchi model [34].LC3-AP and, in particular, LC7-AP show a slightly better fit with the Korsmeyer-Peppas model and first-order kinetic, respectively.In the case of LC3-AP, the value of diffusion exponent n was greater than 0.89, suggesting the supercase II transport release mechanism, whereas a concentration-dependent release mechanism was proposed for LC7-AP.This is in line with the AP release profiles for LC3-AP and LC7-AP that stood out from other samples positioned on the same dilution line.
In Vitro Skin Performance
The excellent skin performance of the lamellar LCs is largely related to the similarity of their microstructure to the intercellular lipid matrix of the stratum corneum [48].The biological acceptability of the lamellar LCs was confirmed on isolated keratinocytes [35], while in the present study, the performance of LCs was evaluated by measuring their influence on barrier function and hydration level of pig's ear skin in vitro.
Application of all LCs resulted in lower transepidermal water loss (TEWL) values measured 30 min and 90 min after they were removed from the skin (Table 3).More precisely, after 30 min between 1.2-fold (LC1) and 1.04-fold (LC7), lower TEWL values were observed as compared to the basal measurements, though not statistically significant.A similar trend was also observed after 90 min, where LC1 and LC5 performed best.This allows us to confirm the barrier-strengthening effect of LCs; nevertheless, it cannot be linked to their composition (e.g., water content) or specific microstructure, as the differences among systems were not significant.In addition to intact barrier function, proper hydration of the epidermis is important to support epidermal homeostasis and maintain skin health [49].In agreement with decreased TEWL, improved skin hydration was determined for all LCs at both measurement time points.After 30 min, the best skin hydration values compared to basal measurements were observed for LC1 (1.9-fold increase) and LC2 (1.6-fold increase (p < 0.05), whereas the lowest effect was observed for LCs with higher water content (LC5-LC8).After 90 min, the skin-hydrating effect of LCs is less pronounced; nevertheless, the observed trend is still visible.The prolonged moisturizing ability of LCs, especially for LC1-LC4, among which a significant, 1.7-fold (p < 0.05) improvement was observed for LC3, can be explained by their internal structure, with bulk water present in the interlamellar space together with loosely or intermediately bound water of the second hydration layer, as confirmed by DSC analysis, which results in prolonged release.This is in line with the lower water loss rate of LC emulsion observed by Bing et al., who reported improved moisturizing properties in addition to the slow release and promoted penetration effect of LC emulsion as compared to conventional emulsion [50].In addition, lipophilic components of LCs present an emollient effect and decrease TEWL, thereby supporting water retention within the stratum corneum.In this regard, further clinical assessment for a final comprehensive appraisal of tested LCs would be applicable, especially as recently beneficial short-and long-term effects of hempseed or flaxseed oil-based lamellar LCs on the skin barrier function of healthy adult subjects were reported [26].Data are given in g/hm 2 for TEWL and in Corneometer ® CM 825 arbitrary units (a.u.) for skin capacitance, i.e., skin hydration (n = 4; data are shown as mean ± standard deviation (SD)).# Average change from baseline in absolute (g/hm 2 for TEWL and a.u.for skin hydration) and relative (%) values.p-value *-change from baseline after 30 min.p-value **-change from baseline after 90 min.
Structural Characterisation of LCs
The development and detailed structural evaluation of lecithin-based lamellar liquid crystals positioned on the same dilution line were reported in our previous research [41].This study involves the evaluation of stability, release profile, and skin performance to conceptually upgrade our earlier work aiming to develop an advanced dermal formulation to combat skin photoaging.In this regard, structural characterization of AP-loaded LCs was performed in order to support and correlate the obtained results with the microstructure of the samples investigated.As the microstructure of lyotropic LCs is temperaturedependent, structural characterization was performed at specific targeted temperatures to support the stability and skin performance studies.More precisely, samples were tested at 32 • C and 37 • C, mimicking dermal delivery in addition to ambient conditions representing the storage temperature.
Polarized Light Microscopy Investigations
With AP incorporated in lyotropic LCs as an active ingredient with amphiphilic character, especially when considering its self-assembly property [37], an alternation of microstructure could take place.Visualization and preliminary microstructure identification at 25 • C of AP-loaded LC1-LC8 were performed using cross-polarized light microscopy.Clearly seen Maltese crosses confirmed the lamellar microstructure for AP-loaded LC samples (Figure 3) apart from LC1-AP.Even though not pronounced, fan-shaped structures imply hexagonal mesophases for LC1-AP that are most likely locally distributed.Considering that the following dermal application formulation is being subjected to physiological dilution with water passing through the skin as TEWL, it could be postulated that the LC1-AP system would likewise possess lamellar structure as observed for all other AP-loaded LCs.
SAXS Analysis
The SAXS measurements were performed for AP-loaded LCs at three predetermined temperatures, with corresponding SAXS spectra shown in Figure 4. Ordered lamellar microstructure is reflected in scattering vector q in the ratio q 1 :q 2 = 1:2 observed for all AP-loaded LCs apart from the samples with either the lowest or the highest water content, i.e., LC1-AP or LC8-AP, respectively.As for the samples that exhibited lamellar structure, a strong intensity of SAXS scattering peaks was observed.As their intensity or width remained practically unchanged at temperatures relevant for in vitro release testing, it is reasonable to conclude that the AP release takes place in the ordered bilayer structure.A small and wider peak with low intensity was randomly detected before the first scattering peak for LC2-AP and LC3-AP (and only at 37 • C for LC4-AP), most likely arising from the co-existence of micellar aggregates.The different microstructure of LC1-AP being anticipated based on prelaminar visualization was indeed confirmed by five distinctive scattering peaks with ratios inconsistent for either hexagonal or cubic arrangements and remaining consistent for all temperatures tested.On the other hand, no distinct peaks were observed for LC8-AP, indicating a lack of lamellar arrangement; nevertheless, observed Maltese crosses by polarized light microscopy could indicate local areas of formed lamellae.Additionally, the interlayer spacing d was calculated according to the results presented in Table 4.If the spacing for LC1-AP structures remained practically unaltered at temperatures tested, a continuously increasing interlayer spacing ranging from 7.74 nm for LC-AP2 to 11.24 nm for LC7-AP at 25 • C was observed along the dilution line, coinciding with the "lamellar" swelling law [38,51].The same trend was followed by increasing temperatures, yet repeated distances were slightly increased.Lack of LC structural organization in the case of LC8-AP (therefore, no repeated distance could be determined) implies that systems undergo phase transition due to dilution, with 50% (m/m) water being the maximum amount that still holds lamellar arrangement.This is in line with DSC results with higher amounts of bulk water detected for more diluted systems and, in particular, with elevated AP release from LC7-AP with presumably more loose lamellas due to excess free water between the polar heads of lecithin and AP molecules.As reported previously, different types of water have been detected in surfactant-based microstructures like lyotropic LCs, as their behavior is sensitive to the presence of adjacent interfaces of varying types [52,53].Different physicochemical characteristics of strongly bound (nonfreezable) water of the first hydration layer strongly interacting with the polar heads of surfactants, loosely or intermediately bound (freezable) water of the second hydration layer, and the free water present in the bulk layer influence not only the thermal behavior of LCs (e.g., intermediately bound water crystallizes at a lower temperature as bulk water and evaporates more slowly) but also their skin performance (e.g., the duration of moisturization effect) [50].The amount of bulk water in LCs is expected to increase along the dilution line by increasing the distance between the lamellae.The incorporation of drugs with amphiphilic characters like AP may show the opposite effect, though.The state of water in prepared systems was likely linked to phase changes in lyotropic LCs, and their performance was thus determined by DSC analysis.
The DSC scans of samples were performed with cooling/heating rates of 2, 5, and 10 K/min, and the DSC thermograms obtained by the intermediate temperature increasing/decreasing rate are presented in Figure 5.This scanning rate was identified as optimal with regard to sensitivity and selectivity and also enabled the best comparison with data obtained for unloaded LC counterparts presented in our previous study [41].In the cooling curves of all LC samples (Figure 5a), the solidification of IPM is clearly visible, as indicated by the "triple exothermic peak" between −7 • C and −17 • C, most probably indicating the solidification of different polymorphs [53].The other exothermic event visible in cooling curves of samples with water content above 25-30% (m/m) is its freezing around −24 • C (in LC7-AP and LC8-AP with 50 and 55% (m/m) water) and −43 • C (in LC3-AP with 30% (m/m) water).In LC2-AP with 25% (m/m) water, its crystallization was only visible at a cooling rate of 2 K/min (−47 • C), while in LC1-AP, it could not be detected.The cooling curves of AP-loaded samples thus confirmed the presence of nonfreezable interlamellar water (the first hydration layer) and two types of freezable water, presenting the second hydration layer that keeps the degree of freedom necessary to form ice-like hydrogen bonds and bulk water.With increasing amounts of water, the crystallization peak of immediate bund water increases in area and shifts towards higher temperatures (visible in samples LC2-AP to LC5-AP) until the freezing peak of free water can be seen in samples LC6-AP to LC8-AP.Presumption on the co-existence of bound water with bulk water in samples LC6-AP to LC8-AP was to some extent confirmed by measurement of pure double distilled water, where a similar peak at approximately −21 • C was also observed (at cooling rate 5 K/min), indicating freezing of supercooled water.Regarding the state of water present in AP-loaded lyotropic LCs, the DSC heating curves are, for the most part, in agreement with conclusions made based on DSC cooling curves.In agreement with Kodama and Aoki [54], the ice obtained from freezable interlamellar water begins to melt at temperatures as low as around −40 • C (for samples with the lowest water content) or −30 • C (for samples containing above 30% (m/m) water) and continues to melt up to above 0 • C, whereas the ice derived from bulk water melts in a narrow temperature range around 0 • C (visible in LC5-AP to LC8-AP).
It was expected for AP to distribute into bilayers due to its amphiphilic nature, resulting in a decreased amount of freezable water in LCs in the presence of AP as compared to their unloaded counterparts [42].Namely, AP molecules are expected to present additional polar headgroups interacting with water molecules in an intrabilayer of lamellar LCs or rod-like aggregates (i.e., micelles) in the separation zone of the hexagonal phase (as seen for sample LC1-AP).Based on the DSC cooling curves, the proposed hypothesis could be partially confirmed for sample LC2-AP, for which the water freezing peak can only be detected by the lowest cooling rate of −2 K/min (not in all parallels, though), and samples LC3-AP and LC4-AP, for which the water freezing peak (presented as T onset ) was shifted towards lower temperatures upon incorporating AP (from approximately −32 • C to −38 • C for LC3-AP and from approximately −29 • C to −37 • C for LC4-AP), while this was not the case in samples with water content above 40% (m/m).
Rheological Behavior
While rotational measurements present an important quality control tool for all dermally applicable pharmaceutical systems by giving information on their flow properties and features under applied stress [55], more specific information on the network structure of the liquid crystal phases can be obtained from dynamic strain sweep measurements and oscillatory shear frequency sweep measurements.All samples tested had relatively high consistency and exhibited a strong decrease in viscosity with the growing shear rate at all temperatures tested, confirming shear-thinning behavior typical of pseudoplastic systems (viscosity flow curves for LC1-AP to LC8-AP are a part of the Supplementary File).In lamellar LCs, such behavior originates from their smectic structure, where parallel layers slide over each other with relative ease during shear [56][57][58].
At the lowest measured shear stress (2 s −1 ), the viscosity of LCs decreases with increasing temperature (Table 5).While the gradual increase in water content along the dilution line was linked with increased viscosity at 25 • C for unloaded samples (with the exception of the sample with the highest water content) [41], the viscosity dependence of LCs on water content is not so straightforward in the presence of AP.The sample with the lowest water content (LC1-AP) that was identified as hexagonal mesophase shows considerably higher viscosity than subsequent samples (from LC2-AP to LC4-AP at 25 • C or up to LC6-AP at higher temperatures) and is also the only AP-loaded sample that is more viscous than its unloaded counterpart [41].Some microstructural phenomena can additionally be observed in LC4-AP (at all temperatures tested), LC6-AP (only at 25 • C), and LC8-AP (at all temperatures tested), all having a lower viscosity than the previous sample on the dilution line.Similar phenomena observed in the unloaded counterpart of LC8-AP were related to a phase transition into the micellar phase taking place due to an increase in water content as the main drive, associated with pronounced dilution leading to swelling of the structures due to the incorporation of water between lamellas [41].The most pronounced impact of AP incorporation on the rheological behavior of LCs was observed for LC4-AP, which is considerably less viscous than LC3-AP.Opposite results were observed in unloaded counterparts, with LC4 being more than twice as viscous as LC3 [41].In general, the viscosity of samples LC4-AP and LC6-AP was also most affected by the temperature increase.In agreement with dilute lamellar phases typically occurring in relatively narrow ranges of temperature, temperature increases the long-range undulating repulsion between layers, and thus, the free energy of the system can be increased by the mechanism of decreased bending modulus [39].In agreement with the well-known fact that the lamellar liquid crystalline phase shows lower values of the rheological functions than other lyotropic phases detected in nonionic surfactant/water phase diagrams [59], systems LC2-AP to LC6-AP may be classified as lamellar LCs.The linear viscoelastic properties of LCs were determined by means of frequency sweeps inside the linear viscoelastic region to obtain more information on the network structure of LCs.As observed in Figure 6, almost constant values of the storage modulus G ′ , showing only a slight increase with increased frequency, and a clear minimum in the loss modulus G ′′ can be detected for systems LC3-AP to LC7-AP (partially also in LC2-AP and LC8-AP with only a slightly pronounced minimum in G ′′ ).At the same time, the complex viscosity η* drops linearly as a function of frequency for all systems tested.The presented rheological behavior is typical of lamellar phases and other gel-like structure systems that are also characterized by higher values of storage modulus in a wide range of frequencies [60][61][62].Contrary to other LCs tested, different rheological behavior was observed for sample LC-AP1, which had the lowest water content among all samples.Both dynamic moduli (G ′ and G ′′ ) enhanced with increasing frequency, with storage modulus G ′ having a greater slope than loss modulus G ′′ (for LC1-AP).The observed rheological pattern is representative of the hexagonal LC phases [63], usually showing traits of the general Maxwell model, in which the values of the dynamic moduli increase with increasing frequency with different slopes [60].The obtained data correspond well with the SAXS analysis of LC-AP1.
According to the results presented in Figure 7, the loss modulus G ′′ is practically independent of water content, while the storage modulus G ′ increases with a higher water ratio.The ratio between G ′ and G ′′ (tan δ) below 1 is suggestive of an elastic gel structure [63].As presented in Table 5, the elasticity of LC systems increases with increasing water-to-surfactant ratios (from LC2-AP to LC8-AP), as seen from the decrease in tan δ at a frequency of 100 Hz.Based on the values of the tan δ samples, LC3-AP to LC4-AP and LC5-AP to LC7-AP show the most comparable characteristics.As expected, tan δ was found to be significantly larger for LC1-AP (0.333) and LC2-AP (0.497) than that for the lamellar phase (between 0.166 for LC3-AP and 0.070 for LC7-AP or 0.044 for LC8-AP), which implies a more viscous gel structure and a completely different rheological pattern, i.e., altogether, a different microstructure.In agreement with the aforementioned data and DSC results, the tan δ values of LCs were not influenced in a straightforward way upon incorporation of AP.While systems with intermediate water content (LC5-AP and LC6-AP) seem least affected, the most pronounced changes were observed in systems with the lowest (LC1-AP and LC2-AP) and highest water content (LC8-AP).
Sample Preparation
Representative samples of AP-loaded LCs, whose composition is presented in Table 6, were further characterized.Samples were prepared by mixing appropriate amounts of IPM, Tween 80, and lecithin to form a homogeneous mixture in which 1% (m/m) of AP was dissolved.Water was added afterwards during continuous stirring to form lyotropic LCs.For LC systems without AP incorporated, the samples were prepared to keep the same mass ratio between the components as presented (Tween 80/lecithin/IPM/water). * Mass ratio of Tween 80 to lecithin is 1/1.
For the AP stability study, two additional samples were tested, i.e., SMEDDS and W/O ME.SMEDDS was obtained by blending the IPM and surfactant mixture (Tween 80/lecithin) at mass ratio 7/3, while in case of W/O ME IPM (25.25%) and surfactant mixture with butanol as cosurfactant (Tween80/lecithin/butanol at mass ratio 1/1/2; 58.90%) were homogenously mixed and then diluted with bidistilled water (14.85%).For both samples, AP (1%) was dissolved in an oil-surfactant mixture.
For skin performance testing, the LC samples were prepared without AP incorporated (LC1-LC8) following the same procedure and keeping the mass ratio between the components (Tween 80/lecithin/IPM/water), as reported in Table 6.
Stability Study
The tested samples were stored for 8 weeks at 40 • C, 75% relative humidity, and protected from light in glass containers.At predetermined time points (0, 1, 7, 14, 28, 47, and 56 days), 100 mg of each sample was diluted to 25 mL with methanol containing ascorbic acid at a 200 mg/L concentration as a stabilizer.The concentration of AP in the tested samples was determined by HPLC analysis.Measurements were performed by the Agilent 1200 series HPLC system.The stationary phase was a 125 × 4 mm column packed with 5 µm Nucleosil C18, and the mobile phase was a mixture of -methanol-acetonitrile-0.02 M phosphate buffer of pH 3.5 (75/10/15).The volume of injection was 20 µL, the flow rate was 1.5 mL/min, and the wavelength of UV detection was 254 nm.All analyses were performed at 25 ± 1 • C.
In Vitro Drug Release Study
AP release through a hydrophilic cellulose acetate membrane was determined with a Franz diffusion cell (n = 4) with a diffusion area of 0.785 cm 2 at 25 • C. A total of 9 mL of receptor medium (methanol and ultrapure water mixed at different ratios) was used, and 400 mg of tested AP-loaded LC1-LC8 was applied on the donor side.A total of 800 µL aliquots of the receptor medium were collected at predetermined time intervals (30 min, 1 h, 2 h, 4 h, 6 h, 7 h, and 8 h) and replaced by fresh medium.If the samples were turbid, they were diluted with receptor medium prior to the HPLC analysis that was applied to determine the AP content of the collected samples.Drug release was expressed by the amount of released AP (%) as a function of time.
Skin Performance Testing
Skin performance testing was evaluated in vitro on pig's ear skin mounted on Franz diffusion cells (n = 4).During the experiments, the temperature was kept at 22 ± 1 • C and the relative humidity between 40 and 60%.The receptor chamber was filled with physiological fluid (0.9% aqueous solution of NaCl), and the pig's ear skin was placed between the donor and receptor compartments on the stratum corneum side.Prior to the application of LCs, the skin was allowed to temperate for 1 h; namely, the temperature of the receptor compartment was kept at 37 • C, establishing a temperature gradient with ambient room temperature, resulting in a skin temperature of 32 • C.Then, approximately 20 mg of LC sample was accurately weighed and transferred on the skin into the donor compartment for 60 min under non-occlusive conditions.Almost 60 min after the application of LCs, they were gently removed from the skin surface, which was whipped with a cotton stick.The skin performance was assessed at ambient room conditions for 30 min and 90 min after LCs were removed from the skin surface.TELW was measured by an open chamber device (Tewameter ® TM 300 from Courage + Khazaka, GmbH, Germany).Prior to measuring the probe, it was preheated to 32 • C by the Probe Heater ® PR 100 (Courage + Khazaka, GmbH, Germany).Individual measurements lasted for 60 s, with one reading collected per second.The average of 10 consecutive readings with the lowest SD represented the TEWL value used for further analysis.TEWL values are presented as absolute values (g/hm 2 ).
Afterward, skin hydration was assessed by the Corneometer ® CM 825 (Courage + Khazaka, GmbH, Germany) at the same time points.The results are given in Corneometer ® CM 825 arbitrary units (a.u.) as a mean value of six subsequent measurements for each Franz diffusion cell.
For TEWL and skin hydration, statistical analysis was carried out using an independent sample Student's t-test at the 0.05 level of probability.
Structural Characterization
The microstructure of AP-loaded LCs was evaluated using a set of techniques typically used for structural characterization of LC systems and previously reported by our group [23,41].
Polarizing Light Microscopy
The structure of the AP-loaded LCs was examined with a microscope with polarization using a Physica MCR 301 rheometer (Anton Paar, Graz, Austria) at 25 • C. The magnification used was 20×.
Small-Angle X-ray Scattering An evacuated Kratky compact camera system (Anton Paar, Graz, Austria) with a block collimating unit attached to a conventional X-ray generator (Bruker AXS, Karlsruhe, Germany) equipped with a sealed X-ray tube (Cu-anode target type) operating at 35 kV and 35 mA and producing Ni-filtered Cu Kα X-rays with a wavelength of 0.154 nm was used for SAXS measurements.The AP-loaded LCs were transferred to a standard quartz capillary placed in a thermally controlled sample holder centered in the X-ray beam, with measurements performed at 25, 32, and 37 • C. The scattering intensities were measured with a linear position-sensitive detector (PSD 50 m, M. Braun, Garching, Germany), detecting the scattering pattern within the entire scattering range simultaneously.For each AP-loaded LC, five SAXS curves with a sampling time of 15,000 s were recorded and subsequently averaged.
The interlayer spacing d was calculated according to the equation: where q 1 is the scattering vector magnitude of the first reflection.
Differential Scanning Calorimetry
The DSC thermograms of AP-loaded LCs were recorded in duplicates or triplicates using a differential scanning calorimeter (DSC1 STARe System, Mettler Toledo, Switzerland).Approximately 10 mg of the sample was weighed precisely into a small aluminum pan.The empty, sealed pan was used as a reference.Samples were cooled from 20 • C to −60 • C (cooling rates: 2, 5, and 10 K/min), kept at −60 • C for 15 min, and then heated back to 20 • C (heating rates: 2, 5, and 10 K/min) under a stream of nitrogen at 50 mL/min.The DSC thermograms of the individual components (Tween 80, lecithin, IPM, and water) plus their binary and ternary mixtures were presented in our previous study [41].
Rheological Measurements
Rheological evaluation of AP-loaded LCs was performed using a Physica MCR 301 rheometer (Anton Paar, Graz, Austria) and the cone-plate measuring system CP50-2 with a conical disc diameter of 49.961 mm and a cone angle of 2.001 • .
Rotational tests were performed at 25.0 ± 0.1 • C, 32.0 ± 0.1 • C, and 37.0 ± 0.1 • C, and the shear rate was increased from 2 to 100 s −1 .The viscosity was calculated according to the following equation: η = τ/γ˙ (2) where τ is the shear stress and γ˙is the shear rate.
Oscillatory tests were performed at a constant temperature of 25.0 ± 0.1 • C to define the storage and loss moduli, which are calculated according to Equations (3) and (4): G′ = (τ / γ) × cos δ (3) where τ is the shear stress, γ is the deformation, and δ is the phase shift angle, together with complex viscosity calculated according to Equation ( 5): where ω is the angular frequency.
In order to determine the linear viscoelastic region, the stress sweep measurements were first performed at a constant frequency of 10.0 s −1 .Afterward, the oscillatory shear measurements were carried out as a function of frequency (0.1-100 s −1 ) at a constant amplitude (10%) chosen within the linear region.
Conclusions
In the present study, phospholipid-based LCs were evaluated as dermal delivery systems for AP.It has been established that loading AP at 1% (m/m) results in its incorporation within preformed structures.While the systems with the lowest and highest water content, i.e., LC1-AP and LC8-AP, exhibited distinctive structural characteristics, lamellar microstructure was observed for LC2-AP to LC7-AP.As confirmed by SAXS, DSC, and rheology analysis, their bilayer features were responsive to increasing water content, which was finally reflected in diverse AP stability and release profiles.In general, samples with the lowest water content (LC1-AP and LC2-AP) exhibited the best AP stability (up to 52% and 10 to 15% of nondegraded AP upon 28 and 56 days of storage, respectively) and faster API release (~18% in 8 h) as compared to the most diluted sample, LC8-AP (29% and 4% of nondegraded AP upon 28 and 56 days of storage, respectively, and up to 12% of AP released in 8 h).Then again, overall high AP instability in LCs, being below 10% after 56 days (with the exception of LC1-AP and LC2-AP), was improved by testing two stabilization approaches, i.e., the addition of vitamin C and increasing AP concentration.
Even though no straightforward relationship between microstructure and LCs' skin performance can be given, LCs' skin barrier-strengthening and hydrating properties are apparent, with up to 1.2-fold lower TEWL and up to 1.9-fold higher skin hydration values as measured on a porcine skin model in vitro.The prolonged moisturizing ability of LCs can, to some extent, be linked to their internal structure, with bulk water present in the interlamellar space together with loosely or intermediately bound water of the second hydration layer, which results in prolonged release.
While the LC platform at this point also comprises novel systems, namely liquid crystalline nanoparticles with great potential for targeting various skin disorders, bulk lyotropic LCs with semi-solid consistency and, when developed, simple and fast production and thermodynamic stability, continue to be of high biomedical relevance as dermal delivery systems.The results obtained within the presented study provide solid ground for the utilization of lamellar LC-based formulations for skin delivery, tailored regarding microstructure in order to attain appropriate stability and release of incorporated actives as well as skin performance.
Figure 2 .
Figure 2. The release profiles of AP from LCs.
Figure 3 .
Figure 3. Polarized light microscopy photomicrographs of representative LC samples; encircled are fan-shaped textures (LC1-AP), while white arrows point to Maltese crosses observed for LC2-AP to LC8-AP.
Table 4 .
Interlayer spacing d (calculated from the first peak top of the small-angle X-ray scattering (SAXS) curves) of the tested LC1-LC8 at given temperatures.Sample d (nm)
Table 1 .
The stability of ascorbyl palmitate (AP; %) incorporated in liquid crystals (LCs), selfmicroemulsifying drug delivery systems (SMEDDSs), and water-in-oil microemulsion (W/O ME) is presented as the mean value ± standard deviation (SD).
Table 2 .
Pearson's coefficients (r 2 ) values for the AP release from LC1-AP to LC8-AP fitted to zero-order and first-order kinetic, and Higuchi, Korsmeyer-Peppas, and Hixon-Crowell models.
Table 3 .
Absolute transepidermal water loss (TEWL; first row for each LC) values and absolute skin capacitance values (second row for each LC) with changes from baseline 30 min and 90 min after a 60 min treatment with LC1-LC8.
Table 6 .
The composition of tested AP-loaded LCs lies on the same dilution line (m/m%). | 10,487.4 | 2024-07-01T00:00:00.000 | [
"Materials Science",
"Medicine",
"Chemistry"
] |
Baby Skyrmions in AdS
We study the baby Skyrme model in a pure AdS background without a mass term. The tail decays and scalings of massless radial solutions are demonstrated to take a similar form to those of the massive flat space model, with the AdS curvature playing a similar role to the flat space pion mass. We also numerically find minimal energy solutions for a range of higher topological charges and find that they form concentric ring-like solutions. Popcorn transitions (named in analogy with studies of toy models of holographic QCD) from an n layer to an n + 1-layer configuration are observed at topological charges 9 and 27 and further popcorn transitions for higher charges are predicted. Finally, a pointparticle approximation for the model is derived and used to successfully predict the ring structures and popcorn transitions for higher charge solitons.
Introduction
The Skyrme model [1] is a non-linear theory of pions in (3 + 1) dimensions, admitting soliton solutions called Skyrmions. It has been derived as a low-energy effective field theory of QCD in the large colour limit, and has also been used in holographic models such as the Sakai-Sugimoto model [2,3] where Yang-Mills Chern-Simons instantons in a (4 + 1)-dimensional bulk spacetime are dual to (extended) Skyrmions on the boundary. A key feature of these models is that the bulk spacetime is AdS-like, possessing a conformal boundary and a finite, negative scalar curvature.
Solitons in pure Anti de-Sitter spacetimes are also of interest. It has been shown that Skyrmions with massless pions in hyperbolic space are closely related to Skyrmions with massive pions in Euclidean space [4,5]. Since constant time slices of AdS spacetimes are hyperbolic we may expect similar results in pure AdS. In addition, monopoles and monopole walls have been studied in AdS [6,7], motivated as a magnetic version of holographic superconductors.
The baby Skyrme model [8] is a (2 + 1) dimensional analogue of the Skyrme model. Baby Skyrmions have recently been used to study low-dimensional models of the Sakai-Sugimoto model in the context of dense QCD [9,10]. In these toy models a series of phasetransitions were observed in which infinite chains of solitons split into multiple layers with increasing density. These were dubbed popcorn transitions, and the extra layers were found to be separated in the holographic direction.
JHEP09(2015)009
Here we investigate the baby Skyrme model in a pure AdS background and study the resulting soliton and multi-soliton solutions. The low dimensionality of the model makes full numerical field computations viable, and we find that the curvature of the spacetime allows us to find soliton solutions even without a pion mass term. Multi-solitons beyond topological charge B = 3 are found to take the form of ring-like structures, with popcornlike phase transitions to multi-layered rings occurring as the topological charge increases. Inspired by methods used to study aloof baby Skyrmions [11] we will derive a point-particle approximation to study these phase transitions for higher topological charges.
AdS spacetime in (+ 1) dimensions
AdS is the maximally symmetric spacetime with Minkowskian signature and constant negative curvature, and in this paper we will be interested in the (2 + 1)-dimensional case. The metric, given in sausage coordinates, can be written Here, L is the AdS radius (related to the cosmological constant Λ via Λ = −1/L 2 ) and r = x 2 + y 2 ∈ [0, 1) is a radial coordinate. These coordinates are useful numerically as they are global coordinates over a finite range. Later we will use that the geodesic distance between two points x x x and y y y in this spacetime is given by . (2. 2) The following global coordinates give another useful way of writing the AdS metric: where ρ ≥ 0. These coordinates are useful since the radial coordinate ρ coincides with the geodesic distance from the origin in this model. The global and sausage coordinates are related by r = tanh ρ 2L . Since constant time slices of AdS are hyperbolic space, it will be useful to define a hyperbolic translation in sausage coordinates. A translation sending the origin to some point a a a is given by It should be noted that this is an isometry of only the constant time slices of the spacetime, rather than the full spacetime, since the time component of the metric is dependent on r.
As a final note, the Ricci scalar curvature of AdS can be calculated as R = −6/L 2 . In the limit L → ∞ this curvature vanishes and we recover flat space.
JHEP09(2015)009 3 The AdS baby Skyrme model
We will be interested in studying soliton solutions of the baby Skyrme model, given by the action The field φ φ φ = (φ 1 , φ 2 , φ 3 ) is a three component unit vector. The first term is that of the O(3)-sigma model, and the second term is the baby Skyrme term with constant coefficient κ 2 . The third term is a potential term containing the pion mass parameter m (named in analogy with the pion mass term in the full Skyrme model), and a constant unit vector n n n.
Greek indices run over spacetime coordinates t, x and y, and later we will use Latin indices to run over the purely spatial coordinates. The symmetry of the action is O(3) for m = 0 and O(2) for m = 0. The associated static energy of this model is which yields the equations of motion For finite energy we then require φ φ φ → n n n as r → 1. Without loss of generality we can choose n n n = (0, 0, 1), and in the massless (m = 0) case this choice of boundary value breaks the O(3) symmetry of the model to O(2). We can identify points on the boundary r = 1 and treat the pion fields as maps φ φ φ : S 2 → S 2 , giving rise to an associated winding number and topological charge which we identify with the baryon number of the configuration. By noting the inequalities we obtain the Bogomolny bound E BS ≥ 4π|B|.
Radial baby Skyrmions in AdS
We begin by discussing some properties of radially symmetric solitons in our model, working in global coordinates (2.3). Due to the principle of symmetric criticality and the symmetries JHEP09(2015)009 of both AdS and the action, we would expect static B = 1 solitons in our model to posses radial symmetry and be centred at the origin ρ = 0. Static, radially symmetric configurations with topological charge B are given by the hedgehog ansatz where f (ρ) is some profile function satisfying f (0) = π, f (∞) = 0. ψ is some constant internal phase which we can set to zero here due to symmetry, although internal phase differences will become important when we consider multi-solitons. Substituting into (3.2) and performing the coordinate transformation yields the static energy We can numerically find the profile functions f (ρ) for different values of B by minimising (4.2) using a modified gradient flow method.
In other baby Skyrme models it has been found that radial solitons can be wellapproximated by flat-space instantons of the O(3)-sigma model. This approximation has enabled an investigation of how the soliton sizes µ scale with the baby Skyrme parameter κ. Unfortunately O(3)-sigma instantons in AdS do not have a convenient closed form expression, due to the presence of the non-constant time component of the metric, preventing us from analytically exploring this relationship. We can nevertheless perform a numerical investigation; since the profile function interpolates between π and 0 we can define the size of the soliton as µ : f (µ) = π/2.
Using this definition it is straightforward to numerically find how µ scales with κ and L for different values of B. In the massless case we find the leading-order dependence is µ ∼ √ κL for small κ/L; looking at (4.2) we see that nonlinear effects will dominate when κ/L is large. Comparing this to the scaling of baby Skyrmions with mass parameter m > 0 in flat space, µ ∼ κ/m, we can see that the curvature of AdS space can be interpreted as adding an effective pion mass.
Finally, we can calculate the leading-order decay of the soliton tails near the boundary of our space. Assuming a radial ansatz, and using the fact that f (ρ) → 0 as ρ → ∞, we can linearise the equations of motion to obtain In the limit ρ → ∞ we can obtain the asymptotic tail decay as independently of κ or B. This relation also holds in the special case m = 0. It is interesting to note that, unlike baby Skyrme models in flat space, the addition of a mass term is not required for the soliton tail to decay exponentially. This is in contrast with baby Skyrmions in flat space which have large-radius asymptotic tail decays
JHEP09(2015)009
In fact, these flat-space tail decays can be obtained from the linearised asymptotic equations of motion by carefully taking the limit L → ∞, as expected.
Multi-solitons in AdS
As we shall see, for topological charges B > 3, radial baby Skyrmions in AdS no longer provide global minima for the energy functional (3.2). Analytic investigation of higher charge solitons is difficult, but we are able to perform full numerical minimisations to seek out local and global energy minima. The numerical results in this section were obtained by performing a modified gradient flow method on the energy (3.2) with parameter values κ = 0.1, L = 1, m = 0 on a grid with 501 × 501 gridpoints in sausage coordinates. Derivatives were calculated used fourth-order finite difference approximations. In addition, the numerical results were verified by applying a fourth-order Runge-Kutta method to the full dynamical equations of motion.
In order to find local energy minima we investigated a range of different initial conditions for our minimisation algorithms. Initial conditions were primarily generated using the product ansatz: if we project a field φ φ φ onto the Riemann sphere by then we can generate a field configuration composed from two fields W 1 , W 2 by writing Positions of solitons can be identified with points where φ φ φ = (0, 0, −1) or equivalently W = 0, and W = ∞ whenever φ φ φ = (0, 0, 1), the boundary value. Our composed field has zeroes at the zeroes of W 1 and W 2 , so we see that the product ansatz gives us a field with the correct topological properties as the superposition of two individual fields. Combining this with the hyperbolic translations (2.4) allows us to generate initial conditions that resemble collections of radial baby Skyrmions placed in different positions on our grid. We also used perturbed radial fields as initial conditions. Figure 1 shows colour contour plots of φ 3 for numerically found global energy minima for topological charges 1 ≤ B ≤ 20, with energies given in table 1. For the pion mass term used in this paper, radial solutions are preferred up to charge B = 3, in contrast to baby Skyrmions in flat space which only have radial energy minima for B ≤ 2.
For 4 ≤ B ≤ 7 the baby Skyrmions form regular polygons, with soliton positions at the vertices. The relative phase differences between neighbouring soliton positions is π or π ± π/B for even or odd charges respectively.
At charge B = 8 the ring structure deforms due to the centralising force induced by the AdS metric. For B ≥ 9 the solutions form multi-layered concentric rings, and the central layers for 9 ≤ B ≤ 16 resemble (normally slightly deformed) radial solutions. These deformations appear to match the symmetries of the outer rings. We denote multi-layered ring structures as {n 1 , n 2 , n 3 , . . . }, where n i denotes the topological charge in the ith ring, counting out from the origin. The transition from a single ring to a multi-ring structure with increasing baryon number is reminiscient of the popcorn transition observed in low-dimensional analogues of the Sakai-Sugimoto [9,10] This is a potentially difficult task to accomplish numerically. As higher charge solutions are investigated the number of local energy minima increases drastically. This requires a very large number of initial conditions to be tested in order to gain confidence that a global energy minimum has indeed been found.
JHEP09(2015)009
In the following sections we will formulate an approximation to our system in which the solitons are modelled by point particles in a gravitational potential with an inter-soliton interaction. We will use this model to predict the baryon numbers at which further popcorn transitions occur, and use the predictions as a guide to finding further global minima in the full model.
AdS baby Skyrmions as point particles
The numerical results from the previous section are reminiscent of the results of circle packings within a circle [12]: finding the minimal area circle within which you can pack B congruent circles. Solutions tend to be arranged as rings of points separated by at least the diameter of the circles (∼ 2µ for our baby Skyrmions). However, using this as an approximation to the AdS baby Skyrme model presents some problems. Firstly, the popcorn transitions occur too early, the first two at B = 7 and B = 19. This is likely due to the malleable and overlapping nature of the baby Skyrmions. This problem still occurs even when the circle packings are formulated in hyperbolic space. However it does suggest that a point-particle approximation could be able to qualitatively predict the form of solutions, if a better representation of the soliton interactions can be found.
JHEP09(2015)009
In order to derive a better point-particle approximation it will be necessary to obtain numerical approximations to the effective gravitational potential and inter-soliton interaction. We will make use of the hyperbolic translations (2.4), and assume that the energies of radial fields translated in this way can approximate the energies of constituent parts of multi-soliton configurations located at different points on our grid. It should be clarified that such fields are not solutions to the equations of motion since the hyperbolic translations are not isometries of AdS.
The gravitational potential
We begin by deriving an approximate gravitational potential from the metric (2.1). The geodesic equations associated with this metric are given by where primes denote differentiation with respect to proper time. In the non-relativistic limit x , y t we can writë where we have implicitly defined the gravitational potential Φ(r). Integrating along with the condition Φ(0) = 0 gives To fit this potential to the AdS baby Skyrme model we are required to multiply it by some constant factor α. We obtain a numerical approximation for the gravitational potential by evaluating the energies of translated B = 1 radial fields and subtracting off the energy of the true B = 1 solution. We then fit α to this data by performing a leastsquares fit. We fit only within the radius r = 0.6 because full numerical local minima, even for high charges, do not lie much beyond this radius, and the hyperbolic translated B = 1 fields become less accurate as approximations near the edge of the disc. In addition, we investigated different radii to fit our data to, and found that choosing values larger than 0.6 resulted in point-particle approximations that did not successfully estimate the full numerical results. Figure 2 shows the analytic potential for α = 64.3 compared to the numerical approximations with κ = 0.1, L = 1. The curves are in close agreement for radii in the range r ∈ [0, 0.6], but diverge as r increases further, as expected.
The inter-soliton interaction
We can obtain a numerical approximation for the inter-soliton interaction in a similar way: we can numerically calculate the energy of a product ansatz of two translated B = 1 solitons, subtract off the potential energies associated with each component soliton (according to the numerical approximation above) and the energy of two B = 1 solutions, and plot the resulting energy as a function of the geodesic separation of the positions of the solitons. These static approximations are shown as red curves in figure 4, where the upper curve represents the inter-soliton energy of a pair of solitons in phase, and the lower curve represents solitons out of phase. Relative phase differences are calculated with respect to the geodesics that the particles lie on (see figure 3), so that a pair of soliton field with internal phases ψ a and ψ b have a relative phase difference χ = χ(ψ a , ψ b ). In keeping with results found in other spacetimes, we find that pairs of solitons at large separations are in the maximally repulsive channel when they are in phase (χ = 0), and in the maximallFor higher topological chargesy attractive channel when they are out of phase (χ = π).
With this numerical data as a guide we can fit the out-of-phase interaction using a Morse potential of the form where ρ is the geodesic separation between the solitons, D is the depth of the potential at its minimum, ρ e is the separation at which the potential is minimised, and a is a parameter controlling the width of the potential. These parameters can be fit by performing a leastsquares fit with the numerical data. Since the product ansatz is only valid for well-separated solitons, we fit the data in the region where the separation between the solitons is greater JHEP09(2015)009 In order to introduce a dependence of the potential on the relative phase difference of the two solitons, χ, we assume that the solitons are at their most attractive when they are out of phase, and their most repulsive when they are in phase i.e.
JHEP09(2015)009 7 Baby Skyrmion rings and shells
We now seek energy minima of the point-particle approximation above for various values of B. For a configuration of B solitons with disc coordinates x x x a and internal phases ψ a (where 1 ≤ a ≤ B) we seek to minimise the energy where r a ≡ |x x x a |, Φ(r) and U χ (ρ) are the potentials given above and d(x x x a , x x x b ) is the geodesic distance (2.2) between points x x x a and x x x b . We minimise the point-particle energy using a multi-start stochastic hill-climbing method, where a randomly generated initial condition is allowed to relax iteratively. For each topological charge we used randomly generated initial conditions, and the minimum energy configurations found are presented in figure 5. Solutions were verified using a finite temperature annealing method. Particles are coloured according to their phases (see figure 3).
We find that the point-particle approximation we have derived also favours configurations that form concentric ring-like structures. Furthermore, the qualitative forms of the energy minima predicted by the point-particle approximation are very close to the forms of the full numerical solutions found in figure 1, although the results of the point-particle approximation are much more symmetric. This is to be expected since solitons in the full model have a finite size, and so can overlap and interact in more complicated ways.
For charges B ≤ 7 the approximation accurately predicts not only the ring structure of the configurations, but also the alignment of internal phases. For even and odd charges the particles have internal phase differences of π and π ± π/B respectively, as observed in the full model. At B = 8 the approximation gives the correct distribution of phases, although it does not reproduce the deformed ring structure.
Furthermore, the point-particle approximation correctly captures the first popcorn transition at B = 9, and closely estimates the qualitative forms of the ring structures for all B ≤ 20. The point-particle minima disagree with the full numerical results at charges B = 11, 13, 15, 16, 19 and 20. This may be due to the point-particle approximation not taking into consideration the radial forms of the B = 2 and 3 solitons, or may be a result of assuming the solitons can be approximated by particles with zero size. However, even when the exact forms do not agree, the difference is only by the position of a single particle.
These results suggest that the point-particle approximation may be a useful tool in qualitatively estimating the forms taken by AdS baby Skyrmions for higher charges. Performing further numerical minimisations of (7.1) allows us to predict a second popcorn transition at charge B = 27. In fact, by using the predicted forms as a guide for choosing initial conditions, full numerical energy minimisations reveal the popcorn transition to three layers around charge B = 27, 28, although it is difficult to say exactly when the transition occurs due to the presence of two local minima with similar energies (shown in figure 6, with energies given in table 2). The point-particle approximation predicts a third and fourth popcorn transition at charges B = 54 and B = 95 (see figure 7). Since these charges are very large it would be difficult to perform an extensive search for the global minima in the full numerical model, even with the guidance provided by the point-particle approximation. However, the previous success of the model would indicate that the true popcorn transitions would, indeed, be near these points.
Finally, investigation of the results predicted by the point-particle approximation for still higher charges may be able to provide clues as to the lattice structure preferred by the baby Skyrmions in the infinite charge limit. The energy minimum for charge B = 200 can be seen in figure 8. While clear rings can still be observed in the outermost layers, the centre of the configuration is heavily deformed and may indicate an emerging lattice structure.
Conclusions
We have investigated the static solitons and multi-solitons of the massless baby Skyrme model in a (2 + 1)-dimensional Anti de-Sitter spacetime. We have found that the spacetime curvature acts by adding an effective mass to the model which allows us to find static solutions to the equations of motion without a mass term. Solitons for topological charges 1 ≤ B ≤ 3 were found to have a radially symmetric form, while higher-charge multi-solitons were found to form concentric ring-like solutions. As B increases, a series of transitions occur where the minimal energy solutions take the form of concentric rings with increasing numbers of layers, a phenomenon reminiscent of the baryonic popcorn transitions studied recently in the context of holographic dense QCD. In order to investigate these transitions further a point-particle approximation was JHEP09(2015)009 derived which was able to qualitatively estimate the forms of minimal energy solutions for a wide range of topological charges, as well as accurately predict the charges at which further popcorn transitions occur.
The point-particle approximation derived may indicate an emerging lattice structure for baby Skyrmions in AdS in the limit B → ∞. The minimum energy configuration found for B = 200 displayed clear rings towards the edge of the space, although near the origin the structure appeared significantly less ring-like, and seemed suggestive of an emergent lattice formation. Further investigation into this area would be required to make any further claims.
The O(3)-sigma model stabilised by a baby Skyrme term has previously been studied in an AdS-like spacetime as a low-dimensional toy model of holographic QCD [9], specifically the Sakai-Sugimoto model. It has been argued that a better low-dimensional analogue of this model would involve stabilising an O(3)-sigma model using a vector meson term, and such a model has been studied in a parameter regime where the two toy models are similar [10], although investigation of a more interesting parameter regime proved difficult. It would therefore be interesting to study the vector meson model in pure AdS to see if a parameter regime could be found to give qualitatively different results to the baby Skyrme model.
Finally, the natural extension to this paper is to study the full (3 + 1)-dimensional Skyrme model in AdS. We have demonstrated that a multi-ring like structure exists in two dimensions for AdS baby Skyrmions and this property may translate to the higher dimensional model. This could manifest as spherical multi-shells with polyhedral symmetry groups and hence be approximated by multi-shell rational maps and it would be very interesting to study AdS Skyrmions to see if such configurations give lower energy solutions. | 5,751.2 | 2015-09-01T00:00:00.000 | [
"Physics"
] |
Enhancing Aspect Term Extraction with Soft Prototypes
Aspect term extraction (ATE) aims to extract aspect terms from a review sentence that users have expressed opinions on. Existing studies mostly focus on designing neural sequence tag-gers to extract linguistic features from the to-ken level. However, since the aspect terms and context words usually exhibit long-tail distributions, these taggers often converge to an inferior state without enough sample exposure. In this paper, we propose to tackle this problem by correlating words with each other through soft prototypes . These prototypes, generated by a soft retrieval process, can introduce global knowledge from internal or external data and serve as the supporting evidence for discovering the aspect terms. Our proposed model is a general framework and can be combined with almost all sequence tag-gers. Experiments on four SemEval datasets show that our model boosts the performance of three typical ATE methods by a large margin.
Introduction
Aspect term extraction (ATE) is a fundamental subtask in aspect-based sentiment analysis. Given a review sentence, ATE aims to extract all aspect terms that users have expressed opinions on. For example, from the review "The Bombay style bhelpuri is very palatable.", ATE aims to extract "bhelpuri".
ATE has been widely studied in the last twenty years. Early researches are devoted to design rulebased (Popescu and Etzioni, 2005) and feature engineering-based (Li et al., 2010) methods. With the development of deep learning techniques, recent researches mostly regard ATE as a sequence labeling task and focus on developing various types of neural models (Liu et al., 2015;Xu et al., 2018;Ma et al., 2019) to generate a tag sequence for the review. Though achieving impressive progress, current sequence taggers mentioned still face a serious challenge: the taggers may converge to an inferior state due to the lack of samples for tail words. As shown in Figure 1, about 80% aspect terms and context words (i.e., non-aspect terms) appear no more than five times in the commonly-used SemEval datasets. Without enough sample exposure, neural models can hardly achieve an optimal performance (He et al., 2018;Chen and Qian, 2019).
To tackle this challenge, correlating samples with each other may offer helping hands. For example, if we correlate the rare aspect term "bhelpuri" with a frequent one like "food", there will be more abundant samples for "bhelpuri" than ever. The problem then becomes how to build such a tokenlevel correlation. Retrieving synonyms is an intuitive approach to this problem, but it has two limitations. Firstly, synonyms only exist for a small number of words in the vocabulary. This will make the correlations incomplete. Though we can calculate the nearest neighbors for a certain word based on the pre-trained word embeddings, it is not guaranteed that they have a similar semantic meaning. Secondly, in ATE, the existence of an aspect term depends on whether there are opinions on it. That is to say, we need to build a dynamic correlation for a certain word based on its entire contexts rather than the word itself. Indeed, if the retrieval is con- ducted based on an individual token, the above two limitations always exist.
In this paper, we propose a soft retrieval method to build the token-level correlation for both aspect terms and context words. Rather than conducting a hard retrieval for individual tokens, we turn to retrieve the tokens' counterparts according to their contexts. As shown in Figure 2, after conducting the soft retrieval, we can obtain a generated sample strictly corresponds to the input sample in every position. We name the generated sample "soft prototype" since it is actually a simplified prototype that can build a reference point for guiding the tagging process for the input sample.
We resort to the language models (LMs) to implement the soft retrieval and generate high-quality soft prototypes. As a self-supervised task, language modeling needs no extra annotations and can absorb data-specific global knowledge. Moreover, LMs tend to generate frequent outputs , which exactly meets our needs for correlating a rare word in the input sample with a frequent one in the soft prototype. Specifically, we first pretrain bi-directional LMs using the given training samples on ATE datasets. Alternatively, we can take advantage of large-scale unlabeled data like Yelp and Amazon reviews to pre-train LMs. Then, after fixing the pre-trained LMs, we can infer each token's prototype according to its contexts for both the training and testing samples.
We regard the generated soft prototypes as the supporting evidence for tagging aspect terms, and design a simple and effective gating mechanism to fuse the knowledge embedded in both samples before sending them to a sequence tagger. The soft prototypes can be combined with almost all existing sequence taggers. To demonstrate the effectiveness of our proposed model, we conduct experiments on four SemEval datasets by adding the generated soft prototypes on three existing sequence taggers. The results prove that our soft prototypes significantly boost the performance of their original counterparts.
Related Work
Aspect Term Extraction Early researches for ATE mainly involve pre-defined rules (Hu and Liu, 2004;Popescu and Etzioni, 2005;Wu et al., 2009;Qiu et al., 2011) and hand-craft features (Li et al., 2010;Liu et al., 2012Liu et al., , 2013. With the development of deep learning techniques, neural methods have become the mainstream. ATE can be viewed as either a supervised or an unsupervised task. For unsupervised ATE, the commonlyused neural methods are based on topic models (He et al., 2017;Liao et al., 2019). For supervised ATE, the researchers focus on developing various types of neural sequence taggers (Liu et al., 2015;Wang et al., 2016;Yin et al., 2016;Wang et al., 2017;Li and Lam, 2017;Xu et al., 2018;Ma et al., 2019). A recent trend is towards the unified framework Luo et al., 2019;He et al., 2019;Hu et al., 2019;Chen and Qian, 2020), where the interactive relations between ATE, opinion term extraction (OTE), or aspect-level sentiment classification (ASC) are exploited to enhance the overall performance. Xu et al. (2019) post-train BERT on domain-specific data to boost its sequence labeling performance. Li et al. (2020) propose to generate additional datasets for improving the performance of ATE.
In this paper, we focus on the supervised scenario. Different from the aforementioned supervised models, we develop a novel model to enhance ATE. By automatically generating and utilizing soft prototypes, we correlate samples with each other, which greatly enhances the learning process of sequence taggers. Moreover, the decoupling of soft prototypes from taggers makes our model flexible and general, i.e., it can be combined with almost all neural sequence taggers.
Prototypes in Neural Networks
The idea of prototypes (or templates) originates from information retrieval (IR) approaches for sentence matching tasks like response generation (Ji et al., 2014;Hu et al., 2014). They aim to retrieve a related sample from the dataset as the counterpart of the input sample. More recently, several studies shed new light in this domain by deeply fusing prototypes with neural networks. Many of them use the taskdependent metrics , common metrics such as Jaccard similarity (Gu et al., 2018;Cao et al., 2018;, or existing tools like Lucene (Cao et al., 2018) to retrieve prototypes, and then input the prototypes into a neural model for generating outputs. follows another line, where the prototype (the target words related to a source word in machine translation) is generated using a pretrained Seq2Seq model.
The approach of generating words via LMs is inspired by a recent study (Kobayashi, 2018). However, the method in Kobayashi (2018) is developed for text classification and is not suitable for the ATE task here. Concretely, their method randomly replaces a small percentage (typically 10%) of original training words with the generated ones and then discard the original words. This operation may work well for text classification tasks which only require sentence-level information. For token-level tasks like ATE, the original words are however necessary for tagging each token correctly. Moreover, the small percentage of replacement implies that the generated knowledge cannot be fully incorporated into the new sample. In contrast, we generate a prototype for each word in the sentence, and then deeply fuse the original word with its corresponding prototype to make good use of their embedded knowledge for ATE.
To the best of our knowledge, we are the first to introduce the retrieval method to handle the data deficiency problem in ATE. To this end, we propose a new approach to generate and utilize soft prototypes that can build the token-level correlation for aspect terms and context words.
Methodology
In this section, we first illustrate the overall framework for enhancing ATE with soft prototypes. We then detail the generation and utilization of soft prototypes. Lastly, we describe the objective function and the training procedure.
The Overall SoftProto Framework
Aspect term extraction (ATE) aims to extract aspect terms from a review sentence that users have expressed opinions on. Given a sentence S = {w 1 , w 2 , ..., w n }, we formulate ATE as a sequence labeling task that aims to predict a tag sequence Y = {y 1 , y 2 , ..., y n }, i.e., learning the mapping S → Y , where y ∈ {B, I, O} denotes the beginning of, inside of, and outside of an aspect term.
To incorporate soft prototypes to ATE, we slightly modify the traditional learning process. Formally, rather than directly learning the map- ping from S to Y , we additionally introduce a soft prototype P for each S and learn the new mapping [S, P] → Y . Given S, the soft prototype P is automatically generated by a soft retrieval mechanism, and can serve as the supporting evidence to discover the aspect terms. As shown in Figure 3, we summarize the above processes into the SoftProto framework that mainly consists of three modules: • A prototype generator is used for conducting the soft retrieval process and generating the corresponding soft prototype P for S. • A gating conditioner is used for merging S's representation and P into the fused vectors F. • A sequence tagger is used for predicting the tag sequence Y based on F. Next, we will illustrate each module in detail.
Prototype Generator
To efficiently implement the soft retrieval and generate high-quality soft prototypes, we resort to the language models (LMs) to build a prototype generator. Specifically, we first pre-train two LMs, where − − → LM and ← − − LM is the forward and backward language model parameterized by − − → θ LM and ← − − θ LM , respectively. Then we infer soft prototypes based on the pre-trained LMs.
One can use either the ATE training set or other unlabeled external data like Yelp reviews to pretrain LMs, and we will examine the effects of these two types of data in the experiments. The details of pre-training LMs and inferring soft prototypes are as follows.
Pre-training Language Models As shown in Figure 4(a), given S, the forward − − → LM computes the probability of S by modeling the probability of token w i conditioned on the history (w 1 , .., w i−1 ): In the pre-training process, − − → LM tries to maximize the log likelihood of the forward direction: The 1 Bombay 2 style 3 bhelpuri 4 is 5 very 6 palatable 7
trainable LMs
Bombay 2 style 3 bhelpuri 4 is 5 very 6 output:w i input:w 1: i-1 (a) Pre-training a forward language model. Similarly, the backward ← − − LM tries to maximize the log likelihood of the backward direction: After the pre-training process converges, we can fix − − → θ LM and ← − − θ LM , and infer a soft prototype P conditioned on S, − − → θ LM , and ← − − θ LM for each sample in the training and testing sets in ATE 1 .
Generating Soft Prototypes After getting − − → θ LM and ← − − θ LM , we then infer the soft prototype P. We still take the forward − − → LM as the example. As shown in Figure 4(b), for generating the forward prototype vector − → p i for word w i , we feed the prefix sentence {w 1 , w 2 , ...., w i−1 } to the fixed − − → LM and collect the output probability distribu- To suppress noise, we do not directly select the word o 1 i with the largest output probability. Instead, we preserve the words {o 1 i , o 2 i , ...., o K i } with K-largest output probabilities, and normalize their probabilities to sum 1 as the weighted scores {s 1 i , s 2 i , ...., s K i }. We call the selected words as "oracle words" . Then we map these words with a pretrained embedding lookup table E and obtain their word vectors Finally, we aggregate the oracle words by their weighted scores to calculate − → p i for word w i : Similarly, we can calculate the backward prototype vector ← − p i . To consider the context information in both directions, we use the average of − → p i and ← − p i as the final prototype vector p i for word w i . We then 1 Note that the testing ATE samples are not used for pretraining the LMs, thus there is no data leakage in this process. regard the set of prototype vectors {p 1 , p 2 , ...., p n } as the soft prototype P for the sentence S 2 .
Gating Conditioner
For better discovering the aspect terms, we need to leverage the supporting evidence embedded in the soft prototype P. Intuitively, we have two schemes to incorporate the soft prototypes into ATE: inside or outside the sequence tagger. We choose the latter because we want to decouple the soft prototypes from the sequence taggers, such that we can make the prototypes suitable for all types of taggers. Hence, we introduce an additional upstream module named the gating conditioner to fuse the soft prototype P with the original sentence S.
The soft prototype P provides two kinds of information : (1) P itself has embedded data-specific knowledge that can serve as supporting evidence.
(2) P also helps to refine the original representation of S. Accordingly, the gating conditioner is developed to conduct two types of operations on P. We first map S = {w 1 , w 2 , ..., w n } with the pretrained embedding lookup table E and obtain the corresponding word vectors X= {x 1 , x 2 , ..., x n }. Then, we conduct two types of operations on X and P to obtain the fused vectors F: where σ is the Sigmoid function, W and b are trainable parameters, ⊕ and denotes the concatenation and element-wise multiplication operation, respectively. In Eq. 5, the concatenation of P and X makes the representation more discriminative than before. Moreover, the gating mechanism can help select the important dimensions and further refine the representation. The generated fused vectors F= {f 1 , f 2 , ..., f n } then act as the enhanced representation for S = {w 1 , w 2 , ..., w n }.
Sequence Tagger
The sequence tagger aims to extract high-level semantic features from the low-level tokens, and predicts a tag sequence Y for the review S based on these features. In order to investigate the influence of soft prototypes, we need to control variables in SoftProto. Therefore, we choose three existing sequence taggers as our basic models, including BiLSTM (Liu et al., 2015), DECNN (Xu et al., 2018), and Seq2Seq4ATE (Ma et al., 2019). Readers can refer to the original paper for more details or Section 4.2 for a quick glance. Please note that the only difference between an original sequence tagger and its variant enhanced by our proposed SoftProto is the representation of S. In other words, by comparing the performance of a sequence tagger and its enhanced variant, we can observe that how ATE benefits from soft prototypes.
For training SoftProto, we simply compute the cross-entropy loss L: where n is the length of S, J is the category of labels, y i andŷ i are the predicted tags and ground truth labels. We then train all parameters with back propagation.
ATE Datasets
To evaluate the effectiveness of SoftProto for ATE tasks, we conduct extensive experiments on four datasets from SemEval 2014 (Pontiki et al., 2014), 2015 (Pontiki et al., 2015) and 2016 (Pontiki et al., 2016). These datasets contain review sentences from the restaurant and laptop domains with annotated aspect terms. All of them have a fixed train/test split, and we further randomly hold out 150 training samples as the validation set for tuning hyper-parameters. The statistics of four ATE datasets are summarized in Table 1 3 . Details for Pre-training Language Models As mentioned in section 3.2, we use two types of data to pre-train the LMs: (1) The ATE training sets. In this setting, we directly use the same training/validation samples of each SemEval dataset to pre-train its own LMs. Hence, there are four groups of pre-trained LMs (including − − → LM and ← − − LM ) for four datasets, respectively. We denote this setting as SoftProtoI (I for internal knowledge). (2) The unlabeled external data. In this setting, we additionally collect 100,000 training and 10,000 validation samples from Yelp Review (Zhang et al., 2015) and Amazon Electronics (McAuley et al., 2015) datasets, respectively. LMs pre-trained on Yelp serve as the prototype generator when training and evaluating SoftProto on {Res14, Res15, Res16} datasets, while those pre-trained on Amazon are used for the Lap14 dataset. We denote this setting as SoftProtoE (E for external knowledge). For pretraining the LMs, we adopt the Fairseq 4 toolkit (Ott et al., 2019) and the basic transformer decoder LM architecture (Vaswani et al., 2017) Parameter Settings The only hyperparameter in our SoftProto is the number K of oracle words when generating soft prototypes. We use a grid search to select K in the range [1,10] based on the validation performance, and consequently set K={10, 7, 10, 7} for four datasets, respectively. For other parameters, including the pre-trained word embedding, epoch number, optimizer selection, learning rate, and batch size, we inherit the default settings from the original papers (Liu et al., 2015;Xu et al., 2018;Ma et al., 2019). Models achieving the maximum F1-scores on the validation set are used for evaluation on the testing set. We report the averaged F1 scores over 5 runs with random initialization. We run all methods in a single 2080Ti GPU.
Compared Methods
We choose two kinds of baselines. The first is the SemEval winners for corresponding datasets. In order to discern the impacts of soft prototypes on pure ATE task, we do not choose the hybrid models as the base taggers. Instead, we adapt Soft-Proto to three pure sequence taggers, including BiLSTM (Liu et al., 2015; which is an RNN-based sequence tagger including a vanilla 4 https://github.com/pytorch/fairseq. 5 In practice, we also tried a self-constructed single-layer LSTM architecture and got a similar performance in language modeling. Since the Fairseq toolkit has already integrated the transformer architecture, we directly use it for convenience. , while other results are the averaged scores of 5 runs with random initialization. The best scores are in bold, and the best baselines are underlined. The subscript denotes the improvement/decrease after enhancing an ATE tagger with a certain method (e.g., BiLSTM + SoftProtoE vs. BiLSTM ). * denotes the statistical significance between the orginal methods and their enhanced counterparts at p < 0.05 level. BiLSTM architecture, DECNN (Xu et al., 2018) which is a CNN-based sequence tagger which uses two types of pre-trained embeddings and stacked convolutional layers to extract context features for tagging aspect terms, and Seq2Seq4ATE (Ma et al., 2019) which is an attention-based sequence tagger and uses a modified encoder-decoder framework to extract aspect terms. We further compare SoftProto with two simple enhancing methods, namely Synonym and Replacement. For Synonym, we substitute the top-K oracle words with top-K nearest synonyms measured by the cosine distance of word vectors while keeping the other settings unchanged. For Replacement, we use the prototype generated by our language models, but replace the training words with the method in Kobayashi (2018). The modified samples are sent to the sequence tagger directly 6 .
Main Results
The comparison results for all methods are shown in Table 2 . Obviously, SoftProto greatly boosts all basic sequence taggers. For example, DECNN achieves an overall best performance among 6 We use a grid search to select the replacement probability and present the best results. Prototype tokens are generated using the LMs pre-trained on the Yelp/Amazon data. baselines, while SoftProtoI and SoftProtoE further achieve {1.28%,0.31%,0.83%,1.49%} and {1.80%,1.35%, 2.09%,2.59%} absolute gains for DECNN on four datasets, respectively. There even exists an amazing 3.30% gain after incorporating SoftProtoE to Seq2Seq4ATE on the Res14 dataset. This strongly demonstrates the effectiveness of proposed soft prototypes for the ATE task. By correlating samples through the soft prototypes, the training of sequence taggers can easily converge to a better state than before.
We also find that the improvements brought by the SoftProto are more remarkable on small datasets (Res15 and Res16) than those on large ones (Res14 and Lap14). This is because there are not enough samples on small datasets to train a well-performed sequence tagger, and the discovery of aspect terms largely relies on the knowledge embedded in the soft prototypes. Moreover, Soft-ProtoE performs much better than SoftProtoI. The reason is that the external unlabeled data from Yelp and Amazon is much bigger and more informative than the original ATE datasets. Accordingly, the pre-trained LMs in SoftProtoE contain more knowledge than those in SoftProtoI and can generate more discriminative soft prototypes.
The performances of Synonym and Replacement are far from satisfactory, and they even result in decreases in some cases. Synonym generates noisy prototypes by only considering the individual tokens, and can hardly handle the unknown (UNK) words. The ineffectiveness of Replacement lies in two issues. Firstly, it simply replaces the original words with the generated ones, which incurs information loss. Secondly, the generated knowledge cannot be fully utilized due to the small percentage of replacement. The inferior results demonstrate that these two methods are not qualified for enhancing the ATE task.
Perplexities of Language Models
In this section, we present the perplexities of language models pre-trained on different datasets. As shown in Table 3, the perplexity is linearly related to the size of datasets. The larger the dataset, the lower the perplexity. Clearly, LMs trained on external Yelp/Amazon datasets have much lower perplexities than original SemEval datasets. Among the SemEval datasets, Lap14 and Res14 have relatively more samples than Res15 and Res16, resulting in relatively lower perplexities. Moreover, language models in forward and backward directions have no significant differences in performance. We will release all pretrained language models in time for encouraging further studies on soft prototypes.
Ablation Study
Without loss of generality, we choose two DECNN +SoftProto models and conduct the ablation study to investigate the effects of different modules in SoftProto. We sequentially remove the forward LM, the backward LM, the concatenation operation, and the gating operation to obtain four simplified variants.
As shown in Table 4, all variants have a performance decrease of the F1-score. The results demonstrate that : (1) Considering both directions in language modeling can generate better soft prototypes.
(2) Both kinds of conditioning operations (i.e., gating and concatenation) can contribute to the utilization of the soft prototypes.
Impacts of Oracle Words
In the prototype generator, the hyper-parameter K controls how many oracle words are taken into account when generating soft prototypes. To investigate the impacts of the oracle words on different datasets, we vary K in the range of [1,10] stepped by 1, and present the results of two DECNN+SoftProto models in Figure 5. Generally, the F1-scores of DECNN have an overall upward trend when more oracle words are introduced. This is explainable since the oracle words actually provide the data-specific knowledge that can be aggregated into the soft prototypes. Moreover, owing to the high confidence of language models trained on Yelp/Amazon datasets, the curves of SoftProtoE are smoother than those of SoftProtoI. The reason is that language models with high perplexities almost inevitably output noisy oracle words and bring about the high variance when generating soft prototypes.
Case Study
To have a close look, we further select six samples from the testing sets for a case study. Due to the space limitation, we only present the results of the best baseline DECNN and its two variants enhanced by SoftProto in Table 5. S1∼S2 are in similar circumstances. DECNN only extracts a single word as the aspect term and
Performance on Tail Aspect Terms
To prove that SoftProto are indeed beneficial for identifying the tail aspect terms, we keep the training sentences unchanged and only preserve the testing sentences containing the tail aspect terms (appearing no more than 3 times in training sentences). We present the performance of DECNN and its two variants enhanced by SoftProto on these sentences in Table 6. Clearly, SoftProto enhances the ability of DECNN in recognizing the tail aspect terms by a large margin.
Prototypes Generation with BERT
Since BERT (Devlin et al., 2019) is pre-trained as a masked language model (MLM), we wonder if it can serve as the prototype generator. Hence, we regard the generation of prototypes as a cloze test. We sequentially mask each word and collect the top-K output words of the MLM as the oracle words. We name this variant SoftProtoB. The setting of K and the usage of the oracle words remain the same as those in SoftProtoI and SoftProtoE, thus the only difference among all these SoftProto variants is the way of pre-training language models. We conduct experiments on two pre-trained BERT models, where SoftProtoB (BASE) is the officially released BERT-Base-Uncased model, and SoftProtoB (PT) is further post-trained on domainspecific data and released by Xu et al. (2019). Since both SoftProtoB and SoftProtoE make use of the external data, they are fair competitors and we list the results of these two variants in Table 7. From the results in Table 7, we can see that the BERT-based models are also qualified for generating the soft prototypes. In general, SoftPro-toB (BASE) generates domain-independent oracle words and achieves limited improvements over the base model, while SoftProtoB (PT) can generate domain-specific oracle words and achieves a comparable performance with SoftProtoE.
Analysis on Computational Cost
Since we use the pre-trained language models, the cost for generating soft prototypes can almost be ignored. To demonstrate that SoftProto does not incur the high computational cost in utilizing soft prototypes, we run three sequence taggers on the Laptop 2014 dataset, and present the trainable parameter number and running time per epoch of each method before and after introducing SoftProto in Table 8. From Table 8, we can conclude that SoftProto is a lightweight framework and does not add much cost on the original sequence taggers.
Conclusion
In this paper, we present a general SoftProto framework to enhance the ATE task. Rather than designing elaborated sequence taggers, we turn to correlate samples with each other through soft prototypes. For this purpose, we resort to the language models for automatically generating soft prototypes and then design a gating conditioner for utilizing them. The performance of SoftProto can be further improved after introducing the large-scale external unlabeled data like Yelp and Amazon reviews. Extensive experiments on four SemEval datasets demonstrate that SoftProto greatly boosts the performance of the typical ATE methods and introduces small computational cost. | 6,465.2 | 2020-11-01T00:00:00.000 | [
"Computer Science"
] |
Ultrasound differential phase contrast using backscattering and the memory effect
We describe a simple and fast technique to perform ultrasound differential phase contrast (DPC) imaging in arbitrarily thick scattering media. Although configured in a reflection geometry, DPC is based on transmission imaging and is a direct analog of optical differential interference contrast. DPC exploits the memory effect and works in combination with standard pulse-echo imaging, with no additional hardware or data requirements, enabling complementary phase contrast (in the transverse direction) without any need for intensive numerical computation. We experimentally demonstrate the principle of DPC using tissue phantoms with calibrated speed-of-sound inclusions.
Ultrasound imaging is generally based on pulse-echo sonography, which excels at revealing reflecting interfaces and point-like structures in a thick sample. However, the quality of ultrasound images becomes degraded when aberrations are present in the sample since these cause acoustic phase distortions or speed-of-sound (SoS) variations that undermine the accurate reconstruction of beamformed images. 1 Longstanding effort has gone into developing techniques to restore image quality by correcting for the effects of aberrations. [2][3][4] It has also been recognized that the aberrations themselves can be of interest, for example, in helping identify soft-tissue pathologies. 5,6 But aberrations are difficult to directly image with ultrasound since they generally do not cause acoustic reflections. Initial attempts at imaging aberrations were based on transmission geometries using multiple receivers 7,8 or a passive backreflector. 9 Much more practical are techniques that use conventional pulse-echo sonography with a single ultrasound probe since these enable imaging in arbitrarily thick samples. 10 For example, 2D phase maps can be obtained from a 1D array, by inferring the local SoS based on the iterative application of a local focusing criterion. 11,12 Alternatively, such maps can be obtained by a method of computed ultrasound tomography based on the numerical inversion of a forward model. 13,14 These approaches are highly promising, although computationally intensive.
We present here a technique to perform phase-contrast imaging directly and in real time, without the need for a forward model or intensive computation. Our technique is a direct analog of optical differential interference contrast (DIC) microscopy, here applied to ultrasound. The principle of DIC is well known. 15 In essence, a DIC microscope is a transmission-based lateral shear interferometer, which, remarkably, can perform phase contrast imaging with completely incoherent illumination. This is achieved by using a beam splitter (a Nomarski prism) to split the incoherent illumination into two identical copies that are laterally sheared relative to one another. Even though the illumination is incoherent, it remains mutually coherent with its twin copy exactly at the shear separation distance. The two copies co-propagate and transilluminate a sample, whereupon they can accumulate phase differences caused by the sample structure. Finally, the copies are recombined with a second beam splitter (a matched Nomarski prism), and their resulting interference is imaged onto a camera with a high resolution microscope, yielding an image of the phase gradients in the sample along the shear axis.
In our case, incoherent illumination (or rather insonification) is obtained by launching a plane wave transmit pulse into a thick sample and relying on random backscattering from the sample to render the return pulse incoherent. 16 To obtain twin copies of this backscattered field that are laterally sheared relative to one another, we rely on a phenomenon called the memory effect, [17][18][19] whereupon by simply steering the tilt angle of the transmit plane wave, the backscattered field becomes laterally displaced while remaining, otherwise, largely unchanged (over a limited range).
An experimental demonstration of the memory effect is shown in Fig. 1. We use a Verasonics Vantage 256 equipped with a GE9L-D linear array probe (192 elements, 0.23 mm pitch, 5.3 MHz center frequency, 4Â sampling frequency) serving as both a transmitter and a receiver. When a transmit plane wave pulse is launched normal to the sample surface [CIRS Model 049 phantom- Fig. 1(a)], the resulting backscattered echo RF(x, t) is incoherent, with an intensity that takes on the appearance of speckle [ Fig. 1(c)-x is the transducer coordinate and t is the time]. Now, consider the effect of electronically tilting the transmit pulse a small angle h [ Fig. 1(b)]. Because of the memory effect, the received speckle pattern not only becomes shifted in x but also becomes delayed in t because of the additional path lengths involved from transmit to scatter and back to receive. The memory effect stipulates that the trajectory of a speckle grain follows a mirrorreflection law, 18,19 schematically depicted in Fig. 1(b). That is, a speckle grain received at coordinate (x 0 , t 0 ) when h ¼ 0 is received at coordinate ðx h ; t h Þ when the transmit angle is tilted, where, from geometrical considerations, where c is the speed of sound in the medium, here 1540 m/s. It should be noted that the memory effect is not exact and that the speckle patterns remain correlated over only a limited tilt range that scales inversely with the acoustic transport length. 19 At first glance, it might appear that this range should be small because this length is long. However, the speckle patterns here are time-resolved (or time-gated), meaning that the effective sample interaction thickness for any given speckle pattern is only a small fraction of a millimeter, significantly extending the correlation range. 20 We found that the above relations remained valid even for tilt angles as large as 612 (the maximum range we explored). We now consider how backscattering and the memory effect enable us to perform an ultrasound analog of optical DIC. As noted above, DIC requires sheared copies of the same incoherent field. In our case, the incoherent field is produced by backscattering and twin copies of this field are sheared by the memory effect [ Fig. 2(a)]. Our technique differs from optical DIC in that the shearing need not be instantaneous, but instead occurs sequentially. Moreover, the phase differences accumulated by the fields need not be measured by interference, which converts phase differences into intensity variations that can be recorded using a camera. Instead, the phases of the fields are measured directly by the transducer, and their differences are evaluated numerically a posteriori. Accordingly, because our technique does not require interference, we refer to it as ultrasound differential phase contrast (US DPC), to distinguish it from optical DIC.
The final correspondence between US DPC and optical DIC is related to imaging. In optical DIC, the numerical aperture (NA) of the ) is insonified indirectly by backscattered speckle, rather than directly by the transmit pulse (orange segment), ensuring that the point is insonified in transmission mode from below rather than reflection mode from above. microscope objective defines the spatial resolution with which sample points are imaged (or, more correctly, the phase differences between pairs of sample points). In US DPC, imaging is performed instead numerically by beamforming the raw RF data received by the transducer array. Because US DPC makes use of plane wave transmits with differing steering angles (at least two), it is naturally compatible with the standard beamforming method of coherent plane wave compounding 21 with no modifications whatsoever. But standard beamforming is designed to perform pulse-echo sonography, which is a single-scattering reflection modality. In particular, standard beamforming assumes that the transmit pulses are directly incident on each reconstructed sample point with the shortest time delay possible [orange segment in Fig. 2(b)]. In our case, US DPC is a transmission modality as opposed to a reflection modality. The insonification arriving at each reconstructed sample point arrives not directly, but rather indirectly by means of backscattering from regions deeper within the sample. For standard beamforming to be tricked into reconstructing indirectly insonified sample points rather than directly insonified sample points, it suffices here to simply add a time delay to the transmit path, as depicted in Fig. 2(b) (thick blue segments). In other words, it suffices to numerically shift the raw RF data by a delay T cos h, where T is a user-defined arbitrary time delay presumed to be reasonably large (more on this below). This delay shift must be performed prior to beamforming.
Our algorithm for DPC is summarized below: (1) Obtain RF n ðx; tÞ using a sequence of N transmit plane-wave pulses of tilt angle h n ðn ¼ 1…NÞ; (2) Numerically delay the raw RF signals: c RF n ðx; tÞ ¼ RF n ðx; t þT cos h n Þ, where T is user defined; Note that step 4 was derived from Eq. (1) and step 5 evaluates the local phase difference between beamform pairs. Also, note that standard pulse-echo imaging involves steps 1 and 3 only (without step 2). We experimentally demonstrate the performance of our DPC algorithm using the CIRS Model 049 phantom. This phantom contains four types of spherical elasticity inclusions 10 mm in diameter and 15 mm below the phantom surface. The inclusions feature differing SoS from the background SoS (Types I-IV: 1530 6 10 m/s, 1533 6 10 m/s, 1552 6 10 m/s, and 1572 6 7 m/s-according to manufacturer specifications). We insonified the phantom with a sequence of 13 plane wave transmit pulses with steering angles h n ranging from -0.15 to 0.15 radians, obtaining a sequence of receive signals RF n ðx; tÞ. Figure 3(a) shows a B-mode image of a type IV inclusion obtained by standard beamforming with coherent plane wave compounding. 21 The inclusion is barely visible since it presents little change in attenuation or scattering relative to the background. We then applied our DPC algorithm to the same set of receive signals, with T arbitrarily set to 800 sampling periods (corresponding to an added echo distance of 58 mm) and with inter-angle separation index m ¼ 1. Figure 3(b) shows a resulting DPC image I n¼7 ðx; zÞ using only a single pair of transmit angles. Figure 3(c) shows the DPC image I ¼ hI n i n after angular compounding over multiple pairs (12 in total). Figure 3(d) shows the effect of additional delay compounding I ¼ hI n i n;T with T ¼ 600; 800; 1000; 1200, involving a 4Â increase in the number of beamforming steps. Manifestly, compounding leads to an improvement in SNR, allowing the inclusion to become more readily apparent. We note that Figs. 3(c) and 3(d) bear close resemblance to optical DIC images, except that they are oriented here in the (x, z) plane rather than the (x, y) plane.
Somewhat more involved is the CIRS Model 049A phantom, which contains stepped cylinder inclusions of differing diameters at two deeper depths (30 and 60 mm below the surface). DPC images of Type IV inclusions (T ¼ 600, no delay compounding) are shown in Fig. 4 for various imaging conditions. Figures 4(b), 4(e), and 4(f) show the effect of increasing m, which leads to a larger angular separation between transmit pairs and, hence, to a larger shear separation Dx between image pairs. The result is a moderate increase in contrast, although at the cost of degraded transverse resolution. Figures 4(b), 4(g), and 4(h) show the effect of decreasing NA when beamforming. As expected, lowering the NA leads to a rapid elongation of the receive PSFs, significantly undermining axial resolution and highlighting the importance of NA.
Finally, we demonstrate the capacity of DPC to discriminate between different SoS values. In summary, we have demonstrated a method to perform US phase imaging. Although based on transmission imaging, the method works in a reflection configuration and makes use of standard beamforming based on plane wave compounding with no additional data requirements and very little additional computation. As such, it can be combined with conventional B-mode imaging while providing complementary phase (or SoS) contrast in real time and essentially for free, enabling sample structures to be revealed that would otherwise be invisible. However, our results come with caveats. The phantoms we utilized were relatively simple, with homogeneous background SoS that was assumed to be known a priori. It remains to be seen how well this technique performs with more complex samples. Moreover, in its current form, US DPC only reveals transverse phase gradients and is not well suited for imaging layered SoS variations such as often encountered in practice. Nevertheless, these preliminary results provide a basis for a promising modality in US imaging and for imaging in scattering media using the memory effect. 22,23 This work was funded by No. NIH R21GM134216. We thank Thomas Bifano and the BU Photonics Center for making a Verasonics machine available to us and Michael Jaeger for insightful discussions.
DATA AVAILABILITY
The supporting data for these findings are available from the corresponding author upon reasonable request. | 2,949.8 | 2021-03-14T00:00:00.000 | [
"Physics"
] |
Gravitational soft theorem from emergent soft gauge symmetries
We consider and derive the gravitational soft theorem up to the sub-subleading power from the perspective of effective Lagrangians. The emergent soft gauge symmetries of the effective Lagrangian provide a transparent explanation of why soft graviton emission is universal to sub-subleading power, but gauge boson emission is not. They also suggest a physical interpretation of the form of the soft factors in terms of the charges related to the soft transformations and the kinematics of the multipole expansion. The derivation is done directly at Lagrangian level, resulting in an operatorial form of the soft theorems. In order to highlight the differences and similarities of the gauge-theory and gravitational soft theorems, we include an extensive discussion of soft gauge-boson emission from scalar, fermionic and vector matter at subleading power.
Introduction
Soft or low energy theorems play a crucial role in understanding quantum field theory.They provide a connection to classical field theory, allow performing resummation of large infrared logarithms, and constrain scattering amplitudes.Soft theorems for the emission of single gauge bosons hold to next-to-leading power in the soft expansion [1,2].Gauge symmetry constrains the form of the next-to-soft terms and guarantees the universality of the soft limit.The soft theorem has been generalised to graviton emission by Weinberg [3] and subsequently by Gross and Jackiw [4,5].Recent developments of spinor-helicity amplitude methods renewed interest in the gravitational soft theorem and led to the discovery [6][7][8][9][10] that the universality of soft graviton emission extends to the first three terms in the soft expansion, that is, to sub-subleading power.A relation between the asymptotic symmetries of [11,12] and the soft theorems has been uncovered [13,14], which extends to subleading power [15,16] as well.
Why are there three universal terms in the soft theorem for gravity but only two for gauge theories?What role do the local symmetries underlying gravity and gauge theory play?And is there a more direct way of deriving the soft theorem from the underlying Lagrangians?In the spinor-helicity formalism, the existence of a third term in the gravitational soft theorem is a consequence of little-group scaling of the spinor-helicity amplitude.Compared to gauge-boson amplitudes, the helicity-two nature of the graviton leads to an additional singular term after soft rescaling of the amplitude, which can be related to the non-radiative amplitude through a recurrence relation.In relativistic quantum field theory, the helicity of the emitted particle is closely related to the gauge symmetry of the theory, and its coupling to the conserved currents.However, the gauge symmetries -non-abelian and diffeomorphism invariance -of the full relativistic Lagrangians are not suited to make the soft theorem manifest.In this work, we approach the above questions from the notion of soft-collinear effective Lagrangians for gauge theory [17][18][19][20] and gravity [21,22] and show that the soft theorem is essentially dictated by the powerful constraints imposed by the emergent soft gauge symmetry on the Lagrangian.This approach allows us to derive the soft theorem in an operatorial form that makes the appearance of the angular momentum operator transparent.It also explains the number of universal terms as a consequence of soft gauge invariance without any calculation.While the result is of course the well-known soft theorem, our approach provides an interesting new perspective on the structure and interpretation of the various terms in the soft theorem, especially for gravity, for which the sub-subleading term was uncovered only using the spinor-helicity formalism.
To be more specific, for gauge theory, the Low-Burnett-Kroll (LBK) theorem [1,2] relates the amplitude A rad , with an additional single soft gauge boson emitted, to the non-radiative amplitude A, stripped of its external polarisation vectors, through the formula where p i is the momentum of the emitter, t a i its non-abelian "colour" charge and u(p i ) its polarisation vector (spinor).The momentum k and ε a (k) refer to the momentum and polarisation vector of the emitted soft gauge boson, respectively.The first term is gaugeinvariant after one sums over all charged external particles and imposes charge conservation i t a i = 0. 1 The second term is manifestly gauge-invariant due to the anti-symmetry of the angular momentum operator where L µν i is the orbital angular momentum operator of particle i, and Σ µν i the spin operator.In contrast, the single-graviton radiative amplitude is related to the non-radiative amplitude as 3) The appearance of an additional universal term suggests that the gauge symmetry of gravity provides stronger constraints than in the ordinary gauge-theory case.The first two terms resemble (1.1) if one replaces the gauge charge by momentum, t a i → −p µ i , the gauge-boson polarisation vector by the graviton-polarisation tensor ε a µ (k) → ε µν (k), and adjusts the coupling constant g → κ/2, very suggestive of the gauge-gravity double copy [23,24], for a comprehensive review, see [25].Indeed, the first two terms of the soft theorem can be constructed this way from the LBK amplitude already at the Lagrangian level in the soft-collinear effective theory [26].
In this work, we suggest an alternative perspective on the terms appearing in the soft theorem for gauge and graviton emission, which emphasises the underlying local symmetries in these two cases.This manifests itself already in the structure of the theorems themselves.In the gauge-theory soft theorem, the next-to-soft term involving the angular momentum operator is manifestly gauge-invariant for every i, while in the case of gravity the first two terms are gauge-invariant only after imposing total momentum and angular momentum conservation, i p µ i = 0 and i J µν i = 0, respectively, of the source.Only the third term is manifestly gauge-invariant for every i.This difference points out that the second term for gravity has a different origin than the corresponding term for gauge theory, despite their similarity in form.Our main objective is to investigate these differences and shed some light on the connection of different terms to gauge symmetry, providing a novel interpretation of the origin of the subleading terms.From this perspective the next-to-next-to-soft term in the gravitational soft theorem should be viewed as the analogue of the next-to-soft term in gauge theory, while the first two terms in gravity are related to the soft gravitational gauge symmetries.We explore and rederive the soft theorem using the effective field theory (EFT) formalism and demonstrate how the soft theorem follows directly from the structure of the subleading Lagrangian and its soft gauge symmetry.For gravity, the soft gauge symmetry consists of local translations and Lorentz transformations of a soft background field, which lives on the light-cones defined by the emitter particles.More importantly, we show that these theorems can be recast into an operatorial statement within the EFT formalism.
Soft theorems encapsulate factorisation between universal long-distance soft radiation and short-distance hard scattering.The separation of long-and short-distance effects is most conveniently formulated in the modern EFT language.A fascinating and advanced EFT construction is known as soft-collinear effective field theory (SCET) [17][18][19][20].SCET constructs an expansion of scattering amplitudes around the light-cone, describing energetic particles, collinear modes, and their interactions with soft modes.The fundamental role which gauge symmetry plays in the construction of the SCET Lagrangian has been recognized early on [20,27].However, although the subleading soft theorem for gauge theories has already been thoroughly discussed in the SCET context in [28] (see also [29]), the powerful implications of soft gauge symmetry have not yet been fully exploited.Beyond the leading term, the soft theorem in gravity has not yet been considered in SCET, since SCET gravity was not constructed beyond the leading power at all.The present work uses the construction of [22], simplified to tree-level, single-emission of soft gravitons, and highlights the many similarities and differences between soft-collinear gauge theory and gravity.
The paper is organised as follows: first, in Section 2, we provide an introduction to the basic formalism and notation in SCET.Although the main topic of this work is gravity, we then review the derivation of the gauge-theory soft theorem ("LBK theorem") for the emission from scalar, spinor, and vector particles in Sections 3 to 5, which serves to illustrate the main ideas.Rather than calculating the amplitude, we perform the derivation at the operatorial level by manipulating the Lagrangian until the angular momentum operator represented on fields becomes manifest.In Section 6 we derive the soft theorem in perturbative gravity.We show that the universality of the first three terms is a direct consequence of the SCET gravity soft gauge symmetry.In particular, the absence of soft source operators up to the next-to-next-to-soft order immediately implies the existence of three universal terms in the gravitational soft theorem.
Invitation
In this section, we set up the effective field theory notation, introduce the notion of the soft symmetry and discuss the matching of the non-radiative amplitude to its SCET representation.These preliminary constructions, in particular the non-radiative matching, are valid for both gauge theory and gravity.For the soft gauge symmetry and soft emission, there are differences, which we point out once relevant, and focus here on gauge theory.
It is important to note that SCET enters merely as a framework to separate the soft physics from the energetic, collinear physics of the particles generated by the hard process.SCET captures the respective soft and collinear limit of the underlying full theory, QCD or perturbative gravity, at the Lagrangian level, and its Feynman rules precisely reproduce the full-theory amplitudes in these limits.While the framework may appear very technical at first, it pays off due to the conceptual clarity of the field theory representation of the physics underlying soft and collinear processes.From the perspective of SCET, soft theorems are tree-level computations within an EFT that has much broader applicability.This means that once we understand the complicated notation, the soft theorem follows almost immediately from the effective soft gauge symmetry and the allowed currents and Lagrangian interactions by a simple computation.
To illustrate this point, we note that an energetic fermion with its large momentum p µ directed along the light-like vector n µ − interacts with a soft gauge boson through the effective Lagrangian [17,19,30] at leading order in the soft expansion.Similarly, for the soft graviton [21], The graviton coupling is given by κ = √ 32πG N in terms of Newton's constant.The structure of the leading term in the soft theorems (1.1), (1.3) is already manifest in the Lagrangians, which couples the energetic particle to the soft gauge field and graviton only proportional to the large momentum p µ ∝ n µ − .The content of the leading term in the soft theorem can now be stated in operatorial form as where the sum over i runs over the energetic particles created in the hard process.At this point, it is essential that soft gauge bosons and gravitons cannot be emitted directly from the hard vertex at this order in the soft expansion, since there are no source operators containing soft fields that would be invariant under the soft gauge symmetry.The entire radiative amplitude originates from the time-ordered product with the universal Lagrangian interaction.This guarantees the universality of the soft theorem, that is, its form is independent of the non-radiative, hard process.We shall show from the general building principles of the soft-collinear effective Lagrangians for gauge fields and gravitons that these considerations extend to the next-to-soft order for gauge-boson emission and to next-to-next-to-soft order for graviton emission.
In the following, we introduce a simplified and decluttered SCET notation that allows the non-expert to follow the discussion and makes our derivation more transparent.This notation is chosen specifically to work with tree-level single-emission processes in the context of soft radiation, and, in general, more care has to be taken.We refer to the literature for the general definitions [20,22,29,31].
Notation and structure of the Lagrangian
We consider processes involving a number of energetic particles, described by collinear fields ψ i , moving in different, well-separated directions, and low-energy radiation described by soft modes ψ s .The different collinear directions are specified by light-like vectors n i− .
One also introduces a corresponding light-like reference vector n i+ , such that n 2 i± = 0 and n i− n i+ = 2.With these vectors, we decompose the collinear momentum p µ i into It is implicitly understood that the transverse component for an i-collinear momentum is defined with respect to the n i± reference vectors.By construction, n i+ p i is the large component of the energetic particle's momentum.Thus, collinear momentum scales as where Q is some generic hard scale (often omitted in the following), and is the SCET power-counting parameter.With this counting, soft momentum components scale as k µ ∼ λ 2 uniformly.A next-to-soft term therefore corresponds to a subleading O(λ 2 ) term, and next-to-next-to-soft to O(λ 4 ).
Computations of scattering amplitudes in SCET require two classes of objects, the Lagrangian interactions, and sources -the N -jet operators.The SCET Lagrangian takes the form where i denotes the different collinear sectors, and s denotes soft fields.Notably, the Lagrangian only contains interactions within the same collinear sector, interactions between collinear particles and the soft background, as well as purely-soft self-interactions.There are no direct interactions between fields from different collinear sectors in the SCET Lagrangian.Such terms, where a hard vertex creates multiple particles belonging to different collinear sectors, are encapsulated in the so-called N -jet operators.Broadly speaking, these N -jet operators correspond to the non-radiative amplitude, while the Lagrangian interactions represent the soft emission from the external legs.However, in SCET this separation is organised in a manifestly gauge-invariant way.Indeed, an important consequence of this structure is that each collinear sector transforms under its own collinear gauge symmetry, and all fields transform under the soft gauge symmetry with transformations [20] i-collinear: where we introduced the soft-covariant derivative with homogeneous λ-scaling, and defined (2.10)
Soft gauge symmetry
We now take a closer look at the above gauge symmetries, which play an important role in our construction, and even more so in gravity.For the following discussion, we focus on the interactions of soft gauge fields with collinear matter fields, i.e. we do not discuss soft matter and collinear gauge fields, which are also present in the theory.The transformations (2.8) do not take the form of the standard gauge transformation in non-abelian gauge theory.We note the following two important properties: • Only the n − A s component of the soft gauge field appears in the covariant derivative (as is the case for the leading Lagrangian (2.1)).Hence, for collinear fields, only this component acts as a gauge field, but not the transverse and n + A s components.Moreover, n − A s appears as a background field in the collinear gauge transformation.
• The soft gauge field and soft gauge transformations in collinear interactions "live" on the light-cone or classical trajectory x µ − of the energetic particle.This important property of the soft gauge symmetry stems directly from the expansion in the parameter λ, since for a systematic and homogeneous power expansion in λ, the soft fields at space-time point x must be multipole-expanded as about x µ − in interactions with collinear fields.This expansion can be performed in a straightforward and systematic fashion [20].
The effective theory allows us to make the soft gauge symmetry manifest, which in turn provides an understanding and an interpretation for the individual terms in the soft theorems.Remarkably, a soft gauge symmetry with very similar properties but a generalised soft-covariant derivative also arises in the gravitational soft-collinear effective theory [22], as explained in Section 6, even though in full Einstein gravity, covariant derivatives are absent for scalar fields, which highlights the different nature of the soft gauge symmetry relative to the original one.For this reason, the soft gauge symmetry is an emergent one in the infrared.
To illustrate the different nature of the components of the soft gauge field, let χ c be a fermionic collinear matter field.Its interaction Lagrangian with soft fields is constructed as a power-series in λ, L = L (0) + L (1) + L (2) + O(λ 3 ) . (2.12) The leading piece takes the form and it contains the soft field only in the covariant derivative n − D s , as it must be, as this is the only soft gauge invariant term at this order.In fact, the soft covariant derivative only appears in this term, while all subleading soft-collinear interactions are expressed as couplings of the field-strength tensor F s µν and its covariant derivatives to the multipole moments of the fluctuations around the classical trajectory.For example, the first subleading interaction takes the form where j a denotes the Noether current, and a denotes an adjoint representation index.We can identify x µ ⊥ n + j a as part of the dipole moment.Both the leading and the subleading term are universal and appear in similar form for matter fields of different spin.The dipole term is relevant for the second, next-to-soft term in the soft theorem (1.1).
N -jet operators
In SCET, the source that generates the energetic particles, or "N -jet operator", depicted in Fig. 1, is represented by an operator that is non-local along the light-cones of the energetic particles.Schematically, we have where [dt] N = N i=1 dt i , t i ∈ [0, ∞) and X enumerates different possible operators.The collinear currents J i (t i ) are displaced along their respective light-cone direction, and constructed from collinear gauge-invariant fields, convoluted with the hard matching coefficients C X .J s (0) is a soft gauge-covariant product of soft fields.However, the soft gluon and graviton fields cannot appear in these N -jet operators up to order O(λ 4 ) and O(λ 6 ), respectively [22,32]. 2 This follows from the fact that the soft covariant derivative can be eliminated by applying the equation of motion for the collinear fields, and further because the field strength tensor F s µν (Riemann tensor in gravity) scales as O(λ 4 ) (O(λ 6 ) for the Riemann tensor).It is this simple consequence of soft gauge symmetry which implies that there is some form of universal soft theorem including a next-to-soft term in the gauge theory soft theorem, and a further universal next-to-next-to-soft term for gravity, as any soft emission up these orders must arise from universal Lagrangian terms, independent of the source for the energetic particles.
To derive the specific form of the soft theorem requires the subleading soft-collinear interaction Lagrangians, which will be discussed in subsequent sections, and the construction of a complete and minimal operator basis, using soft and collinear gauge symmetry [29,31,32].For our purpose -the derivation of the LBK amplitude or soft theorem, respectively -we can restrict ourselves to interaction Lagrangians that describe tree-level single soft-emission processes, and to N -jet operators with a single energetic particle in a given direction.This means that each J i contains a single collinear field Here, χ i is a collinear gauge-invariant matter (scalar, spinor, or vector) field, and A i⊥ is a collinear gauge-invariant gluon field.Subleading currents are constructed from this using the ∂ ⊥ derivative.Thus, the leading-power operator reads At subleading power, we further need the collinear building blocks and so on, up to J A4µ 1 µ 2 µ 3 µ 4 ∂ 4 χ † i (t i ) for the case of gravity.We use χ † i in the building blocks as we employ all-outgoing particle convention.The soft emission is then generated from the time-ordered products of the collinear currents with the soft-collinear Lagrangian interactions with n = 0, 1, 2 (up to n = 4 in gravity).
Non-radiative matching
In this section, we clarify how the non-radiative amplitude A appearing in the soft theorem (1.1) and (1.3) is related to the coefficients of the above N -jet operators.Although the main results, (2.24) and (2.27) below, are rather obvious, their precise formulation appears somewhat cumbersome.This is reminiscent of imposing momentum conservation in the traditional derivation [1,2].The technical nature of these expressions and the derivation is unavoidable in the EFT framework, and the reader should not get distracted by the technical details.The main result is that, roughly speaking, the N -jet operator corresponds to the non-radiative amplitude and its momentum derivatives order-by-order in the λ-expansion.
We first focus on a non-radiative amplitude of energetic spin-0 particles.The generalisation to the fermionic and vectorial matter particle amplitudes is straightforward and explained in the respective later sections.With only a single particle in a given collinear direction, it is always possible to align the reference vectors n µ i− with the collinear particle momentum p µ i , such that p µ i = (n i+ p i ) n µ i− /2.We will adopt this choice for the radiative amplitude.If the emitted soft particle carries away momentum k, one of the lines connecting to the hard vertex will have momentum p i + k, see Fig. 2 below, which is not aligned with n i− .We therefore have to perform the non-radiative matching for the general case where the transverse momenta p i⊥ ∼ O(λ) at the source are non-vanishing.
To connect the SCET operator to the full non-radiative amplitude, we perform tree-level matching, that is, we adjust the coefficients C X (t 1 , . . .t N ) of the N -jet operator (2.15) order by order in λ to reproduce the full amplitude.The latter depends on the scalar products p i • p j .Given the collinear scaling (2.5), these are expanded as Thus, we Taylor-expand the full amplitude in λ, with The leading term in this expansion must equal the matrix element of the leading-power SCET amplitude operator Â(0) defined in (2.17), hence In the last step, we introduced the Fourier transform of the position-space matching coefficient C A0 (t 1 , . . ., t N ).As mentioned before, there are multiple ways to generate next-to-leading power (NLP) currents.The only one relevant here are transverse derivatives acting on the collinear fields in the A0 operator.At O(λ), we have (2.25) The relevant power-suppressed N -jet amplitude operator is (2.26) The coefficient C A1 µ j (t 1 , . . ., t N ) can be obtained as before by matching A (1) with the matrix element of Â(1) : With the help of reparametrisation invariance (RPI) [35,36] of SCET or Lorentz invariance of the full amplitude, one can prove that this relation between the matching coefficients holds to all orders.Thus, the subleading contributions to the non-radiative amplitude are in principle completely determined already from the leading-power matching.To summarise, (2.24) and (2.27) state that, without soft radiation, the matrix element of the N -jet operators Â(i) corresponds order-by-order to the λ-expanded full-theory amplitude A (i) .
3 Soft theorem in scalar QCD In this section, we derive the soft theorem for emission from scalar particles within the SCET of scalar QCD.We further show that the essence of the theorem can be recast in an operatorial statement in the form of a time-ordered product of the non-radiative amplitude and a soft-emission vertex containing the angular momentum operator.To obtain this result, we identify the Lagrangian terms that contribute to the single soft-emission process, and manipulate them directly under the assumption that these terms are evaluated inside a matrix element.In this way, we can cast the interaction vertex in a form that directly yields the soft theorem.Crucially, this simple tree-level computation already demonstrates the universality of the soft theorem, i.e., the radiative amplitude A rad expressed entirely in terms of the non-radiative one, A, without any further calculations.
Set-up
The process we consider consists of N energetic particles moving in well-separated directions, each described by their own collinear Lagrangian, and one soft gluon.The collinear < l a t e x i t s h a 1 _ b a s e 6 4 = " m 9 4 L L H y m 8 y s f K 5 8 q X y t f q p + q 3 6 v f q j 9 n S a x t z z U P t y l P 9 + Q e J 5 P I s < / l a t e x i t > i@ ?p 1 L (1) < l a t e x i t s h a 1 _ b a s e 6 4 = " e u s U + g w x G h a 0 e t a R d W y d W K e W X 9 m s N C q w 4 l b 3 q h + q x 9 X P 8 6 m b G w v P a + v e q H 7 / C 9 w o w 5 s = < / l a t e x i t > suppressed Lagrangian.In the second diagram, both current and Lagrangian are suppressed by a single power of λ.There are no soft building blocks at order λ 2 ; hence, the third diagram does not contribute to the LBK amplitude in SCET.The leading-power emission would correspond to the first diagram with an L (0) insertion.
particles are described by complex scalar fields in some representation of SU (N ).As we do not consider additional collinear emission, we can drop collinear Wilson lines W c and directly work with the manifestly gauge-invariant scalar fields 16).Soft gauge symmetry imposes that there are no soft-gluon building blocks in the N -jet operator basis until O(λ 4 ), where the soft field-strength tensor enters.All building blocks appearing in the N -jet operator (2.15) are thus the collinear fields χ † c , considering the situation of outgoing particles.This means that the radiative amplitude is obtained entirely from time-ordered products of the non-radiative N -jet operator with Lagrangian interactions, visualised as emission off the legs in the first two diagrams in Fig. 2. Notably, diagrams of the third type, containing a soft building block from which the gauge boson can be emitted, are not possible at O(λ 2 ).Such diagrams would introduce processdependence, since the corresponding operators have short-distance coefficients unrelated to those of the non-radiative amplitude.The absence of such operators at leading and first subleading order in λ 2 allows us to focus only on a single collinear direction and the relevant Lagrangian insertion.The final result is then given by the sum over all collinear sectors, already matching the form of (1.1).
To declutter the notation, we omit the indices numbering the external legs, as well as irrelevant variables, e.g., we write C A0 (n 1+ p 1 , . . ., n N + p N ) = C A0 (n + p).As explained in Section 2.5, we align the collinear momentum with the corresponding reference vector, i.e. we have 2 for the external momenta.This choice can always be made without loss of generality.
The Lagrangian is defined as a power series in λ and the leading piece is given by where n − D s is given in (2.9).The relevant parts of the subleading Lagrangian can be expressed in terms of the Noether current3 and soft field-strength tensors F s µν , such that with Here, we introduced the symbol = ∧ , which indicates that we omitted terms that do not contribute to the specific tree-level matrix elements considered here, i.e. with single soft emission, no collinear emissions, and ⊥-component of the external collinear particle momenta set to zero.In particular, this implies that we can drop the collinear Wilson lines W c and do not need to distinguish between the gauge-invariant building block χ c = W † c φ c and the collinear scalar field φ c .The form of the subleading Lagrangian, which contains x µ and ∂ µ explicitly due to the multipole expansion, already resembles the angular momentum operators appearing in the LBK theorem.There are, however, subtleties related to the action of the derivatives, which we explain in the following.
The actual derivation of the soft theorem now reduces to the computation and manipulation of three contributions, namely where k denotes the soft gluon momentum.All other time-ordered products vanish.These three contributions correspond to the insertion of leading-power and subleading power soft gluon emissions in the external legs, as depicted in Fig. 2.
In these computations, the collinear matrix element is proportional to the universal contraction which reproduces the eikonal propagator in the LBK theorem.We included the factor in + ∂ acting on the χ c (x) field to ensure that, for the specific kinematics chosen here, both n − x and x µ ⊥ vanish for the universal contraction term: where the last line vanishes by virtue of δ (2) (q ⊥ ) after integration by parts.
In momentum space, explicit factors of x turn into derivatives with respect to the momentum, and (3.13), (3.14) ensure that these derivatives do not act on the eikonal propagator but only on the hard matching coefficients.This is in line with (1.1), where the angular momentum operator only acts on the non-radiative amplitude.Further, we note that our choice of external p µ ⊥ = 0 allows us to neglect any terms with ⊥-derivative acting on the χ † c (x) field (but not when acting on χ c (x)), since we adopt the outgoing-particle convention, such that χ † c (x) is contracted with the external state, whereas χ c (x) is contracted with the N -jet operator.In summary, the choice of contraction (3.12) guarantees that collinear particles have an eikonal propagator.Thus, the momentum derivatives corresponding to the explicit n − x and x ⊥ appearing in the subleading interactions can be moved past these propagators, and only act on the hard matching coefficient, i.e. on the non-radiative amplitude.
With p µ i⊥ = 0 and p 2 i = 0, which implies n i− p i = 0, the orbital angular momentum appearing in (1.1) simplifies to where we defined the anti-symmetrisation n A (1) . (3.16) We note that A (0) depends only on n i+ p i , hence the derivative with respect to p i⊥µ gives the first non-vanishing contribution only when acting on A (1) .In the following, we derive the operatorial version of these terms directly from the Lagrangian, and show the equivalence to (3.16) subsequently.
Leading-power term
The leading term is given by a time-ordered product (3.9) of the leading current Â(0) with the leading-power Lagrangian (3.2) After integration by parts, we identify the eikonal interaction The radiative amplitude is then obtained from the time-ordered product We see that the leading eikonal term in (3.16) is simply due to the fact that in the eikonal interaction (3.17),only n − A s appears in combination with n + p, consistent with the soft gauge symmetry.
Subleading-power / next-to-soft term
Next, we consider the subleading terms and rearrange each term to show the form of the LBK theorem manifestly.For L χ , a single integration by parts shifts the n + ∂ from the χ † c to the χ c field such that and (3.12) and (3.14) can be applied.This ensures that so there is no contribution at O(λ), consistent with (3.16).
At O(λ 2 ), there are two contributions -one from the time-ordered product T { Â(0) , L χ } and one from T { Â(1) , L χ }, see (3.10) and (3.11) and Fig. 2, corresponding to the terms proportional to A (0) and A (1) , respectively.First, we discuss the contribution from L χ .We integrate-by-parts the ⊥-derivative acting on the χ c field and find The second term vanishes due to the anti-symmetry of F s νµ .We can drop the terms proportional to ∂ ⊥ χ † c (x), which vanish with our choice p µ ⊥ = 0. Hence Similarly, we get Finally, we focus on the L (2c) χ term.First, we use and drop the traceless term in the bracket, which does not contribute when external ⊥momentum is set to zero.We can then replace where we used the equation of motion for the soft gluon and dropped non-linear terms in A s , which only contribute to multiple-emission.This leads to At this point, we integrate-by-parts the n − ∂ derivative and use the equation of motion for the collinear field, It is justified to use the free equation of motion because we neglect terms that contribute to multiple collinear and soft emissions.In the last step, we integrate-by-parts to remove ∂ ⊥ acting on the internal line, and we get This term cancels the last term in (3.23) from L χ , and in summary we find To identify the subleading term in the Lagrangians derived above with (3.16), we give the form of the angular momentum operator (3.15) explicitly in position space: The dots after the first equality represent terms that vanish for p µ ⊥ = 0. We can therefore express the subleading Lagrangians (3.19), (3.28) in terms of the orbital angular momentum operator as The term in L χ contributes in the time-ordered product with Â(1) given in (2.26), as the ∂ ⊥ of the A1 operator acts on the x ⊥ in the Lagrangian.This, in combination with the form of the A1 coefficient given in (2.27) shows that the matrix element yields the p ⊥ -derivative of the non-radiative amplitude A, namely A (1) .
This contribution reproduces the last term in (3.16).
In summary, we find an operatorial version of the next-to-soft term in the LBK theorem where in the second line, we combined the source and Lagrangian terms using (3.20) and = ∧ 0 .The sum Â(0) + Â(1) = Â + O(λ 2 ) represents the non-radiative amplitude expanded up to order λ. 5 We can view (3.33) as a new and fully equivalent representation of the content of the soft theorem: in the absence of collinear radiation, the single soft-emission amplitude follows directly from the time-ordered product of the nonradiative amplitude and an interaction vertex containing the angular momentum operator.
Recovering the LBK amplitude
To verify the operatorial form and recover the LBK amplitude (1.1), we transform the expression to momentum space and evaluate the time-ordered product in the matrix element.
We can obtain the expansion of the non-radiative amplitude in a generic reference frame using (2.24) and (2.27) to be where C A1 can either be computed from the generic non-radiative matching, or from the leading order coefficient and the RPI relation (2.28).With p µ i⊥ = 0, the entire non-radiative amplitude is given by A (0) , and the suppressed terms are only relevant for the angular momentum term, where the derivative ∂ ∂p i⊥ can act on left-over p i⊥ .Evaluating the leading-order expression explicitly, we find where we used the universal contraction (3.12) and (2.24), which relates the leading-power matching coefficient to the non-radiative amplitude.This verifies the operatorial form of the leading LBK theorem (3.18).For the subleading term, we evaluate In the first line of (3.36), the operator L µν is assumed to be represented in position space (3.29), while in the second line, we assume momentum space representation for L µν , see (1.2).Finally, we sum over all collinear directions and recover the radiative amplitude (1.1).
Since we already showed that the operatorial statement (3.33) is universal, and we computed the relation for an arbitrary non-radiative amplitude A, this proves the LBK theorem.As we can see, working directly within the EFT allows for a short and simple derivation of the LBK theorem, since due to the multipole expansion, the effective Lagrangian already contains all the elements of the angular momentum operator.The derivation gives an intuition for the two universal terms based on the effective gauge-symmetry, where the first one stems from the gauge-covariant derivative and is thus not manifestly gauge-invariant.
The second term originates from the subleading interactions that are expressed in terms of the field-strength tensor.We can also immediately see that a third term would no longer be universal.Here, soft building blocks make an appearance in the operator basis, and the computation becomes process-dependent.
In view of the subsequent discussion of the gravitational soft theorem, we emphasise the following connection between the structure of the soft-collinear effective Lagrangian and the soft theorem for gauge theory: The effective Lagrangian contains the covariant derivative n − D s (x − ) of the background gauge field n − A s (x − ) only at leading power.All subleading interactions are expressed in terms of the gauge-invariant field strength tensor, multiplied by explicit factors of position x µ from the multipole expansion of soft fields around the classical trajectories of the energetic emitters.The covariant derivative interaction is related to the LP eikonal term in the soft theorem, which is gauge-invariant only after summing over all emitter directions, assuming charge conservation of the non-radiative process.All subleading terms are gauge-invariant, since the interactions involve only F s µν .However, universality ends at the NLP, since in higher powers there exist source operators containing soft field products invariant under the soft gauge symmetry, which have coefficient functions unrelated to those of the non-radiative process.The coupling to the angular momentum operator arises naturally from the dipole terms in the multipole expansion, when the full non-linear Lagrangian is restricted to single emission at tree level.
The following two sections explain how the universal spin term arises for soft gauge-boson emission.Readers mostly interested in the gravitational soft theorem may jump directly to Section 6 from here.
Fermionic QCD
The derivation of the soft theorem in the fermionic case is very similar to the scalar case.The main difference lies in the subleading term, where the angular momentum operator contains an additional spin contribution.We briefly discuss the derivation of the orbital momentum part and then focus on the spin part.
In the effective theory, one splits the full-theory spinor field ψ c = ξ c + η c and works with the leading component ξ c , integrating out the subleading component η c .The leading component satisfies the projection property which implies / n − ξ c = 0.This is similar to non-relativistic spinors, where one only keeps the leading two components of the spinor field.This also means that in the amplitude, we have to Taylor-expand the external spinors u(p i ), as explained below.
In the N -jet operator (2.15), the building block is now the gauge-invariant fermionic field χ c .As the fields now carry a spinor index, which must be contracted to form Lorentz scalars, the matching coefficients C A0 (t 1 , . . ., t N ) and C A1 µ (t 1 , . . ., t N ) become tensors in spinor space.For the sake of a simpler notation, we omit the spinor indices.Besides this change, the notation set up in Section 2 is still valid.This is one advantage of the SCET formalism: it works for collinear matter fields regardless of their specific spin or gauge representation.
Non-radiative matching
We proceed with the non-radiative matching, following the outline in Section 2. The LP matching (2.24) now reads Thus the C A0 (n 1+ p 1 , . . ., n N + p N ) can be understood as the LP amplitude with the external spinors stripped off, Here ξ c i (p i ) is an i-collinear spinor obtained from the expansion of the full QCD spinor The subleading current can be matched like in the scalar case (2.27).However, now we take into account the subleading term in the external QCD spinor expansion (4.4).This additional term is also universal and follows from reparametrisation invariance.Consequently, for spinors, the relation (2.28) reads Following the split performed in the last line, we also split the NLP operator into the orbital and spin parts proportional to C As for the scalar case, we find the universal contraction (3.12), which now reads Thanks to the different normalisation of fermionic fields, we do not need to include n + ∂ derivative to achieve (3.13) and (3.14).
Soft theorem
Like in the scalar case, there are no soft gluon building blocks available for the N -jet operators, and all contributions must stem from time-ordered products.As in Section 3, we focus on a single external line and choose the kinematics where all p i are aligned with their reference vectors n i− , i.e., p µ i⊥ = 0.The Lagrangian for fermionic SCET is [20] L (1) As before, we neglect non-contributing terms and split the Lagrangian into several parts that we discuss one by one with analogues of the scalar counterparts (3.4) and a new, spin-dependent part The spin operator Σ µν is expressed in terms of the light-cone components where, after = ∧ , we dropped the terms that vanish due to the projection property (4.1) of collinear spinors and our choice p µ ⊥ = 0.The derivation of the orbital angular momentum term is almost the same as in the scalar case.As before, we find that ξ (x) agrees with the first term in (3.23), however, the second term is absent, which implies that the contribution from L (2c) ξ (x) does not cancel.Instead, L (2c) ξ (x) supplies the longitudinal components of the spin as shown below.Thus, we find the orbital angular momentum as in (3.33), Let us focus on the spin part in the soft theorem.We see that the first term in (4.14) is directly given by T { Â(0) , L (2s) ξ } , while the last term is obtained from the remaining part L (2c) ξ .Following the same manipulations, which in the scalar case led us to (3.27), gives In summary, the second-order Lagrangian takes the form spin , (4.17 where the orbit part (4.12) and spin parts (4.13), (4.16) are manifest Note that here, in both the orbital and the spin part, the mixed transverse-longitudinal terms are missing.Like in the scalar case, the missing orbital term is reproduced by the contribution from ξ }.For the spin term, notice that the Â(0) operator contains the projection This projection eliminates the mixed transverse-longitudinal contribution, as This missing mixed term is related to the new contribution due to T { Â(1) spin , L ξ } .It is important here that the operator Â(1) spin is completely fixed by the RPI relation (4.5) and thus determined solely by the non-radiative amplitude.Using translation-invariance of the propagator and the fact that the SCET fermion propagator anti-commutes with γ µ ⊥ , we derive Summing the contributions from T { Â(0) , L spin } using (4.19) and T { Â(1) spin , L ξ } given in (4.22), we recover the spin term in the LBK theorem, We replaced the matching coefficient C A0 by the stripped non-radiative amplitude according to (4.3), and our choice of kinematics implies A = A (0) for the spin-dependent term, since it does not contain any x-dependent terms.In addition, we used the projection properties (4.1), (4.21) to pull out the full Σ µν .After evaluating the matrix element, the term inside the bracket becomes equal to the eikonal factor, similarly to our universal contraction (4.7).
Vectorial QCD
We now extend the treatment to the case of vector matter and show in particular how the spin-1 term arises.While the expressions look quite different at first, the situation is remarkably similar to the fermionic case.We consider a complex6 vector field V µ c in some representation of SU (N ), but in principle the discussion also holds for collinear gluons as emitting particles.The vector components scale as in the SCET power-counting parameter.The vector field must come with its own gauge symmetry, which we call V c -gauge, to define a consistent theory.However, the details of this gauge symmetry as well as the collinear gauge symmetry are irrelevant for the soft theorem, and either by explicit gauge-fixing, or by Wilson-line constructions similar to the case of the gluon, we can define a manifestly V c -gauge invariant field.It is only necessary that this vector transforms like a matter field under the soft gauge transformation.To stress this, we define the gauge-invariant field V µ c , which satisfies n + V c = 0 and only transforms under the soft gauge symmetry.It is advantageous to work with this gauge-invariant vector field.First, since n + V c = 0, there is no O(1) building block, and the first possible collinear vector operator scales as V c⊥ ∼ O(λ).Second, the n − V c component is subleading.We can express this component in terms of the leading V c⊥ using the equation of motion as Thus V c⊥ is the analogue of the spinor ξ c in Section 4, and (5.2) is the analogue of the spinor-relation (4.4).The crucial difference to the fermionic theory is that we write the Lagrangian in terms of the original field V cµ .For the actual computations, this means that V cµ should be expressed in terms of the original field V cµ .To linear order, they are related as
Non-radiative matching
Again, we consider first the non-radiative matching, following the outline in Sections 2 and 4. In the operator basis, we only have V c⊥ as a building block, which enters at O(λ).
The LP matching (2.24) now reads (5.4) In the vectorial case, there are a few subtleties to note: First, the amplitude A (0) is written in terms of the full-theory polarisation vectors ε α , which are not necessarily restricted to transverse components, whereas the N -jet operator contains the transverse building block V c⊥ and the corresponding polarisation tensor εc i α i⊥ , where the first index refers to the collinear direction, and the second is the Lorentz index.These vectors are related via (5.3) as Next, the full-theory amplitude A α 1 ...αn is a rank N tensor in Minkowski space, with one index per external polarisation tensor.Thus, also the matching coefficient C A0 is a tensor, indicated by the brackets.At leading order, only the ⊥ components are relevant.At subleading order, the other components are also relevant, as will be see in the C A1µ relation (5.8) below.Hence, we do not restrict the indices of C A0 to be transverse, but rather write an explicit η αβ ⊥ to indicate this projection.This is essentially the analogue of the projection properties (4.1) of the spinor ξ c in Section 4. Just as in the fermionic case (4.4), the subleading component of the polarisation vector is related to the leading one via (5.2) as (5.6) Thus, as in Section 4, the C A0 β 1 ...β N corresponds to the LP amplitude with external polarisation vectors stripped off, (5.7) The subleading current is related to the leading current in a similar fashion as in the fermionic case presented in (4.5).Here, we again find the spin-independent contribution present already in the scalar case (2.28), as well as the contribution from the subleading n − V c component, similar to the spin part in (4.5).The relation reads (5.8) Here the coefficients of the A1 current are contracted as C A1µ ...α i ... i∂ ⊥µ V α i⊥ c i .Note the similarity of the first term to the spin-term in (4.5).We define the corresponding Â(1) orbit Â(1) spin accordingly as in (4.6).Just as in the scalar case, the universal contraction (3.12) of the original V cµ fields should be defined with n + ∂ acting on V cµ , and it is now given by where the soft momentum is denoted by k and we use the propagator in Feynman gauge.
With this understanding we simplify the cluttered notation by dropping the Lorentz indices due to contractions with polarisation tensors in the following.We keep the ones which are related to contractions with derivatives, e.g. in C A1µ , essentially using the same notation as in Section 4, always keeping in mind the fact that C A0 is actually a rank-N tensor.
Soft theorem
It is now straightforward to derive the soft theorem.Again, following the discussion in Section 3, we make use of the fact that there are no soft gluon building blocks available, that we can consider a single leg and sum over the individual contributions, and we can choose The relevant part of the soft-collinear Lagrangian for the complex vector field can be conveniently expressed in terms of the linear current, just as in the scalar case.We define the current (5.10) The first two terms look very reminiscent of the scalar Noether current (3.3), while the last two terms are new, and are relevant for the contributions to the spin operator.Note that these terms also contain the linear terms of the interaction F µν A V † cµ V cν by virtue of integration by parts.Expressed in terms of the current, the Lagrangian contains the same structures as in the scalar and fermionic cases (3.4), (3.5) and (4.10), (4.11), respectively, and is given by V . (5.11) The leading term reads and the subleading linear interactions are with (5.17) In this form, we see the complete analogy to the scalar case (3.4).However, we can also rearrange the terms to bring them into a form similar to the fermionic Lagrangian.To achieve this, we have to abandon the compact notation in terms of the Noether current and write the terms explicitly.
To find the analogue of L (2s) ξ in (4.13), we manipulate We used integration by parts, dropping terms proportional to p µ ⊥ , the equation of motion, and ∂ µ V µ = ∧ 0, and introduced the spin-1 operator (Σ µν ) αβ = η µα η νβ − η µβ η να , decomposed as where we already neglected the components that do not contribute in the following.Just as for the fermionic case, we see that we can explicitly read-off the transverse contribution to the spin angular momentum.For L V , we find, using the same manipulations as for the scalar case (3.27) where we defined the short-hand notation F s a +− = n µ + n ν − F s a µν .This term does not have any interpretation, and it does indeed cancel out with some parts of L (2b) V .Together, we find The first term generates the orbital angular momentum component L +− .The second can be rewritten as and is related to the longitudinal part of the spin term.We now observe that we can rewrite L (2) V in the same form as in the fermionic case: generates the orbital angular momentum, and we have an explicit spin term L (2a) V .In summary, we find after these manipulations and we see that the first term generates the term in the orbital angular momentum proportional to n + ∂n − x, just as in the scalar (3.28) and fermionic (4.18) case.The second term contains two of the three spin terms, like for the fermion (4.19).The two missing terms are the two mixed transverse-longitudinal terms, namely the orbital angular momentum piece proportional to n + ∂x µ ⊥ and the spin piece proportional to Σ µν ⊥+ .Just as in the scalar and fermionic cases, these missing pieces stem from the time-ordered product T { Â(1) , L V }.Indeed, we can rearrange L V as follows, which gives a non-vanishing contribution only in the time-ordered product with the A1 current.The second term in the first line of (5.24) does not contribute at O(λ) in T { Â(0) , L V }, since it gets projected out by the transverse η ⊥ (cf.(5.4)) and can only give a non-vanishing contribution with the A1 current.In this case, however, it vanishes after setting p µ ⊥ = 0.The first term is essentially the same as the scalar L ξ (4.10), and it is this term that generates the orbital piece time-ordered product with Â(1) orbit .Just as in the fermionic case (4.22), the time-ordered product with Â(1) spin then yields the missing transverse-longitudinal spin term.
In summary, the terms contributing to single-soft emission in the Lagrangian are cast in the form spin , (5.25) with the soft-collinear interaction Lagrangians given by
L
(1) We note the similarity to the fermionic interactions (4.18), (4.19).Combining all contributions, we the subleading next-to-soft term in the soft theorem is orbit + L (2) spin and we recover the same structure as for the scalar (3.33) and fermionic (4.23) case.Although we shall not pursue this further here, the striking similarity of the structure of the derivation of the fermionic and vectorial case suggests that this procedure and the corresponding operatorial statement can now be generalised in a straightforward fashion to matter fields of arbitrary spin.
Soft theorem in gravity
Now that we thoroughly discussed the gauge-theory case, we proceed to the gravitational soft theorem.In gravity, there is an additional next-to-next-to-soft term.Hence, when deriving the gravitational soft theorem, we go to O(λ 4 ) to obtain the three universal terms.
In the following, we focus on the gauge principles underlying the terms contributing to single soft emission, i.e. terms linear in the soft graviton field s µν .This greatly simplifies the discussion, as the complicated structure of the non-linear interactions, as well as interactions between collinear and soft gravitons are avoided.For the general, non-linear soft graviton interactions, we refer to the companion work [22].
One major difference to gauge theory is that in gravity the gauge transformations are inherently inhomogeneous in λ.This is due to the fact that the components of collinear momenta scale differently in λ, and the momentum generates the gravitational gauge symmetry, local translations.Whereas the homogeneous gauge symmetry was the guiding principle in the construction of soft-collinear gauge-theory beyond leading power, in the gravitational case we have to relax this constraint and find a gauge symmetry that respects the soft multipole expansion.In the following, we briefly review the salient features of SCET gravity [22].We consider a minimally-coupled complex scalar field to make the connections to the previous sections transparent.In this section, we use the short-hand notation n α ± A αβ... = A ±β••• for the contractions with the collinear reference vectors.
Soft gravity
In the effective theory, the infinitesimal emergent soft gauge transformation consists of two parts [22], a local translation and a local Lorentz transformation.Under these, the collinear field transforms as where ω αβ s (x − ) is related to the derivative of the full-theory parameter ε s (x) as and x µ − is defined in (2.10).Let us stress that, since the soft gauge symmetry lives only on the classical trajectory x µ − of the energetic particle, the parameters ε s and ω s must be viewed as independent objects, as the latter is evaluated at x − only after taking the derivatives.Hence, we already anticipate that there are two independent gauge fields.These fields can be used to define a soft-covariant derivative n − D s , which is non-linear in the soft graviton field s µν .To first order in s µν , this derivative is given by where we introduced the angular momentum Just as their corresponding gauge parameters, the soft field s µ− (x − ) and its derivative [∂ α s µ− ] (x − ) are independent objects in the effective theory.We can thus truly interpret them as independent gauge fields, even though they stem from the same full-theory field, which couple to momentum and angular momentum, respectively.It is remarkable that the soft sector provides a soft-covariant derivative quite naturally, even though the minimallycoupled scalar field does not contain a gravitational-covariant derivative in the full theory.We can appreciate the strong similarities to the gauge-theory case outlined in Section 2.3.Besides this soft-covariant derivative, the subleading interactions are expressed entirely in terms of the Riemann tensor and its derivatives, in analogy to the gauge-theory case (3.4), where the subleading interactions are expressed in terms of the field-strength tensor F s µν .The full Lagrangian containing all relevant terms up to O(λ 4 ) is given in Appendix A.1.We focus on the terms up to O(λ 2 ) here for brevity.The Lagrangian for a complex gaugeinvariant scalar field χ c can be conveniently expressed as where we introduced the energy-momentum tensor In this form, we see quite transparently the coupling of s µ− to the energy-momentum tensor T µν , as well as the coupling of its derivative ∂ [α s β]− , which is an independent gauge field in the effective theory, to the angular momentum density However, the Lagrangian (6.6) is not homogeneous in λ, as the scaling of the collinear momenta leads to inhomogeneous contractions between soft fields and collinear derivatives.
Expanding in powers of λ, we find where In the soft theorem, the structure of the Lagrangian (6.6) manifests itself as follows: The leading interaction generates the eikonal term where the first factor p µ is due to the coupling, and the second factor is the eikonal propagator.ε µν denotes the polarisation tensor of the emitted graviton.The next interaction in (6.6) generates the subleading term where we can see that the coupling to the angular momentum in the Lagrangian already provides the correct form.Here, just as in the gauge-theory case, the term counts as O(λ 2 ), and the terms in L (1) only contribute in time-ordered products with suppressed N -jet operators, like T { Â(1) , L (1) }.These first two terms are the exact analogues of the leading eikonal term in QCD, and they also stem from the gravitational covariant derivative.Finally, the Riemann tensor terms which are present in L (2) , L (3) and L (4) generate the subsubleading term 1 2 .15)which starts to contribute at O(λ 4 ).Here, one factor of J is due to the coupling, while the second factor is from the eikonal propagator in combination with the explicit x ⊥ , just as in the second term in QCD.In the following, we make these ideas explicit and derive the soft theorem to O(λ 4 ), following closely the derivation in the gauge-theory case.
N -jet operators
Most of the concepts introduced in Section 2 can be carried over to the gravitational case, but there are some differences.The first difference to the gauge-theory case concerns the N -jet operator (2.15).For gravity, these operators are defined in a translationally-invariant fashion as where Â(0) is the same N -jet operator as defined in (2.15), and T x denotes a translation to the point x.Once we evaluate the matrix element, the integral over x turns into the momentum-conserving δ-function.We can therefore adopt the convention that we drop the integral over x, work with the unintegrated N -jet operators (6.17) as in the previous sections, and impose momentum conservation by hand in the final amplitude.This simplifies the notation greatly.The operator basis can be constructed analogously to the QCD case, and the generic building blocks are given by the analogues of (2.16).Note that this time, the A2 building block is relevant for the O(λ 4 ) contributions, as it is related to the second derivative of the non-radiative amplitude, 7 and the amplitude is expanded as The soft building blocks differ slightly from the gauge-theory case.In gravity, the covariant derivative n − D s contains two independent gauge fields, one linked to local translations and the other to local Lorentz transformations.As in the gauge-theory case, this covariant derivative can be eliminated using the equations of motion.The next allowed soft building block is then the Riemann tensor R s µναβ , the analogue of the field-strength tensor F s µν .However, the Riemann tensor contains two derivatives of the soft field, and thus counts as O(λ 6 ).Hence, in gravity, there are no soft graviton building blocks in the operator basis until O(λ 6 ).In other words, the first three terms in gravity are universal for all processes, and only at O(λ 6 ), process-dependent building blocks can enter.This already proves that the gravitational soft theorem contains three universal pieces, including a next-to-next-to soft term, and we determine them in the same fashion as the gauge-theory case presented in Section 3.
To sum up, the entire soft emission process up to O(λ 4 ) can be described with purely collinear building blocks (2.18) and time-ordered products with the Lagrangian.The nonradiative matching works exactly as discussed in Section 2, and there is no adaptation needed.
We can now proceed with the derivation of the soft theorem.This derivation is completely analogous to the one presented in Section 3, but we extend the discussion and go to O(λ 4 ) to also find the universal sub-subleading (next-to-next-to-soft) term.For the scalar field, the gravitational soft theorem (1.3) is given by We make use of the same manipulations and choices as we did in the previous sections, explained in detail in Section 3.1.Most importantly, we choose a reference frame where the external collinear momenta satisfy p µ ⊥ = 0 and make use of = ∧ to drop terms that do not contribute to the single soft-emission matrix element.Due to the absence of soft graviton building blocks, we can consider the emission off a single leg.In the following, we present the terms contributing to the soft emission and skip most of the computations, which are essentially the same as performed in detail in Section 3.3.For the interested reader, the full Lagrangian contributing to single soft emission up to O(λ 4 ) as well as details regarding the computation can be found in Appendix A.
Leading-power term
Just as in the gauge-theory case, the leading contribution can already be read-off from the Lagrangian.It is given by the time-ordered product of the leading current Â(0) with the leading-power Lagrangian, which we rewrite as Here, the first n + ∂ is the analogue of the colour-generator t a in QCD, and the second n + ∂ generates the eikonal propagator, as in Section 3, (3.17).We obtain the leading contribution to the radiative amplitude as where we can immediately read off the eikonal term proportional to n + ∂ (s −− n + ∂χ c ).Note that s −− depends only on x − , and hence n + ∂ commutes with s −− .
Subleading-power / next-to-soft term
Just as in the gauge-theory case, with p µ ⊥ = 0, there is no contribution at O(λ) in (6.21), and the subleading term, given by κ 2 enters at O(λ 2 ).The angular momentum is given by the two terms in (3.29) and these two terms are reproduced analogous to the scalar QCD case, the first one stemming from T { Â(0) , L (2) } and the second one from T { Â(1) , L (1) }.
First, we check that there is no contribution from T { Â(0) , L (1) }: In L (1) , (6.11), we integrate by parts to write it as and we see that there is no contribution at O(λ), as However, just as in the gauge-theory case, there is a non-vanishing contribution of the second term from the time-ordered product with Â(1) .We can split L (2) in a similar fashion as in the gauge-theory case (3.4) as Using integrations by parts and p µ ⊥ = 0, we find for the first contribution which vanishes by symmetry.For L (2b) , no additional manipulations are needed and we find the L µν +− orbital angular momentum term Next, we have L (2c) with the two x ⊥ factors.Again, we write x α ⊥ x β ⊥ in terms of a trace and a traceless part to get where we dropped non-linear terms in s µν .With η αβ ⊥ R s α ⊥ −β ⊥ − = η αβ R s α−β− and using the source-less equation of motion R s −− = 0, we find that this term vanishes.At the linear order in s µν , this is equivalent to working with the transverse-traceless external polarisation tensor.Finally, L (2d) = ∧ 0 because p µ ⊥ = 0.In summary, the subleading Lagrangian is expressed as where we identified the angular momentum using (3.29) and defined the short-hand notation where χ In this form, we see the universal contraction and the coupling to the angular momentum.Alternatively, we can write e.g.(6.35) as where we immediately recognise the structure of (6.25).
The operatorial version of the subleading term is then given by Upon evaluating the matrix element, we find the eikonal propagator and a coupling to the angular momentum.Unlike the gauge theory case (3.33),where the field-strength tensor F s µν appeared, this time the subleading term takes the form of an eikonal term, just as the leading term, where the analogue of the charge is given by the orbital angular momentum L µν , and the gauge field corresponds to ∂ [µ s ν]− .Indeed, this term is linked to the second group of terms in the Lagrangian, which are coupled to the angular momentum tensor, and is only gauge-invariant once we impose angular momentum conservation.Hence the twofold soft gauge symmetry immediately implies two eikonal terms in the soft theorem.Only the third term is then expressed in a manifestly gauge-invariant fashion, via the Riemann tensor.
Sub-subleading / next-to-next-to-soft term
We proceed with the derivation of the next-to-next-to-soft term.We highlight only the main points of its derivation and relegate the detailed computation to Appendix A.3.There is no contribution at O(λ 3 ) for p µ ⊥ = 0, so we turn to the sub-subleading term from (6.21), κ 4 When evaluating an on-shell amplitude, the two angular momenta L µν can be taken to act only on the amplitude [7].In position space, at the Lagrangian level, it is convenient to thus define the combination Expanding the product of angular momenta, we find four terms given by Identifying the linear single-emission terms in the Riemann tensor as we can write These three terms originate from three different time-ordered products, which we recognise from their explicit x-dependence.The first term, containing (n − x) 2 , contributes in the time-ordered product T { Â(0) , L (4) }, so this term has to be identified inside L (4) .The second term contains one x ⊥ , so it yields a non-vanishing contribution only with at least one ∂ ⊥ inside the current, i.e. it contributes inside T { Â(1) , L (3) }.Finally, the last term contains two factors of x ⊥ , so it needs to act on ∂ 2 ⊥ to give a contribution.It appears in the product T { Â(2) , L (2) }.We can immediately identify the relevant terms in the respective Lagrangians.After some manipulations (details in Appendix A.3), we find and the sub-subleading-power term of the soft theorem can be cast into the operatorial statement which exhibits the desired form.In the last line, we transformed the left-right angular momenta ← L µν → L αβ into the standard form L µν L αβ using on-shell properties and equations of motion similar to the proof given in [7].
In summary, we see that all three terms of the soft theorem can be cast into an operatorial statement.In view of the derivation provided here, the individual terms acquire a new interpretation.The first two terms (6.24), (6.39) take the form of an eikonal term and generalise the leading term (3.18) in gauge theory.This follows because the effective theory for soft-collinear gravitational interactions contains two soft background-gauge fields, one, s α− (x − ), coupled to the momentum, and another, ∂ [α s β]− (x − ) to the angular momentum.These two gauge fields appear in the soft-covariant derivative of soft-collinear gravity, and in turn these interactions determine the first two terms of the soft theorem.This explains why in the gravitational soft theorem, the first two terms are gauge invariant only after summing over all emitters, and assuming the conservation of the corresponding charges, momentum and angular momentum, in the non-radiative process.All further subleading soft-collinear interactions can be expressed in terms of the gauge-invariant Riemann tensor at x − .However, unlike in gauge theory, the Riemann tensor contains two derivatives and arises only at second (quadrupole) order in the multipole expansion, that is, at subsubleading order.(In gravity, the dipole terms are related to the gauge field coupling to angular momentum.)The universality of soft emission ends at this sub-subleading order, since in higher powers there exist source operators containing soft field products involving the soft Riemann tensor invariant under the soft gauge symmetries, which have coefficient functions unrelated to those of the non-radiative process.The two factors of angular momenta in the sub-subleading soft theorem are seen to have different interpretations.One factor is related to the charge of the soft theorem, similar to the one in the subleading term, while the second relates to the coupling to the Riemann tensor, similar to the appearance of J µν from the coupling to F s µν in the gauge-theory soft theorem.In this way, the gravitational soft theorem is directly linked to the structure of the soft-collinear gravity Lagrangian, restricted to single-emission at tree level.
Loop corrections to the soft theorem
Both the gauge-theory and the gravitational soft theorem are modified by loop corrections [28,37].However, in gravity, the structure of these modifications is quite different, as can easily be seen from power-counting in the EFT perspective.
In SCET, loop contributions arise from three different loop momentum regions, the hard, the collinear and the soft region, corresponding to the hard, collinear and soft modes in the effective theory.The hard modes are integrated out, thus the contributions of the hard loops are inside the matching coefficients C X (t i ) and part of the non-radiative amplitude.
< l a t e x i t s h a 1 _ b a s e 6 4 = " N t V Q v I y b n R a 7 C Z k 3 3 P l P q m B X e 9 I = " u e X u t n Q / t z a O 3 0 x a t W R v W S 2 v b 8 q x 9 6 8 h 6 b x 1 b J x a t f a 5 9 r X 2 r f a 9 / q f + o / 6 z / m k x d X Z n m v L D m R v 3 3 P w v / u P I = < / l a t e x i t > This means that a, b ≥ 2 and the soft loop is suppressed by λ 4 .In addition, the soft loop is scaleless unless the graviton is emitted via a purely-soft vertex, as depicted in the last diagram.This process is further suppressed by c ≥ 2, yielding a total suppression by λ 6 .
Hence, hard loops never affect the soft theorem, insofar as they modify the underlying non-radiative process.
In the following, we therefore focus on collinear and soft loops.Gravity differs from gauge theory in two important aspects in the soft and collinear sector [22], ultimately due to the dimensionful coupling: i) In the purely-collinear sector, that is in the Lagrangian terms containing only collinear but no soft fields, there are no leading power interactions.The λ expansion corresponds to the weak-field expansion, and the first collinear interaction appears in O(λ).
ii) In the purely-soft sector, that is in the Lagrangian terms containing only soft but no collinear fields, there are also no leading power interactions.Here, the weakfield expansion agrees with the λ 2 expansion, corresponding to an expansion in soft momenta k ∼ λ 2 .Purely-soft interaction vertices thus start at O(λ 2 ).
Hence, whenever a purely-collinear or a purely-soft interaction takes place, the contribution is already suppressed by at least one order of λ or λ 2 , respectively.In gravity, only softcollinear interactions exist at leading power.This has a drastic impact on the loop corrections to the three universal terms in the soft theorem.In the remainder of this subsection, we show within the EFT framework that the leading-power eikonal term is not modified, the subleading term is only corrected at one-loop, and the sub-subleading term by one-and two-loop contributions.These conclusions agree with [37] and sharpen the all-order power-counting of soft and collinear loop corrections.
Let us first add one collinear loop to the emission process.The first possibility is to connect the i-collinear loop only to the i-collinear leg (or legs, if one considers multiple i-collinear particles), as depicted in Fig. 3. Due to the aforementioned point i), these < l a t e x i t s h a 1 _ b a s e 6 4 = " 3 J a < l a t e x i t s h a 1 _ b a s e 6 4 = " 9 v 3 P / w K B N y y B e l R i 1 t 4 6 g D 0 s j j s = " > A T q W a T y t f K z 8 q v y e + N H d a v 6 v P p i F r q + N s 9 5 Z q 2 s 6 q s / T G H S W Q = = < / l a t e x i t > k p j p i b a < l a t e x i t s h a 1 _ b a s e 6 4 = " L d 8 l n h h 7 R 0 attachments must stem from the subleading purely-collinear Lagrangian L As we need to attach the loop twice, this yields a suppression of at least O(λ 2 ).Alternatively, one can form a tadpole using a vertex containing two gravitons.These vertices start at O(λ 2 ).The second possibility is to attach the loop to the hard scattering, by adding an additional i-collinear building block to the N -jet operator, and then connecting it to the i-collinear leg.However, these building blocks are also suppressed by another power in λ, so this contribution also counts at least as O(λ 2 ).In summary, the collinear one-loop contribution is suppressed at least by O(λ 2 ), thus it cannot affect the leading term of the soft theorem.
Next, let us consider the soft one-loop contributions.Here, it is important to note two major simplifications: First, if the soft loop connects only a single i-collinear leg, as depicted in Fig. 3, one can fix soft light-cone gauge in the n i− direction, effectively decoupling the soft-covariant derivative.For gravity, this implies that the first possible soft-collinear vertex is the Riemann-tensor term in L i , thus suppressed by at least O(λ 2 ).To attach the loop to the leg, one needs two such vertices, so this loop is already suppressed by O(λ 4 ), without considering the soft emission itself.Therefore, these types of diagrams are too powersuppressed and not important for the discussion.The only relevant soft-loop contribution arises from a loop connecting two legs of different collinear sectors, depicted in Fig. 4, which can already appear at O(1) using the leading-power interactions. 8However, we shall now argue that these contributions vanish unless the external soft graviton is connected to the loop through a purely-soft interaction, as shown in the last diagram in the figure.The loop depicted in Fig. 4, with the soft graviton emission removed, is given by the (dimensionally regulated) integral The integral is scaleless, and vanishes.If one now attaches the soft graviton to one of the collinear lines (see first three diagrams in Fig. 4), one can always route the k-momentum such that it appears in the eikonal propagators of only one of the legs, say i.In this way, the loop integral (6.50) may be modified to include eikonal propagators of the form (n i− (l + k) + i0) −1 .Since only the n i− k component of the soft momentum can ever appear in the denominator, one cannot form a soft invariant and the soft loop integral will remain scaleless and vanishing. 9In order for soft loops to yield a non-zero contribution, one needs to bring the full external soft momentum k into the loop integral.This requires the external soft graviton to couple to the loop through a purely-soft interaction (as in the last diagram in the figure).Such interaction vertices involve the full momentum conservation delta function and lead to propagators 1/(l + k) 2 .However, by point ii) above, such a purely-soft vertex comes at the cost of power-suppression by at least λ 2 .Hence, soft one-loop corrections can also not affect the leading term in the soft theorem.These considerations easily generalise to any loop order.The key results from the above discussion can be summarised as follows: i) A collinear loop can only be connected by purely-collinear vertices, which are powersuppressed in gravity.Thus, adding a collinear loop always brings suppression of at least O(λ 2 ).
ii) A soft loop is scaleless, unless it is directly connected to the external soft graviton by a purely-soft interaction vertex, due to the multipole expansion in soft-collinear vertices.Since purely-soft interactions are power-suppressed in gravity by a factor of λ 2 , adding a soft loop yields a suppression of at least O(λ 2 ).
For purely-collinear loops, the suppression by O(λ 2 ) per loop immediately allows us to conclude that the subleading soft factor can only be modified at the one-loop order but not by collinear two-or higher-loop contributions.Similarly, the sub-subleading factor can only be modified by collinear one-and two-loop processes, since three-loop is suppressed by at least λ 6 .
For purely soft, as well as mixed soft and collinear loops, one again needs to inject the full external soft momentum k into the loops, otherwise they are scaleless and vanish.For multiple soft loops, this is only possible if the soft emission is attached to the soft particle in the loop, and each additional soft loop is also directly connected to this loop via purely-soft emission vertices, as depicted in Fig. 5.If a loop is not directly connected (via purely-soft vertices) to the soft emission, it is scaleless and vanishes.Thus, effectively, as was the case Q h x h q V / K p S z F Y x i n e U M 3 x r v Z h l X j t N 3 y 9 l v d d + 3 m 0 e u q R R v W M + u 5 t W d 5 1 o F 1 Z L 2 1 j q 0 T C 9 d E 7 X P t S + 1 r / V P 9 W / 1 7 / c f U d X 2 t i n l q L a 3 6 z z 9 f G Q v b < / l a t e x i t > k p j p i c d Figure 5: An example of a non-vanishing soft two-loop contribution.In order for the soft scale k to be present in both loop momenta, the loops must be attached via purely-soft interactions to the emitted graviton, so c, d ≥ 2. Any soft loop yields an additional λ 2 suppression, and the two-loop contribution therefore starts at sub-subleading order λ 4 .
for collinear loops, each additional soft loop comes with at least O(λ 2 ) from the purely-soft vertex.We can again conclude that the subleading term can only be affected by one-loop processes, and the sub-subleading term is modified by one-and two-loop diagrams.In summary, the power-counting and multipole expansion of soft-collinear effective theory, combined with the necessity of a soft scale in soft loops for them to not vanish, immediately imply the following result, already obtained in [37]: The leading soft factor is never modified by loops, the subleading factor is only corrected by one-loop, and the sub-subleading factor is only modified by one-and two-loop contributions.Higher loopcorrections cannot affect the terms of the gravitational soft theorem.
Summary and outlook
Despite over 60 years of history, soft theorems are still an active field of research.Adopting the perspective of effective Lagrangians, we connect the structure of the NLP terms in the soft theorems for gauge theories and perturbative gravity to the emergent soft gauge theories of the respective Lagrangians.This point of view is especially revealing for gravity: • It explains without calculation why soft graviton emission is universal to sub-subleading order (rather than subleading order only as for gauge bosons).Since one constructs soft-gauge invariant sources with soft fields to these orders, the emission is controlled by Lagrangian interactions, which are independent of the hard process.
• The leading and subleading terms, despite resembling the two gauge-theory terms after substituting colour charge by momentum, should in fact both be interpreted as eikonal terms, stemming from the covariant derivative of soft-collinear gravity and representing the coupling to the charges of the soft gauge symmetry.This explains why, unlike in gauge theory, the first two terms in the soft theorem require charge conservation of the hard process in order to be gauge invariant.
• The two angular momentum factors in the sub-subleading term, while representing the same mathematical expressions, have different interpretations.One factor is related to the charge of one of the soft gauge symmetries, while the other arises from the kinematic multipole expansion, in analogy with gauge theory.
The EFT formalism allows us to cast the soft theorem into an operatorial form, since it follows from simple manipulations of the Lagrangian, which makes the universal nature of the theorem manifest, including the origin of the spin term (as demonstrated here for the case of gauge theories).Power-counting and general properties of soft-collinear Lagrangian further allow for a classification of loop corrections to the soft factors.In addition, it can immediately be applied to extensions of QCD, the Einstein-Hilbert theory, and matter theories with non-minimal coupling, as long as the underlying gauge or diffeomorphism symmetry is respected.In this way, one can immediately judge from power-counting and the Lagrangian if a higher-order effective operator, like the chromomagnetic ψσ µν F µν ψ, the gravitational √ −gR 2 or a non-minimal matter coupling √ −gR 2 φ affects the soft theorem.Importantly, since these extensions do not affect the soft gauge symmetry, it follows that there will always be two universal terms in gauge theory, and three universal terms in gravity, as the number of these terms is only linked to the effective soft gauge symmetry, and not to the actual form of the Lagrangian.
The present work derives from recent progress on the formulation of the SCET for gravity beyond leading power [22].The general construction of this EFT reveals the intricate structure of soft-collinear gravity as a gauge theory, which explains why the soft theorem looks the way it does.In this respect, it is worth noting that the soft-collinear effective Lagrangian automatically provides rules to generalise soft amplitudes to multiple emissions, including quantum corrections.This perspective complements the insights obtained from spinor-helicity methods [6] or the double-copy mapping [26], and ties them more closely to the properties of (effective) field theories.It would be interesting to further explore the connection of the EFT formulation of soft physics to asymptotic symmetries [13,14,[39][40][41].
Figure 2 :
Figure 2: Three possible contributions to the radiative amplitude in SCET at NLP.The first diagram represents the time-ordered product of the leading power current and the λ 2suppressed Lagrangian.In the second diagram, both current and Lagrangian are suppressed by a single power of λ.There are no soft building blocks at order λ 2 ; hence, the third diagram does not contribute to the LBK amplitude in SCET.The leading-power emission would correspond to the first diagram with an L (0) insertion.
given in (3.6)-(3.8).Let us start with L (2a) e y d P z 9 8 / 8 e H 3 v G T z l T 6 L p / l p Y b z X v 3 H 6 w 8 b D 1 6 / O T p s 9 W 1 5 y c q y S S F H k 1 4 I s 9 8 o o A z A T 1 k y O E s l U B i n 8 O p P 3 5 X 6 K e X I B V L x E e c p D C I yU i w k F G C B g 3 X G u t 9 H 0 Z M 6 D A O Q 8 Y h 1 y F M x E i S N M p b t n 0 t l u h N v u X t u o 7 n u q 9 b f Q N V J m W S i U B o Y z z M S 6 Z T T p h w s O u 2 t 3 P N P I d 7 C 0 I n 1 9 x z s F Z A 4 + h U Q g g y N s u c h f G O A 1 e e K E G j c A i x l L 0 y c u a 8 1 A H Q R L Z V R F L o U i Y p B 2 e K z F t y C L o Q p z h x 7 G o a + w T d X Y w Y H T u 8 S x z e D r p u E X g H a f 5 V 2 l 2 s j c 7 S c K 4 R O I U O V J + B E x + 4 3 h x v 5 n o O p U N W w G s / E Z j E5 a e C u c Z P 4 f a c / a K w b 9 f Z d + r g b h 3 c q 4 P 7 d f C g D h 7 + b y s j g j S C 4 F Y z 9 8 t m F k E m B U R w 4 / e + u i / 3 w n B 1 w 2 2 7 5 b A X C 6 8 q N a e Y X g w 0 E y k G Y K g 0 w e F G b c x s Y s j w Q 6 Y B I p 8 Y g p C J T N r t W l E J K F o D o 5 b T y n O h j j N W 6 Y x 3 n w b F o u T T t v b a + 9 8 6 G w c v a 1 a t G K t W y + t L c u z 9 q 0 j 6 7 1 1 b P U s 2 v j c + N r 4 1 v j e / N L 8 0 f z Z / D W d u r x U e V 5 Y t 0 b z 9 z 8 F I b j w < / l a t e x i t > t e x i t s h a 1 _ b a s e 6 4 = " U B / D 9 D 3 m 8L x k D t u T 4 L 1 Y 7 1 M t S W o = " > A A A F G X i c r Z T L b t N A F I b d J k A J t x S W 3 V i 0 S A V Z k Z 3 0 u o h U w Y Z l k e h F a q J o P D 6 O p x m P r Z n j S m H k D a / B C 7 C F N 2 C H 2 L L i B X g O x k 7 S N o k X L D q S p e P v n / P P 8 R n P + C l n C l 3 3 z 8 p q r X 7 v / o O 1 h 4 1 H j 5 8 8 f d Z c f 3 6 q k k x S O K E J T + S 5 T x R w J u A E G X I 4 T y W Q 2 O d w 5 o / e F f r Z F U j F E v E R x y n 0 Y z I U L G S U o E G D 9 d p G z 4 c h Ez q M w 5 B x y H U I Y z G U J I 3 y h m 3 f i C V 6 k 2 9 7 u 6 7 j u e 7 r R s 9
Figure 3 :
Figure 3: Diagram classes modifying the soft emission process with loop correction to a single collinear leg.Collinear interaction vertices are power-suppressed by at least one order in λ, so a, b ≥ 1, suppressing the collinear loop by λ 2 .For soft loops, one can fix soft light-cone gauge in the direction of the collinear leg, effectively decoupling the soft modes.This means that a, b ≥ 2 and the soft loop is suppressed by λ 4 .In addition, the soft loop is scaleless unless the graviton is emitted via a purely-soft vertex, as depicted in the last diagram.This process is further suppressed by c ≥ 2, yielding a total suppression by λ 6 .
d T n B Y N P R 7 v 5 j D W j X 6 7 5 R 2 2 D r 6 2 d 0 / e z w e 1 b T 2 z X l j 7 l m c d W S f W R + u L 1 b P o 1 p / a X u 1 N 7 a D + v H 5 a / 1 T / P H P d 3 J j H P L W u n X r v L 2 a I 4 z k = < / l a t e x i t > t e x i t s h a 1 _ b a s e 6 4 = " 5 g
Figure 4 :
Figure 4: Diagram classes modifying the soft emission process, where the soft loop connects two legs of different directions.The soft-collinear interactions are present starting from a = b = 0. Due to multipole expansion, soft-collinear vertices are only sensitive to the n i− k component.For on-shell legs, the soft loop vanishes unless a soft scale, provided by the injection of the full soft momentum k, is present.This can only happen by a purely-soft interaction vertex.Hence, only the last diagram has a non-vanishing contribution.Here, c ≥ 2 as it is a purely-soft interaction, and the loop is suppressed by λ 2 in total.
<
l a t e x i t s h a 1 _ b a s e 6 4 = "d 7 K f o z u E n y F R x i h W Q U d 3 F l l 4 d W c = " > A A A G C n i c r V T N b t N A E H Z L A i U U a O H I x S K t V J A V 2 f l p y y F S B R e O R a I / U h t F 6 / U 4 X r J e W 7 v r S m H l N + A F u M I b c E N c e Q l eg O d g 7 T h J 0 y w S E l 1 p p f H 3 z X w z n t l d P 6 V E S N f 9 t b Z + p 1 a / e 2 / j f u P B 5 s N H j 7 e 2 n 5 yK J O M Y T n B C E 3 7 u I w G U M D i R R F I 4 T z m g 2 K d w 5 o / f F P z Z F X B B E v Z e T l I Y x G j E S E g w k h o a b t c 2 L 3 0 Y E a b C O A w J h V y F M G E j j t I o b 9 j 2 g i y h l / m e 1 3 M d v V 8 0 L j U o M s 6 T j A V M 6 U C v n Z e g S i k i z J F 9 t 9 X O F f E c O i N C 4 L H O W l K d X N G 2 Q z s r M Q X R c W C h F i V S x 1 A I C 9 Y r f V x P O 2 l z q r 1 b + F 2 p A H D C W y J C K f Q x 4 Z i C M 4 X 0 f 1 E I + h C n c u L Y l R v5 C P 2 e j A g e O 7 S P H N o K + m 4 h e h t y / l y u + o n / U s M z N e n d g l o w r 6 2 3 3 O B Z X 4 u m 9 n T / z a y m a Hd K U e Q D V T v j n V z N v C s o H Z I S X I w Q M Z n E 5 W G A v 5 4 S z 0 T o K c / y r R y f 7 l x s n v d D k d e Y 1 j O B H R P Y N Y E 9 E 7 h v A g 9 M 4 K E J f G W s 0 z W i / z z 6 C E k c Q b A 0 / I N y + I W S V g E W X L v P 8 + / y 8 g + 3 m n r M 5 b J X D a 8 y m l a 1 j o d b v y + D B G c x M I k p E u L C c 1 M 5 U I h L o i v T 6 p m A F O E x G s G F N h m K Q Q x U + W z l 9 q 5 G A j t M u N 5 M 2 i V 6 P U K h W I h J 7 G v P G M l I 3 O Q K 0 M Rd Z D I 8 H C j C 0 k w C w 9 N E Y U Z t m d j F G 2 g H h A O W d K I N h D n R t d o 4 | 22,348.2 | 2021-10-06T00:00:00.000 | [
"Physics"
] |
Ring-Shaped Sensitive Element Design for Acceleration Measurements : Overcoming the Limitations of Angular-Shaped Sensors
A new modification of an acceleration measurement sensor based on an acoustic waves resonance principle is proposed. Common angular-shaped sensors exhibit stress concentrations at the angular points near the origin points of destruction under external stresses; these points are the “Achilles’ heel” of the entire design. To overcome the above limitation, we suggest an angular-free ring-shaped sensitive element design that is characterized by enhanced robustness against external stress. The analytical treatment is validated by computer simulation results performed using the COMSOL Multiphysics software package. For an appropriate model parameterization, an original experiment has been carried out to estimate the stress-strained robustness of two potential candidates for sensitive console materials. Moreover, characteristics of the proposed sensor design, such as sensitivity threshold and maximum stress, have been obtained from the simulation data. The above results indicate that the proposed concept offers a promising advancement in surface acoustic waves (SAW) based accelerometer devices, and could, therefore, be used for several practical applications in such areas as biomedical and sports wearable devices; vehicular design, including unmanned solutions; and industrial robotics, especially those where high-G forces are expected.
Introduction
In recent years, integrated micromechanical solutions based on surface acoustic waves (SAW) resonance principles have attracted broad interest and suggested several potential applications, including biomedical and/or sports wearable devices [1,2]; vehicular designs, [3,4] including UAVs [5]; and temperature sensors [6][7][8].In the current literature, various aspects of SAW sensor design and its manufacturing technology have been investigated thoroughly.Issues considered in recent literature include the choice of the sensitive element material [6,8,9], and the possibility of passive implementation with a wireless interface [10][11][12].Acceleration measurements represent a prominent application possibility [13], where the proposed class of solutions has the potential to develop serial devices characterized by enhanced robustness and the implementation of high-G values, as indicated by a series of recent publications [14][15][16][17], including some developments by our research team [18,19].
Examples include recently proposed angular-shaped solutions based on rectangular and triangular shaped consoles [20,21].However, the angular-shaped sensors exhibit inevitable disadvantages due to the concentration of external stresses; because of these stresses, the angular sensors represent the "Achilles' heel" of the entire design.
To overcome the above limitation, we suggest an angular-free design based on the ring-shaped sensitive console element.The results presented in this article are obtained by modeling in the COMSOL Multiphysics software package [22].
Design Concept and Analytical Treatment
Optimization of the sensitive element (SE) topology plays a key role in the design and future performance of SAW-based inertial sensors.By varying the parameters of inertial masses (IM) and inter-digital transducers (IDT), one can achieve improved measurement accuracy, a simpler sensor design, and the required frequency range.Analytical calculation of the IDT is a complex problem with multiple optimization parameters.Therefore, it is advisable to use computer simulations to optimize the SE by numerical treatment.In addition, achieving the ultimate performance of the sensors is another goal of the design process.Due to the extremely stringent requirements imposed on the SE shock resistance, their experimental evaluation presents considerable challenges, as in some cases a representation of actual extreme conditions can hardly be achieved.The most straightforward solution is again to replace full-scale tests with computer simulations at least in the first approximation.
As shown in Figure 1, the pendulum type sensitive micromechanical accelerometer (MMA) consists of a quartz ST-cut console (1), rigidly fixed at one end and loaded with inertial mass (IM) (2) at the other end.On the opposite surfaces of the SE console, reflectors (3) and inter-digital transducers (IDT) (4), each operating at their own frequency, are located.To exclude the mutual synchronization of the two self-excited oscillators ( 5) and ( 6), the resonant frequencies of the single-cavity resonators should be sufficiently separated.The output signals of the oscillators ( 5) and ( 6) are multiplied by a frequency mixer (7).Application of external acceleration leads to the cantilever bending, such that the SE surfaces experience stress-strained deformations.The differential frequency at the output of the low-band filter ( 8) is proportional to the acceleration applied.To overcome the above limitation, we suggest an angular-free design based on the ring-shaped sensitive console element.The results presented in this article are obtained by modeling in the COMSOL Multiphysics software package [22].
Design Concept and Analytical Treatment
Optimization of the sensitive element (SE) topology plays a key role in the design and future performance of SAW-based inertial sensors.By varying the parameters of inertial masses (IM) and inter-digital transducers (IDT), one can achieve improved measurement accuracy, a simpler sensor design, and the required frequency range.Analytical calculation of the IDT is a complex problem with multiple optimization parameters.Therefore, it is advisable to use computer simulations to optimize the SE by numerical treatment.In addition, achieving the ultimate performance of the sensors is another goal of the design process.Due to the extremely stringent requirements imposed on the SE shock resistance, their experimental evaluation presents considerable challenges, as in some cases a representation of actual extreme conditions can hardly be achieved.The most straightforward solution is again to replace full-scale tests with computer simulations at least in the first approximation.
As shown in Figure 1, the pendulum type sensitive micromechanical accelerometer (MMA) consists of a quartz ST-cut console (1), rigidly fixed at one end and loaded with inertial mass (IM) (2) at the other end.On the opposite surfaces of the SE console, reflectors (3) and inter-digital transducers (IDT) (4), each operating at their own frequency, are located.To exclude the mutual synchronization of the two self-excited oscillators ( 5) and ( 6), the resonant frequencies of the single-cavity resonators should be sufficiently separated.The output signals of the oscillators ( 5) and ( 6) are multiplied by a frequency mixer (7).Application of external acceleration leads to the cantilever bending, such that the SE surfaces experience stress-strained deformations.The differential frequency at the output of the low-band filter ( 8) is proportional to the acceleration applied.One of the most important properties that determines the sensitivity of MMAs on SAW is their relative deformation ability.The non-uniform distribution of the relative deformations leads to additional errors.To overcome this limitation, we suggest a ring-shaped sensor design that is rigidly attached along the edge (Figure 2a).Ring-shaped resonators are made from aluminum nitride and located symmetrically on the opposite surfaces of the SE.To increase sensitivity, the plate can be additionally loaded with an external IM (Figure 2b).One of the most important properties that determines the sensitivity of MMAs on SAW is their relative deformation ability.The non-uniform distribution of the relative deformations leads to additional errors.To overcome this limitation, we suggest a ring-shaped sensor design that is rigidly attached along the edge (Figure 2a).Ring-shaped resonators are made from aluminum nitride and located symmetrically on the opposite surfaces of the SE.To increase sensitivity, the plate can be additionally loaded with an external IM (Figure 2b).
In the above design (see Figure 2), external acceleration is applied along the Z axis and leads to the deformation of the piezoelectric console (2).This in turn leads to the frequency adjustment of the opposite SAW resonators (4).To increase the sensitivity, inertial mass (3) is optionally applied.Further formation and processing of the output signal is similar to previous designs, as illustrated in Figure 1.This SE design is suggested both because of its form and because its attachment method overcomes several drawbacks of earlier designs.In particular, the ring-shaped design is capable of withstanding and measuring significantly higher accelerations due to the absence of overstress points that represent the "Achilles' heel" of the angular design.Moreover, the ring-shaped design provides a uniform distribution of stresses and relative deformations in the SAW-resonator deposition area and reduced error levels due to its lower cross sensitivity.In the above design (see Figure 2), external acceleration is applied along the Z axis and leads to the deformation of the piezoelectric console (2).This in turn leads to the frequency adjustment of the opposite SAW resonators (4).To increase the sensitivity, inertial mass (3) is optionally applied.Further formation and processing of the output signal is similar to previous designs, as illustrated in Figure 1.
This SE design is suggested both because of its form and because its attachment method overcomes several drawbacks of earlier designs.In particular, the ring-shaped design is capable of withstanding and measuring significantly higher accelerations due to the absence of overstress points that represent the "Achilles' heel" of the angular design.Moreover, the ring-shaped design provides a uniform distribution of stresses and relative deformations in the SAW-resonator deposition area and reduced error levels due to its lower cross sensitivity.
We analyzed the stress-strained state to assess the strength of the suggested SE design.Symmetric bending fixed around the perimeter and uniformly loaded round plates at the polar coordinates have been used [13].Figure 3 shows the ring-shaped SE where r is a polar radius аnd θ is a polar angle, such that the relationship between polar and Cartesian coordinates is expressed by We analyzed the stress-strained state to assess the strength of the suggested SE design.Symmetric bending fixed around the perimeter and uniformly loaded round plates at the polar coordinates have been used [13].Figure 3 shows the ring-shaped SE where r is a polar radius and θ is a polar angle, such that the relationship between polar and Cartesian coordinates is expressed by thus, ∂r ∂x The differential equation for the curved console surface of a transversely loaded plate in Cartesian coordinates takes the form where x, y, z are coordinates of the Cartesian coordinate system, ω is a deflection of a plate, q is the load intensity, and D is a bending stiffness of the plate.The differential equation of symmetric bending of the uniformly loaded ring-shaped plate in polar coordinates is given by To obtain the maximum stress σ max of the plate, one needs to take into account the load distribution, the boundary conditions and integrate the Equation ( 5) yielding where R is the plate radius and h is the plate thickness.The differential equation for the curved console surface of a transversely loaded plate in Cartesian coordinates takes the form where x, y, z are coordinates of the Cartesian coordinate system, ω is a deflection of a plate, q is the load intensity, and D is a bending stiffness of the plate.The differential equation of symmetric bending of the uniformly loaded ring-shaped plate in polar coordinates is given by To obtain the maximum stress max σ of the plate, one needs to take into account the load distribution, the boundary conditions and integrate the Equation ( 5) yielding where R is the plate radius and h is the plate thickness.
Since MMA measures the acceleration the maximum stress can be expressed as where F is the external force, S is the load area, m is the mass of the plate, a is the applied acceleration, is the material density, and V is the ring-shaped plate volume.
The maximum stress max σ can be finally expressed as and thus is proportional to the acceleration value a and to the Since MMA measures the acceleration the maximum stress can be expressed as where F is the external force, S is the load area, m is the mass of the plate, a is the applied acceleration, ρ is the material density, and V is the ring-shaped plate volume.
The maximum stress σ max can be finally expressed as and thus is proportional to the acceleration value a and to the R 2 h ratio.
Model Implementation and Computer Simulations
Figure 4 depicts the maximum stress as a function of the applied acceleration for two different SE materials, namely SiO 2 ST-cut and LiNbO 3 YX-128 • -cut.Each straight line in Figure 4 corresponds to one of particular value of the R 2 h ratio.To estimate the sensor measurement range and its maximum overload represented by the ultimate strength for the mechanical bending of a ring-shaped console at which a qualitative change in the properties of the material occurs, it is necessary to know the mechanical capabilities of the SE.
The theory of maximum normal stresses is suitable only for the strength analysis of brittle materials and only under certain loading conditions [23].The condition of bending strength expressed as calculated maximum stress σ calc max = [σ], where σ is the ultimate strength of material.In this case, a strength test was performed to determine the magnitude of the maximum stresses and then compare them against permissible values.The elements satisfied the strength test conditions once they were able to withstand the maximum stress level at which the allowed deformation had not been exceeded.
In case of a linear stress state, the limiting value of the only principal stress can be determined directly from the experiment.In further analyses we assume that knowledge about the ultimate strength of the material allows one to estimate both the load values and the size of the SE.In order to determine the ultimate strength of the materials, static tests of quartz ST-cut samples and lithium niobate YX-128 • -cut samples were conducted.
The fixation of the elements on a test bench replicated the proposed method of fixing the SE in the body of the accelerometer; this process also replicated the fixing scheme for calculating the maximum stress, such that the applied stress simulated the effect of acceleration on the SE.
The experiment was produced with the high-capacity testing system INSTRON-5985 [24].As a result of the experiment, critical load values leading to the destruction of both quartz ST-cut (F = 555 N) and lithium niobate YX-128 • -cut (F = 290.8N) SEs have been obtained.Since the fixing scheme was non-standard, we next need to calculate the maximum stress of materials.
Electronics 2018, 7, x FOR PEER REVIEW 5 of 13 To estimate the sensor measurement range and its maximum overload represented by the ultimate strength for the mechanical bending of a ring-shaped console at which a qualitative change in the properties of the material occurs, it is necessary to know the mechanical capabilities of the SE.
Model Implementation and Computer Simulations
The theory of maximum normal stresses is suitable only for the strength analysis of brittle materials and only under certain loading conditions [23].The condition of bending strength expressed as calculated maximum stress [ ] max calc σ = σ , where σ is the ultimate strength of material.
In this case, a strength test was performed to determine the magnitude of the maximum stresses and then compare them against permissible values.The elements satisfied the strength test conditions once they were able to withstand the maximum stress level at which the allowed deformation had not been exceeded.
In case of a linear stress state, the limiting value of the only principal stress can be determined directly from the experiment.In further analyses we assume that knowledge about the ultimate strength of the material allows one to estimate both the load values and the size of the SE.In order to determine the ultimate strength of the materials, static tests of quartz ST-cut samples and lithium niobate YX-128°-cut samples were conducted.
The fixation of the elements on a test bench replicated the proposed method of fixing the SE in the body of the accelerometer; this process also replicated the fixing scheme for calculating the maximum stress, such that the applied stress simulated the effect of acceleration on the SE.
The experiment was produced with the high-capacity testing system INSTRON-5985 [24].As a result of the experiment, critical load values leading to the destruction of both quartz ST-cut (F = 555 N) and lithium niobate YX-128°-cut (F = 290.8N) SEs have been obtained.Since the fixing scheme was non-standard, we next need to calculate the maximum stress of materials.
Table 1 contains the key parameters of the materials that explicitly affect the performance of the experimental setup.The first two parameters, the elastic modulus and the Poisson's ratio, are tabulated for each material and can be found in the corresponding material science literature.The third parameter is the critical load value that had to be determined in a separate experiment, as Table 1 contains the key parameters of the materials that explicitly affect the performance of the experimental setup.The first two parameters, the elastic modulus and the Poisson's ratio, are tabulated for each material and can be found in the corresponding material science literature.The third parameter is the critical load value that had to be determined in a separate experiment, as described above.The remaining three parameters characterize the geometry of the console design determined with 0.05 mm accuracy.Next, we calculated the maximum stress σ max values for uniformly loaded plates using the materials parameters from the Table 1.
where σ 1 is a tensile stress (Pa), σ 2 is a maximum bending stress (Pa), ψ 1 (u) were obtained graphically for the stress-strain curve.The maximum stress values obtained were σ max = 141.8MPa and 106.8 MPa for the quartz ST-cut and the lithium niobate YX-128 • -cut, accordingly [8,9,13].These values were used in further mathematical models and computer simulations.
To estimate the distribution of the relative deformations on the opposite sides of the ring-shaped SE, finite element analysis (FEM) was performed.Corresponding models in the COMSOL Multiphysics software package are shown in Figure 5.
materials parameters from the Table 1.
where 1 σ is a tensile stress (Pa), 2 σ is a maximum bending stress (Pa), 1 ( ) u ψ were obtained graphically for the stress-strain curve.The maximum stress values obtained were max σ = 141.8MPa and 106.8 MPa for the quartz ST-cut and the lithium niobate YX-128°-cut, accordingly [8,9,13].These values were used in further mathematical models and computer simulations.
To estimate the distribution of the relative deformations on the opposite sides of the ring-shaped SE, finite element analysis (FEM) was performed.Corresponding models in the COMSOL Multiphysics software package are shown in Figure 5.The dimensions of the plates satisfied the previously chosen R 2 h ratios (in this paper the next ratio values were considered 25, 100, 200 and 400).All anisotropic properties of the materials were taken into account in the model.Figure 6 shows the placement of the SAW-resonators on the ring-shaped SE. Figure 7 indicates that relative deformations in the case of the ring-shaped SAW-resonator (red line) are homogenous in comparison with non-homogenous distribution for the linear SAW-resonator (blue curve).The symmetric location of the ring-shaped SAW resonator on the SE leads to equal sensitivities of the arms of the differential circuit, thus reducing errors induced by a geometric factor.Therefore, the distribution of relative deformations for the ring-shaped SE is uniform.
Deformation characteristics of the SE (deformations, internal stresses, relative stresses) under applied acceleration in the range between 0 and 20,000 g along the sensitivity axis z (see Figure 6) were obtained by computer simulations.Figure 8 shows the ring-shaped SE SAW-resonator frequency as a function of the applied acceleration.The symmetric location of the ring-shaped SAW resonator on the SE leads to equal sensitivities of the arms of the differential circuit, thus reducing errors induced by a geometric factor.Therefore, the distribution of relative deformations for the ring-shaped SE is uniform.
Deformation characteristics of the SE (deformations, internal stresses, relative stresses) under applied acceleration in the range between 0 and 20,000 g along the sensitivity axis z (see Figure 6) were obtained by computer simulations.Figure 8 shows the ring-shaped SE SAW-resonator frequency as a function of the applied acceleration.The symmetric location of the ring-shaped SAW resonator on the SE leads to equal sensitivities of the arms of the differential circuit, thus reducing errors induced by a geometric factor.Therefore, the distribution of relative deformations for the ring-shaped SE is uniform.
Deformation characteristics of the SE (deformations, internal stresses, relative stresses) under applied acceleration in the range between 0 and 20,000 g along the sensitivity axis z (see Figure 6) were obtained by computer simulations.Figure 8 shows the ring-shaped SE SAW-resonator frequency as a function of the applied acceleration.
Estimation of the Sensitivity Threshold of the Acceleration Sensor
Next, in order to assess the lower limit of sensitivity as recommended by the IUPAC (International Union of Pure and Applied Chemistry), it was necessary to determine the noise level [3].
To determine the noise level, the MMA output (shown in Figure 9) was recorded in the absence of external acceleration; this output indicated the standard deviation σ = 0.07 kHz.For normal functionality, the SE output signal should be at least 3 times higher than σ.In this case the lower measurement bound was approximately 0.2 kHz.
Estimation of the Sensitivity Threshold of the Acceleration Sensor
Next, in order to assess the lower limit of sensitivity as recommended by the IUPAC (International Union of Pure and Applied Chemistry), it was necessary to determine the noise level [3].
To determine the noise level, the MMA output (shown in Figure 9) was recorded in the absence of external acceleration; this output indicated the standard deviation σ = 0.07 kHz.For normal functionality, the SE output signal should be at least 3 times higher than σ.In this case the lower measurement bound was approximately 0.2 kHz.Next, based on the simulation results, the sensor sensitivity was evaluated.The simulation results indicated that the key sensor characteristics, such as sensitivity, dynamic range, and scale factor, depend explicitly on the ratio of SE sizes.Table 2 summarizes the simulation results for SE made of both quartz and lithium niobate materials and indicates predicted scaling factors, ranges of measured accelerations, and threshold sensitivity.Next, based on the simulation results, the sensor sensitivity was evaluated.The simulation results indicated that the key sensor characteristics, such as sensitivity, dynamic range, and scale factor, depend explicitly on the ratio of SE sizes.Table 2 summarizes the simulation results for SE made of both quartz and lithium niobate materials and indicates predicted scaling factors, ranges of measured accelerations, and threshold sensitivity.Calculations accounted for both the nonlinear effects and the anisotropic properties of the considered quartz materials of ST-cut and YX-128 • -cut lithium niobate.Based on the simulation data, one can see that the scale factor of the quartz SE is almost double that of the lithium niobate.In addition, the dynamic range of the quartz SE is enhanced, while its threshold sensitivity is considerably lower.
The Ring-Shaped Sensitive Element Additionaly Loaded by Inertial Mass
A possible method to increase the resolution (quantified here by threshold sensitivity) of the SEs of the proposed sizes is to load each of them with inertial masses.To validate the above approach, we next analyzed the stress-strain state of SEs with additional IM (see Figure 10).
The Ring-Shaped Sensitive Element Additionaly Loaded by Inertial Mass
A possible method to increase the resolution (quantified here by threshold sensitivity) of the SEs of the proposed sizes is to load each of them with inertial masses.To validate the above approach, we next analyzed the stress-strain state of SEs with additional IM (see Figure 10).
The IM consists of a cylinder located in the central part of the plate on one of its two sides.The size of the cylinder is denoted by and height H.The inertial mass was manufactured from a heavy alloy tungsten-nickel-copper (VNM).This alloy is characterized by high density ρ = 18,000 kg/m 3 , Young's modulus of elasticity E = 350 GPa, and Poisson's ratio γ = 0.29, while remaining nonmagnetic.The simulation data indicates that, as expected, the maximum stress in the case of a circular plate occurs at the IM attachment points.
Next, the sensitivity range of the stress-strain state of the SE was been evaluated.It is obvious that by increasing inertial mass, differential frequency also increases, as differential frequency depends on both the size and the relationship between the dimensions of the inertial mass and the plate, as represented by a 3D surface plot in Figure 11 (exemplified for (a) the quartz ST-cut, (b) is the lithium niobate YX-128°).
For one volume of the inertial mass and the dimensions of the plate, the maximum sensitivity is observed under the condition R′ < H (where R′ is the radius of the IM, H is the height of the IM), and ratio (R is the plate radius), and the high sensitivity zone is shifted to the zone of attachment (see Figure 11) where it reaches its maximum at following by a decline (see Figure 12а).Thus, the radius of the inertial mass should be approximately 60% of the radius of the SE.The height of the IM can be limited only by its practicality and design features (see Figure 12b).The IM consists of a cylinder located in the central part of the plate on one of its two sides.The size of the cylinder is denoted by R and height H.The inertial mass was manufactured from a heavy alloy tungsten-nickel-copper (VNM).This alloy is characterized by high density ρ = 18,000 kg/m 3 , Young's modulus of elasticity E = 350 GPa, and Poisson's ratio γ = 0.29, while remaining non-magnetic.
The simulation data indicates that, as expected, the maximum stress in the case of a circular plate occurs at the IM attachment points.
Next, the sensitivity range of the stress-strain state of the SE was been evaluated.It is obvious that by increasing inertial mass, differential frequency also increases, as differential frequency depends on both the size and the relationship between the dimensions of the inertial mass and the plate, as represented by a 3D surface plot in Figure 11 (exemplified for (a) the quartz ST-cut, (b) is the lithium niobate YX-128 • ).
observed under the condition R′ < H (where R′ is the radius of the IM, H is the height of the IM), and ratio (R is the plate radius), and the high sensitivity zone is shifted to the zone of attachment (see Figure 11) where it reaches its maximum at following by a decline (see Figure 12а).Thus, the radius of the inertial mass should be approximately 60% of the radius of the SE.The height of the IM can be limited only by its practicality and design features (see Figure 12b).For one volume of the inertial mass and the dimensions of the plate, the maximum sensitivity is observed under the condition R < H (where R is the radius of the IM, H is the height of the IM), and ratio R R > 1 (R is the plate radius), and the high sensitivity zone is shifted to the zone of attachment (see Figure 11) where it reaches its maximum at R R = 5 3 following by a decline (see Figure 12a).Thus, the radius of the inertial mass should be approximately 60% of the radius of the SE.The height of the IM can be limited only by its practicality and design features (see Figure 12b).For a comparative evaluation of the sensitivity of the two materials, 3D surfaces of difference frequencies were constructed in the sensitivity zones (Figure 13).Figure 12 shows the dependences of the differential frequency on the quotient of the IM height and the quartz plate radius of the ST-cut and lithium niobate YX-128 • -cut.The graph shows that the maximum sensitivity of the element is achieved with a quotient of approximately R R = 3 5 and the high sensitivity zone is located near the attachment point.
For a comparative evaluation of the sensitivity of the two materials, 3D surfaces of difference frequencies were constructed in the sensitivity zones (Figure 13).For a comparative evaluation of the sensitivity of the two materials, 3D surfaces of difference frequencies were constructed in the sensitivity zones (Figure 13).From the above, we can conclude the following: To achieve an increase in the sensitivity threshold using inertial mass, it is recommended to consider the following:
•
The ratio of the radius of the IM to the radius of the plate should be R R = 5 3 for quartz materials of ST-cut and lithium niobate YX-128 • ; • The ring resonator should be located near the attachment point;
•
The sensitivity of quartz ST-cut is 2 times more than lithium niobate YX-128 • .
The values of strains and internal stresses for cases with IM were re-read to the values of the sensor output signal for all the proposed sizes and materials ratios (Figure 14 From the above, we can conclude the following: To achieve an increase in the sensitivity threshold using inertial mass, it is recommended to consider the following: • The ratio of the radius of the IM to the radius of the plate should be Each proposed ratio of dimensions of SE and IM corresponds to its dynamic range and scale factor.The results of calculations for SE from quartz and lithium niobate materials: scale factors, ranges of measured accelerations and thresholds of sensitivity are summarized in Table 3. Table 3.The parameters of quartz ST-cut and lithium niobate YX-128°-cut for maximum stress simulations (absolute values for positive and negative acceleration are given).
Quartz ST-Cut
Lithium niobate YX-128°-Сut Each proposed ratio of dimensions of SE and IM corresponds to its dynamic range and scale factor.The results of calculations for SE from quartz and lithium niobate materials: scale factors, ranges of measured accelerations and thresholds of sensitivity are summarized in Table 3.
Conclusions
To summarize, we have suggested a new modification of an acceleration measurement sensor based on acoustic waves.Since angular-shaped sensors exhibit stress concentrations at the angular points near the origin points of destruction under external stresses, thus representing an "Achilles' heel" for the entire design, it is plausible to suggest that an angular-free shape would be more robust.Here we have confirmed the above hypothesis by considering a ring-shaped sensitive element design that, as we have explicitly shown above, overcomes the typical limitations of the angular-shaped designs previously considered in literature, including earlier developments by our group.The analytical treatment is further validated by the computer simulation results performed using the COMSOL Multiphysics software package.For an appropriate model parameterization, an original experiment to estimate the stress-strained robustness of the two potential candidates for sensitive console materials has been carried out.Moreover, several characteristics of the proposed sensor design, such as the sensitivity threshold and maximum stress, have been obtained from the simulation data.
The above results indicate that the proposed concept offers a promising advancement in SAW based accelerometer devices and thus could have several practical applications in such areas as biomedical and sports wearable devices; vehicular design, including unmanned solutions; and industrial robotics, especially those where high-G forces are expected.
Figure 2 .
Figure 2. The ring-shaped SE model with a triangle grid (a) and design components (b): 1: fixation scheme; 2: ring-shaped sensitive element console; 3: inertial mass; 4: ring-shaped SAW resonator (another one is located symmetrically below the console).
Figure 2 .
Figure 2. The ring-shaped SE model with a triangle grid (a) and design components (b): 1: fixation scheme; 2: ring-shaped sensitive element console; 3: inertial mass; 4: ring-shaped SAW resonator (another one is located symmetrically below the console).
Figure 4
Figure 4 depicts the maximum stress as a function of the applied acceleration for two different SE materials, namely SiO2 ST-cut and LiNbO3 YX-128°-cut.Each straight line in Figure 4 corresponds to one of particular value of the
Figure 5 .Figure 5 .
Figure 5.The overview of the manufactured experimental prototype according to the design illustrated in Figure 2.
Electronics 2018, 7 ,Figure 6 .
Figure 6.The location of the SAW resonator: (a) linear case (b) ring-shaped case.Figure 6.The location of the SAW resonator: (a) linear case (b) ring-shaped case.
Figure 6 .
Figure 6.The location of the SAW resonator: (a) linear case (b) ring-shaped case.Figure 6.The location of the SAW resonator: (a) linear case (b) ring-shaped case.
Figure 6 .
Figure 6.The location of the SAW resonator: (a) linear case (b) ring-shaped case.
Figure 7 .
Figure 7. Relative deformations of the linear and ring SAW resonators
Figure 7 .
Figure 7. Relative deformations of the linear and ring SAW resonators R 2 h = 400.
Figure 7 .
Figure 7. Relative deformations of the linear and ring SAW resonators
Figure 9 .
Figure 9.The MMA output without external acceleration.
Figure 9 .
Figure 9.The MMA output without external acceleration.
Figure 10 .
Figure 10.The model of a round sensitive element loaded by inertial mass broken by a triangle grid.
Figure 10 .
Figure 10.The model of a round sensitive element loaded by inertial mass broken by a triangle grid.
Figure 11 .
Figure 11.The differential frequency as a function of the SE and IM relative dimensions under acceleration applied to the inertial mass attachment points: (a) quartz ST-cut; (b) lithium niobate YX-128 • -cut.
Electronics 2018, 7 , 13 Figure 11 .
Figure 11.The differential frequency as a function of the SE and IM relative dimensions under acceleration applied to the inertial mass attachment points: (a) quartz ST-cut; (b) lithium niobate YX-128°-cut.
Figure 12 Figure 12 .
Figure 12 shows the dependences of the differential frequency on the quotient of the IM height and the quartz plate radius of the ST-cut and lithium niobate YX-128°-cut.The graph shows that the maximum sensitivity of the element is achieved with a quotient of approximately 3 5 R R ′ = and the high sensitivity zone is located near the attachment point.
Figure 12 .
Figure 12.Dependence of the differential frequency on R /R (a) and H/h (b) for SE with IM.
Figure 12 .
Figure 12.Dependence of the differential frequency on R′/R (a) and H/h (b) for SE with IM.
Figure 13 .
Figure 13.Surfaces of difference frequency values of SE from quartz ST-cut and lithium niobate of cut YX-128º under the action of acceleration.
13 Figure 13 .
Figure 13.Surfaces of difference frequency values of SE from quartz ST-cut and lithium niobate of cut YX-128º under the action of acceleration.
ST-cut and lithium niobate YX-128°;• The ring resonator should be located near the attachment point;• The sensitivity of quartz ST-cut is 2 times more than lithium niobate YX-128°.The values of strains and internal stresses for cases with IM were re-read to the values of the sensor output signal for all the proposed sizes and materials ratios (Figure14(a) quartz ST-cut, (b) lithium niobate YX-128°)).
Figure 14 .
Figure 14.Scale factors and ranges of measured accelerations two types of SE with IM: (a) quartz ST-cut; (b) lithium niobate YX-128°-cut; blue, green, red and black lines correspond to R 2 /h equals to 400, 200, 100 and 25, respectively.
Figure 14 .
Figure 14.Scale factors and ranges of measured accelerations for two types of SE with IM: (a) quartz ST-cut; (b) lithium niobate YX-128 • -cut; blue, green, red and black lines correspond to R 2 /h equals to 400, 200, 100 and 25, respectively.
Table 1 .
Parameters of quartz ST-cut and lithium niobate YX-128 • -cut for maximum stress calculation.
Table 2 .
Parameters of quartz ST-cut and lithium niobate YX-128 • -cut for maximum stress simulations (the absolute values for positive and negative acceleration values are given).
Table 3 .
The parameters of quartz ST-cut and lithium niobate YX-128 • -cut for maximum stress simulations (absolute values for positive and negative acceleration are given). | 8,115.8 | 2019-01-29T00:00:00.000 | [
"Materials Science"
] |
An analytical approach towards EHD analysis of connecting-rod bearing
A. Benhamou*, A. Bounif**, P. Maspeyrot***, C. Mansour**** *Faculty of Technology, Hassiba Benbouali University, B.O. 151, Chlef, 02000, Algeria<EMAIL_ADDRESS>**LCGE Laboratory, USTO Mohamed Boudiaf University, BO 1505, Oran, 31000, Algeria, E-mail<EMAIL_ADDRESS>***Pprime Institute, Structure and Complex Systems Department, B.O. 30179, Poitiers 86000, France E-mail<EMAIL_ADDRESS>****LCGE Laboratory, USTO Mohamed Boudiaf University, BO 1505, Oran, 31000, Algeria<EMAIL_ADDRESS>
Introduction
Tribology field aims to understand interaction problems between surfaces while suggesting specific solutions.It concerns lubrication, wear as well as contact mechanics.The expanding range of tribological applications, from early machinery applications to recently micro and nano applications, has not only demonstrated its importance but revived recent interests of the field.The introduction of a range of micro-fabrication techniques coupled with developments in materials design has had a profound effect on the resurgence of tribological applications at the friction levels [1].Competition in the automotive industry as well as the requirements of customers requires manufacturers to reduce pollution and consumption while improving engine performance [2].In this context many experimental and numerical studies were developed within this research area.Among the studies, one can distinguishes the elastohydrodynamic (EHD) lubrication domain for smooth bearings for which the lubricant film interacts with solid surfaces, yielding specific behaviours, as it is the case for connecting-rod bearings in IC engines.Steady state behaviour of journal bearing has recently been investigated numerically by Kumar et al [3].Despite six decades of EHD lubrication studies, safety and reliability improvement of journal bearing under dynamic loading is still considered as a challenging task.The kinematics and dynamics of the operation of such journal bearings generate a field of pres-sure inside the oil film which in turn creates a deformation on the surfaces of the bearing and the journal.On the other hand, solid deformations generate local modifications in the lubricant thickness while affecting the pressure distribution within the oil film.In the early works of Fantino et al [4], some numerical solutions of the EHD problems were obtained using a fourth order Runge-Kutta integration.Simultaneously, journal bearing under dynamic loading were concerned with the finite element Oh and Goenka [5].Fantino et al [6] have shown that the elastic strain of the structures and the location at which the thickness is minimal are nearly independent of the viscosity of the lubricant and that the friction and the axial flow are more important in the elastic case than in the rigid case.To overcome numerical difficulties regarding finite element convergence, McIvor et al [7] advocate the necessity of using high order elements despite their important computational cost.The inertia influence of solids on the oil film thickness was evidenced by Aitken et al [8], and numerically confirmed in the work of Bonneau et al [9].In this context, the authors have developed an EHD algorithm accounting for cavitation within journal bearings.In 1997, Garnier [10] approached the problem of EHD by considering the connection casing-crankshaft of a thermal engine.In 2000, Piffeteau et al. [11] conducted a thermo-elastohydrodynamic study of a connecting-rod bearing in transitory mode.In the same content a recent work has been performed on heat transfer towards the oil film, and quantified using thermoelastohydrodynamic (TEHD) [12].In the work of Bonneau et al. [13], some modelling approaches were proposed for the prediction of break-off and formation of lubricant films within EHD contacts.The authors have stated out the necessity of accounting for inertia effects in the deformation prediction procedure.There statements were confirmed by Olson et al [14] when the authors investigated an elastic structure dynamics including inertia effect.The model has been applied to solve a coupled set of equations governing hydrodynamics with elasticity of a journal bearing [15].In this work, an analytical resolution of the EHD problem is proposed for connecting-rod bearing engine.The procedure solves simultaneously the reduced form of the Reynolds equation as well as Hook equation.A space-time dependant variable is introduced in the set of equations in the aim to relate radial and longitudinal deformations.As an application case, a journal bearing of the General Motors Diesel engine is investigated [16][17].Inertia effects on the oil film thickness are evidenced under a moderate regime.Moreover, a parametric study regarding connecting-rod bearing materiel type is conducted towards an optimization insight.
Hydrodynamic lubrication
Investigation of when the L/D ratio of the length to the diameter of the journal bearing is small, i.e. lower than 1/6, the circumferential gradient of pressure can be neglected in front of the axial gradient of pressure [18].
Here one considers a connecting-rod bearing subjected to a dynamic head.The Reynolds equation in transitory mode is written as follows: In the case of a smooth bearing, the external forces acting on the rod consist of the combustion pressure resultant within cylinders and inertia forces for the moving parts.The system is believed to be in equilibrium throughout the loading cycle.The balance can be written in vecto-rial form by accounting for external forces acting on the journal bearing and the force resulting from hydrodynamic pressure and inertia.
Fig. 1 Coordinates of the system In the coordinate system (Fig. 1), the budget equations including the applied external force (load F) and the hydrodynamic reaction due to the pressure field, are as follows: ( Expressions for the various speeds of the journal u a and of the bearing v c , can be written as: This yields a reduced form for the Reynolds equation: where The speeds of crushing and rotation * and * are unknown factors in the problem.p is the reduced pressure to be determined.In this work, an analytical approach is performed by means of the mobility method [19].The socalled graphic method of calculating "mobility" makes it possible to calculate the trajectory of the journal in its housing.Two components for the journal movement are considered, the crushing and the rotation.These lasts define the mobility vector.This method is applied for the resolution of Eq. ( 3).In order to derive an analytical expression for the solution of Eq. ( 3), one makes a use of the mobility vector account for crushing and rotation effects, respectively.Some rearrangements on Eq (4) in ( 2) lead to a simplified form for the Reynolds equation: The appropriate boundary conditions are those of Gümbel [18]: The corresponding pressure distribution is therefore:
Elastic deformations
It is worthy noticing that both hydrodynamic pressure as well as inertia effects contributes to the elastic deformation of the connecting-rod bearing.
According to the works of Goenka et al. [16] and Bonneau et al. [17], the connecting-rod bearing deformation for each effect was calculated separately by means of a numerical procedure.In this work, one proposes an analytical approach for evaluating simultaneously this deformation.In a case where the bearing thickness is lower than the radius of the journal, the constraints within the bearing are assumed to be uniform.The journal operates at a constant speed while producing elastic deformations which are related to the longitudinal and radial constraints.
are the expressions of linear deformation, which are related to the plane constraints expressions by: The set of the partial differential Eq. ( 11) expresses the coupling between the bearing deformation and the hydrodynamic pressure.Under a mechanical equilibrium; the set becomes: Far from the bearing axis, the origin of the deformation can be considered as a perturbation moving at a speed V. Consequently, the displacements U and W as well as the hydrodynamic pressures are related to dependent variable ( Vt x X ). Accordingly, the set (11) takes the following form: where denote for the initial deformations.A compact form for the system can be expressed as: where K refers to the non-homogeneous part in the second equations of the set (13) and V X is a delay time.In a case where only progressive perturbations are considered, the pressure can be expressed as a step function: with 1 when and 0 when where H denotes for the Heaviside distribution.
The obtained system can be simply written as: The general solution of system (15) is the sum of a particular solution 0 and a general solution of the as- sociated homogeneous linear equation l , in manner that: The characteristic solution of Eq. ( 17) is as follows: respectively the propagation velocities of the longitudinal and radial elastic waves in the rod bearing.
The form of the solution l will depend on V, V b and V p as: The second member of Eq. ( 17) becomes: The expressions for the longitudinal and radial deformation are given by: The two coefficients A θ and A x are related to the ratio V/V p their expressions shows that: the radial deformation increases with an increase in V; the longitudinal deformation increases as V decreases, with the evolution of A x and pressure p; the positive ratio , does not depend on pressure P and decreases as V increases.
Resolution procedure
As the right term of Eq. ( 7) depends only on the mobility direction α, the resolution is believed to be simple.In this context, an interpolation procedure is performed for a given attitude angle (Fig. 1) for each time step within the cycle.More details regarding the tabulation method of the mobility direction can be found in [19].Here, for a given load diagram, the hydrodynamic pressure is obtained by solving Reynolds equation which allows for the determination of both longitudinal and radial deformations.
Results and discussion
The journal bearing considered in this study is that of a Diesel General Motors engine, as investigated by Goenka et al [16] and Bonneau et al. [17].Table 1 summarises the journal bearing characteristics.
According the work of Goenka et al [16], two regimes corresponding to 2000 rpm and 4000 rpm were considered.The corresponding load diagrams are shown in (Fig. 2).
The loading was numerically fitted with a piece- wise polynomial expression, and implemented within the calculation loop.For the particular speed of 4000 rpm, the calculations were performed for the specific case where the inertia effects are neglected.For a complete cycle of the engine operation, the minimum oil film thickness is compared to the results obtained by Goenka et al. [16] and Bonneau et al. [17] (Fig. 3).One notices that the minimal film thickness for the three approaches, exhibits the same tendency.A slight deviation is noticed on the film thickness value which seems to be overestimated due to the short journal assumption, considered in the present work.In the following, the elastic strains are calculated with and without inertia effect by considering three types of material (Table 2).A parametric study regarding the materiel is also performed.Three types of materials are considered at both low (2000 rpm) and moderate (4000 rpm) regimes.It is noticed that except for the oscillatory behaviour created by the non-stationary term of Eq. ( 21), the oscillations appearing in the material are described by the very significant frequencies.The deformations become negative, in contrast to their behaviour in the case where inertia effects were neglected.This specific behaviour due to the effects of inertia makes it possible to moderate the effects of dilation generated by the deformations induced by the governing hydrodynamic pressure appearing in the journal bearing.
One notices in Fig. 5, with regard to longitudinal deflection, that no modification is apparent except for the oscillations induced by the non-stationary term.Figure 6 shows the evolution of the minimal thickness of oil film with and without the effect of inertia for a cycle loading.The curve accounting for the effect of inertia, exhibits values lower than that without inertia effects.Consequently, the effects induced by inertia are relatively constraining on journal deformation at high-speeds.
In addition, in the case relating to the effects of inertia, the materials' influence becomes less consistent by the overall effect undergone by the deformation resulting from the superposition of the coupled influence of materials and the effects of inertia.
For the specific speed of 4000 rpm and for the three types of material considered the radial and longitudinal elastic strain with and without the effect of inertia are shown in (Figs. 7 and 8).The minimal thickness of the oil film calculated for = 4000 rpm with and without the effect of inertia is shown in (Fig. 9).The same tendency is observed as for 2000 rpm.The effect of inertia and the material type on deformation have a more marked tendency, in particular for the radial deformations and their consequence on the minim oil film thickness.In Fig. 10, one clearly notices the correction introduced by the introduction of the effect of inertia, particularly, the clear difference between the results without the effects of inertia.The results obtained so far are very close to those obtained in the former work of Goenka et al. [16] and Bonneau et al. [17], despite the assumption of the short journal bearing introduced into the analytical calculation.The contact in the journal bearing, the elastohydrody-namic machine part subjected to very harsh operating conditions, which is conducting the minimum reducing the density of the lubricant film which could be premature wear of the contact.In this situation, the load is applied on the rod journal bearing while yielding strong stresses.Despite the short journal assumption considered in the present case, the results appear to be fairly close to those of Goenka [16] and Bonneau [17].
Conclusion
Analytical approach was used in the purpose of predicting the deformations of a connecting-rod bearing subjected to inertia and hydrodynamic pressure effects.The results show that materials with low density exhibit important deformations while allowing for possible contacts, between surfaces.This behaviour seems to be significant at moderate (4000 rpm) and high regimes.Accounting for inertia effects, the results have confirmed the contribution of in the global deformation, particularly for the behaviour discrepancy with and without inertia effects.The curve accounting for inertia exhibits lower values than that without the inertia effects.Consequently, the effects induced by inertia are relatively constraining on journal deformation at high-speeds.tas analitinis minimalaus tepalo plėvelės storio skaičiavimo metodas.Jame įvertintas tampriųjų deformacijų poveikis atsirandantis dėl hidrodinaminio slėgio ir inercijos.Tampriosios deformacijos paskaičiuotos analiniu ir skaitiniu metodu.Jungiamosios traukės-guolio deformacijos buvo įvertintos esant vidutinėms ir mažoms variklio apkrovoms.Analizė atlikta įvertinus inercijos efekto įtaką guolio funkcionavimui stipriai skyrėsi nuo įprastinės.Metodas buvo pritaikytas General Motors variklio jungiamosios traukėsguolio analizei.
A. Benhamou, A. Bounif, P. Maspeyrot, C. Mansour AN ANALYTICAL APPROACH TOWARDS EHD ANALYSIS OF CONNECTING-ROD BEARING S u m m a r y Elastohydrodynamic (EHD) journal bearings in rotating machine parts are subjected to very severe operat-ing conditions, leading to a reduction in the minimal thickness of the lubricating film which can generate premature wear of the contact.In this work, an analytical method of calculating the minimal thickness of the oil film was proposed.It takes into account the effect of the elastic deformations due to both hydrodynamic pressure and inertia.The elastic deformations are calculated analytically and simultaneously.The connectingrod bearing body deformations was predicted under low and moderate engine regimes.The correction produced by the introduction of the effect of inertia, have shown a clear difference for the global behaviour.The method was applied for analysis of a General Motors connectingrod bearing engine.
Fig. 3
Fig. 3 Oil film thickness distribution without inertia effectsTable 2 Characteristics of the materials connecting-rod bearing materials
Fig. 9
Fig. 9 Material effects on the minimum oil film thickness without inertia effect
Table 1
Characteristics of the connecting-rod bearing | 3,533.6 | 2014-04-14T00:00:00.000 | [
"Engineering",
"Materials Science"
] |
Parallel Processing of Economic Programs , a New Strategy in Groups of Firms
In recent years parallel and distributed systems have become increasingly attractive for applications with high computational demands such as simulation of complex systems from groups of companies. The main advantage of such systems is the ratio, rather than attractive, between the price and performance that can be achieved. In the present paper, authors describe some possibilities of parallel processing at the level of economic programs in groups of firms. The architecture, model and future development are shown below. This paper is an extended version of the paper presented at International Conference on Informatics in Economy (IE 2013), Bucharest, Romania
Introduction
Having a quite complicated organizational structure, behavioral flexibility and lack of bureaucracypresent in all the sectors of the industry, commerce and services, groups of firms easily adapt to the constantly changing economic and social conditions.Modeling of group of companies economic systems containing hundreds of market actors (companies) who apply management strategies common to group or individually (at company level) can be a difficult task.During the last years, had been remarked two classes of individual strategies: one based on envy 'comp benefit "(companies compare their benefits with those of competitors and copy competitors with higher profit side, well known in the literature) and maximization strategies "max-benefit" (company calculates benefits obtained for increasing, decreasing, or maintaining selling prices and take a decision as to maximize its profits without regard for other companies).
Fig. 1. Mother-firm management and subsidiaries management
The necessity of passing, on a large scale, from the mother-firm management to the subsidiary management required passing from sequential programming to parallel processing, in groups, having at least the following basic motivations: No matter the future performance of one processor, it is impossible to have an unlimited increase of execution capacities and efficiency of the uniprocessor systems; [4], [5], [7]).
As compared to other problems whose solutions were found more quickly, parallel programming has not yet managed to assert itself as a general method of design, implementation and execution of algorithms, at the level of groups of firms.Solutions for problems arising from a completely new approach have proved to be difficult to find, at the level of theoretical substantiation, but especially at the level of mentalities in the developers community.
The fact that the broad IT community is learning and thoroughly studying an algorithmic approach that is inherently sequential used at the level of imperative programming languages (actually also deriving from the inherent sequential nature of management) makes this compulsory transition become extremely complicated.
Research Methodology
A parallel system can be used to describe groups of firms indicating the firms central of the group, that coordinates the function of strategy and eventually influence the company has focused on companies in the group.
A parallel system can be used to describe how they are affected by price fluctuations on where a company is located, competition neighbors, the purchasing power of the customers in that area, etc. Taking groups of firms as practical example, we identify two practical ways of passing from group management to subsidiary management (referring to the passage from sequential programming to parallel programming): 1).Designing algorithms conceived by parallel approach that would then be implemented at the level of some programming languages designed for such execution; 2).Elaboration of specialized software for the automatic transformation of sequential programs in efficient parallel versions.For the current stage, only the level of technological development of hardware equipment could contribute to the rapid finalization of such a transition (despite the lack of architectural standards for parallel computers).Unfortunately, the smart and especially correct management of the resources involved at the level of an algorithm proves to be quite difficult when trying a parallel approach of algorithm elaboration.This is why it remains more as a research domain, rather than a well established and universally accepted practical methodology.For these reasons, a large part of the current research, oriented towards the parallel processing in groups of firms is directed towards the development and analysis of theoretical models that would allow the substantiation of certain general constructive principles, as well as the efficient implementation of restructuring compilers (translators which restructure a sequential program for the purpose of parallel execution).The development of algorithms for groups of firms using the parallel approach will certainly remain, in time, the only programming methodology able to justify and accomplish the computer science development, in terms of software.
Manifestation of Parallel Processing Possibilities, at the Level of a Program in Groups of Firms
The main feature of imperative languages is the reflection of the von Neumann architecture at the level of language constructions.Such a paradigm is focused on the assignment instruction and provides programs whose effect can be described as a sequence of transformations of the memory cells values on which the program acts.
The complete programming methodology used in the last decades by the programmers' community emphasizes the following description of the algorithm design activity: any algorithm is nothing more than a sequence of assignment instructions that a programmer imposes to certain memory locations through control structures available through the programming language that they work with.It is worth noticing the necessity of sequencing, imposed by the intellectual activity of elaborating algorithms according to the current methodology.On the other hand, the varied nature of the real problems that computer science is expected to solve often makes algorithmic sequencing become obviously forced in relation to a natural description of a solution for that problem, a solution that most of the times contains activities that can be and should be executed simultaneously.One of the main reasons for using the computer is the repetition of a certain sequence of instructions for a large amount of data.Parallel processing becomes an essential factor, the only possible one, for obtaining the desired performance, to the extent that the data features allow it.Imperative languages are mainly oriented towards scientific calculations involving large amounts of data.The problems of programs are not much related to their length (requirements of internal and external memory are mostly already solved, from the technological point of view) but to generally large amount of time required for their execution.Thus, the identification of parallelizing possibilities at the level of a sequential program becomes of maximum importance, as a first step towards obtaining its parallel version.It is obvious (experimental studies prove it) that the most of the time required for a sequential execution (the classical estimation is around 90%!) [6] is consumed during the iterations.Thus the iteration loops become the main candidates for parallelization.
Hierarchically structured machines [3] currently represent the dominant configuration of hardware elements in computer systems.The feature of these systems is the Non-Uniform Memory Access time, which is why they are called NUMA machines.sNon-uniform access is reflected in relation to the major difference between the time required for a processor to access data in the local memory associated to it and the time required to access data located at a distance (most probably through communication methods based on message exchange).The execution itself, of a parallel program on a particular system in the group, requires partitioning, distribution (also called mapping) and planning of data and calculations at the level of the nodes in a network of processors.The creation of an application that would accomplish this optimally, in an automatic way, and independently from the particularities of the computer system is considered to be an NP-complete problem [9][2].In the absence of an automatic support which can accomplish this task, the optimal partitioning of calculations and data remains, in many cases, the responsibility of the developer, who manually performs it at the level of the program source code, using a specialized description language (as in the case of the Occam language, for example [8]).In order to emphasize the utility of equivalent transformations at the level of loops, let us consider a distributed system (network of processors with local memory) where the following program sequence is executed: end for end for where A is an m n matrix, B is an n-vector and n m.For the example above, let us assume that we make a mapping of the calculations, so that each instance of the outer loop is executed at the level of one processor (thus we must have at least m available processors); in this case, each processor will sequentially execute the corresponding iterations of j from the previous loop.Thus, the k processor will execute all the iterations with i = k.Regarding the data mapping, let us assume that we are making a distribution in order for processor k to store the k line of the matrix in its local memory (the notation of the line is A[k, * ]) and the element B[j] is stored in the processor's memory (j mod (m+1)).In such a situation (figure 2) the k processor will execute n-k+1 iterations, but at least (n-n/(m+1) ) elements of vector B must be brought from the processors where they have been distributed.In addition to this, each k processor must bring line A[k-1, * ] from the (k-1) processor.
Fig. 2. An example of allocation
The thin lines correspond to moving the elements of vector B and the thick dotted ones, correspond to movements of elements of A. Let us now take an example that is semantically equivalent to the sequence above, in which we assume that n processors are available:
Assuming that processor k executes all the iterations with j = k and stores element B[k]
and column A[ * ,k] in its local memory, then processor k will execute min(k,m)+1 iterations.Element B[k] is never changed, and can be stored even at the level of an available register.However, the most important change is the presence of all elements of A accessed by the processor at the level of its local memory (see figure 3).The two analyzed mappings use m and n processors, respectively.
Since n m, the second mapping achieves a greater degree of parallelism as compared to the first case, but the greatest benefit is that all the data required for the calculations are locally stored and thus it is not necessary to perform any data transfer operation.This makes operations be independent, allowing them to execute simultaneously.As opposed to this, the first version involves the necessity of many data exchanges among the processors, which considerably increase the execution time.The second mapping also displays a better static localization, even perfect in this case.The elements of vector B can be stored in the cache memory or in an available register, to be used in each iteration, thus also obtaining a better dynamic localization.
In the first version, the access requests for the value of the same B[k] came from more processors.In both cases analyzed above, processors do not perform the same number of iterations, which leads to a variation of the computational load, at processors level.If the variation of this load is too big, there is a negative effect on the performance of the computer system.Considering the two cases discussed, the second mapping has a smaller variation, thus displaying a greater degree of balanced processor load (IEP).
Fig. 3. An optimal allocation
These two examples show us that different structures of nested loops, semantically equivalent, can represent very different execution times, depending on the degrees of parallelism, localization and IEP displayed.The transformation of a nested loop structure (SCI) in a semantically equivalent SCI, displaying possibilities of parallel execution is the main purpose of a restructuring translator and at the same time it is one of the main analysis and research objectives in groups of firms.The theoretical basis of the transformation methodology of the SCI, is the data dependency mathematical concept.Data dependencies are a measure of parallelism that can be highlighted in the source code.Therefore, their accurate determination is of paramount importance for effective parallelization.Unfortunately, analysis of data dependencies is in the general case determining undecidable problem.For example, in the code sequence: (in) dependence of the two references to elements of array A depends on the value of n which is not unknown at compiling time.That means that in the management of the motherfirm, for each subsidiary we can consider the dependence between all i-n companies (see figure 4).
Fig. 4. The dependence between i-n subsidiaries
This is necessary to implement dynamic analysis methods of data dependencies.Even if we limit ourselves only to the static aspect of the problem, complications persist.Let us consider for example the following code, we want to analyze whether the array indexing expressions V will denote the same element or not: We must solve the equation given of the condition of equality of the corresponding index expressions.The problem of data dependences in this case would mean neither more nor less than the problem of demonstrating Fermat's great theorem (the existence of three positive integers a, b, c> 0 which satisfy the equation, a n = b n + c n , nN ) theorem which until now was neither demonstrated nor contradicted (although was verified the absence of such numbers verified to very high values, the problem is that we don't have another methodology for proving such theorems than just checking!).Also, the classical un-decidable problem of stopping a program (halting problem) can be formulated in such a framework of determining dependencies in a program.For these reasons, the approach of the analysis of data dependencies is doing only in a relatively small area, namely the affine index expressions (ie those of the form ax + b, expressions linear plus a constant) which occur in the majority of situations in practice.The problem formulated in such a framework is decidable, but obtaining accurate solutions can be a very costly process.Therefore, if you cannot find the exact solution or finding it too expensive, it is assumed conservatively dependence.On the other hand, the restricting of the analysis of affine equations, followed automatically by data dependencies conservative assumption in all other cases, we can lose a large amount of potential parallelism.For example, in sequence: is not known value of n, we do not only make affine index expressions (ie data analysis will conservatively assumed dependence), but it is obvious that the equation i 2 = 3 has no solution in the set of integers and hence the dependence between these two references for array there.The emergence of expressions un-affine therefore means loss of information, but not always necessarily forced to assume dependence.For example, if other dimensions of the array elements involved are affine, we can use these to demonstrate (in) dependence: The first dimension contains un-affine term, but the terms of the second dimensions are both affine.Data Dependency would require the simultaneous satisfaction of the equations representing the identity of reference in the two dimensions.Analyzing only what the second dimension but it appears immediately that equation 2i = 2i +1 has no solutions, so we do not have data dependencies.As architectures become more complex, significantly increasing the number of directions of optimization and decision Mother -firm S 1 S 2 S n-i making relative to the range of transformations applied is very complicated.The problem of choosing an optimal sequence of transformations leading to the most efficient parallel version remains an open question.Relative to this, compilers moment only managed to incorporate a set of heuristic decision.Before attempting to generate an optimal sequence of loop transformations, it is natural to ask whether in the general case, a program always supports optimal planning scheme execution.
Conclusions
In mathematics, computer science or management, mathematical optimization (alternatively, optimization or mathematical programming) is the selection of a best element (with regard to some criteria) from some set of available alternatives.In the simplest case, an optimization problem consists of maximizing or minimizing a real function by systematically choosing input values from within an allowed set and computing the value of the function.The generalization of optimization theory and techniques to other formulations comprises a large area of applied mathematics.More generally, optimization includes finding "best available" values of some objective function given a defined domain, including a variety of different types of objective functions and different types of domains.As machines become more complex, the number of optimization directions increases significantly, and the decision making process related to the transformations to be applied becomes very complicated.The problem of choosing an optimal sequence of transformations that would lead to the most efficient parallel version remains an open one.Regarding this aspect, the current compilers only manage to embed a certain set of heuristic decisions in the activity of the groups of firms.This paper has attempted to connect the management group of companies and parallelization algorithms and proposes a parallelization of their activity.
for j := 1
to n do for i := 1 to min(j,m) do A[i,j] := A[i-1,j] + B[j]; end for end for read(n); for i := 11 to 20 do A[i] := A[i-n] + 3; end for | 3,907.4 | 2014-06-30T00:00:00.000 | [
"Economics"
] |
Characteristics of Layered Gels Formation by Additive Technologies
In order to study the properties of the different density silica gels optical methods were used. It is shown that layers of gels spatially inhomogeneous and anisotropic, their optical density are significantly reduced near the free surface. It is found that on the surface of thin layers of silica gel there are convective structures. Features of multi-layered gel structures with different densities were studied. It is found that after the gels forming layers have poor adhesion to each other. This is due to the fact that the volume between different layers of gels filled with liquid. The presence of liquid in the interlayer volume is observed visually and confirmed by the light scattering of the laser beam. The thicknesses of the liquid boundaries between the layers of the gels were estimated.
Introduction
Gel is a dispersive system with liquid dispersing medium, and the dispersion phase makes up a spatial structured mesh due to intermolecular interaction in the contact sites.Gels are micro-or nanostructured media whose internal structure depends on the material and on the method of preparation.The characteristics of gel can change due to the internal processes [1,2].
The promising application of gels is the use in regenerative medicine for growth of biological tissues, including the problem of culturing stem cells in vitro [3,4].This capillary network is intended for the delivery of nutrients to individual cells, and to remove products from the metabolism.It seems promising the idea that gels can used as a carrier during the formation of ordered bio-structured elements (based on the 3D printing) for additive medical technology.However, there are a number of factors that influence diffusion transport in gels and change mass-transfer laws [5,6].
A large number of studies concerning the synthesis of different chemical nature gels and the study of its properties have been published.However, for use in technology gels have many common properties.For example, the most important properties that define the processes of mass transfer in the gels are nonstationarity and anisotropy due to the structure and behavior of the transfer medium.
For experimental study we used gels that are based on silicates.Such gels are well investigated (see, for example [7]), simply prepared and not toxic.Such gels are appropriate for additive technologies used for the creation of skeleton matrix (retaining biologically active gels, but transmissive to liquid nutrient) of bioreactors by 3D-printing.Gels of different density were investigated.The stock solution of sodium silicate was diluted with distilled water to the levels of density 1.04 -1.08 g/cm 3 .The gel-generating mixture was prepared by adding the hydrochloric acid.
The aim of the investigation is to study of the fundamental properties of the gels layers by optical methods (for example, silica gels different density) in relation to its using in additive 3D technologies.It is important to obtain data on the properties of multilayer gels structures and to determine the effects on the properties of the boundary layers between the gels.
Nonuniformity and anisotropy of layered gel
For the investigation of the anisotropic properties of gel the spectroscopic method was used.A container for a gel has produced from special quartz glass.The experimental setup provides the instantaneous registration in spectral range of 200 -1100 nm.The optical resolution is in average 1.0 nm.The spatial resolution is 1 mm.This setup allows us to investigate the intensity of the transmitted light through the gel as a function of the distance from the top surface.Thickness of wetting meniscus is about 1.5 mm.For this reason, measurements were made starting at a depth of 2 mm.
The intensity of the light passing through the gel, referred to the value corresponding to the transmittance intensity of the optical system without gel.Measurement processed for light wavelength of 500 nm, that corresponds to the minimum transmittance of the gels.The results of the measurements for gels with two different initial densities are shown in Fig. 1.From the experimental data it can be concluded that gels of both densities are nonuniform in their thickness.The density of the surface layers of the gel can be less than their density inside the gel.The explanation of this fact, that describes the temporal dynamics of the process of syneresis, is the slow deposition of a microstructured phase in a liquid under the forces of gravity.The deposition rate is determined by the rate of filtration of the liquid phase from the lower layers of gel to the top.
The formation of silica gel from liquid media requires a few minutes and it is associated with chemical changes.As a result a structured phase is occurred; gel has different properties from original liquids.It is found that on the surface of thin layers of silica gel there are convective structures (Fig. 2).These structures make more complicated to depose the upper layers of the gel at bioprinting.
Specific properties of multilayer gels
In order to create 3D additive technologies using gels, there is a need to check whether the properties of additivity are valid for multilayered gels.For this purpose it is required to determine which properties of multilayer gels differ from the homogeneous gel of the same chemical composition.Gels are very promising for additive technologies, since they are easily form matrixes of different shapes and densities (Fig. 3).It is possible to apply a denser layer of gel on the less dense without any mixing corresponding to liquid media.Such bilayer systems are hydrodynamic stable.For layered gel structures presented on pictures the border between the layers (pointed by arrow) is clearly visible.A comparison of the spectrum of single-layered and double-layered gel structures of the same thickness did not show differences in the intensity of transmitted light, either in the form of spectrum.However, the presence of interphase boundaries may have an impact on the technological properties of gels.In Fig. 4 shows the mechanical properties of the gels after drying.On the left there is the initial lattice structure created from the fully formed gel, on the right -inadvertent mechanical disruption of this structure after the day as a result of the drying of the gel.Breaks the continuity of the patterns observed in the contact layers.This indicates poor adhesion between the layers of the gel.Important factor complicating the use of additive technologies to gels is their spontaneous densification.As a result of the pressure of the upper layers of the liquid is squeezed out of the lower layers forming the system drops on the perimeter of the upper layer (Fig. 5).Such a drop will obstruct the uniform application of the subsequent layers of the gel.
To make sure that between layers of the gel there is a thin layer of fluid, a study of impact of passing the green laser with wavelength 532 nm through it was performed.The main results are shown in Fig. 6.The passage of the beam through a homogeneous gel is uniform in all directions scattering of light, which fully corresponds to the scattering of light on the microinhomogeneities smaller wavelengths of light.When the ray go along the boundary layers of the gels it is clearly visible a lateral dispersion that occurs in a planar waveguide.This phenomenon is caused by the multiple reflections of light on the surfaces of the gels formed, that are limited by layer of liquid.In order to estimate the thickness of the liquid layer between the gels a photographic method was used.Since the space between layers of the gel is a thin capillary, for receiving the contrast photos the inert colored dye was used that is well identified by spectral processing photo.The estimation showed that the thickness of the layer between gels is at least 0.5 mm and increases with aging of the gel.
Conclusions
Layers of gels are not uniform in thickness.The density of gels decreases towards the free surface.This is true for gels of different density.On the free surface of the gel during its formation, the occurrence of spontaneous interfacial convection is possible.
Multilayered gels are different in properties from a single layered of the same thickness.Between the layers of gel a thin fluid-filled layer is formed.This layer reduces adhesion between the layers of gel, but can significantly increase mass transfer of gel matrix.These features of layered gel structures must be taken into account when using such materials in additive 3D printing technology.
Figure 1 .
Figure 1.Intensities of transmitted light I (relat.units) vs. distance from the top surface of the gel layer h (mm) for different density: 1 -1.08 g/cm 3 ; 2 -1.04 g/cm 3 .Measurements were carried out 1 hour after gel formation.
Figure 2 .
Figure 2. Photo of the convective structures appearing on the surface of a thin layer of silica gel during its formation.
Figure 3 .
Figure 3. Photos of silica gel two-layer systems of different densities: a -both layers with density of 1.04 g/cm 3 , b -both layers with 1.08 g/cm 3 , c -the upper layer has the density of 1.08 g/cm 3 ; the lower has the density 1.04 g/cm 3 .The arrow indicated the separation surface between these two layers.
Figure 4 .
Figure 4. Photo of gel grating (left) on Petri dish, the same grating after drying of gel for 24 hours (right).
Figure 5 .
Figure 5. Photo of drops formed along the perimeter of upper layer during the formation gel two-layer matrix.
Figure 6 .
Figure 6.Photo of characteristic scattering of laser beam of 532 nm passing through a homogeneous gel (left), the characteristic scattering of the laser beam passing through the boundary between the gels layers (right). | 2,202.2 | 2016-01-01T00:00:00.000 | [
"Materials Science",
"Physics"
] |
3-D Geological modelling : a siliciclastic reservoir case study from Campos Basin , Brazil
Reservoir static modelling plays a fundamental role in the evaluation phase of a petroleum field. Integrated modelling allows a better understanding of how the local geology and depositional systems are related through the distribution of facies and petrophysical properties within the reservoir. In this study, geological static models of the siliciclastic Carapebus Formation of Campos Basin were built using subsurface data. The applied methodology was divided into five phases: (1) establishment of a conceptual model, (2) building of a structural model, (3) generation of 100 realizations of lithofacies using sequential indicator simulation, (4) generation of 100 realizations of porosity and permeability using sequential Gaussian simulation, and (5) validation of models by targeting both statistical and geological consistency. The obtained models are consistent and honor the conditioning data. A lithofacies constraint is crucial to better characterize the petrophysical properties distribution of the reservoir. A Dykstra-Parsons coefficient of V=0.52 characterizes this reservoir as moderately homogeneous.
Introduction
The geological models, often called reservoir static models, play an essential role in the understanding of intrinsic spatial characteristics and features of the reservoirs.These models have been important to predict reservoir performance because they incorporate several information, such as static petrophysical properties within the stratigraphic layers and structural framework.Static models are also used to predict inter-well distributions of relevant properties, such as porosity and permeability.Furthermore, static reservoir models are improved through an iterative process in order to better quantify and assess uncertainty.Cosentino (2001) and Deutsch (2002) point out that it is necessary to create consistent 3-D geological models.
Additionally, these authors suggest the application of geostatistical algorithms of sequential simulation, such as Gaussian and indicator to conduct the modelling process.Whereas sequential indicator simulation is suitable for categorical variables, the sequential Gaussian simulation is applied to continuous variables.Ravenne (2002) and Remacre et al. (2008) emphasize that proportion curves are powerful tools for reservoir characterization, because they provide a better understanding on how the facies proportions vary within and between the wells with respect to a stratigraphic datum.Furthermore, these authors highlight that petrophysical properties should be constrained to facies proportions in order to get a better estimate of the petrophysical distributions.Dykstra and Parsons (1950) developed a criterion for quantifying reservoir heterogeneity based on the permeability distribution and the well-known coefficient of variation.Dykstra-Parsons coefficient takes values between 0 and 1 and, for most reservoirs this coefficient ranges between 0.5 and 0.9, from homogenous to heterogeneous.
Understanding the heterogeneities of the main reservoirs of the Campos Basin (Brazil), far from being a mature basin, has still been a challenge.
This study aims at building a 3-D geological model of the turbidite reservoir of the Carapebus Formation in the Campos Basin, in order to better understand and characterize reservoir heterogeneities.
Campos Basin setting
Campos Basin lies between the updip limits of the turbidites to the west, the Vitoria-Trindade Arch to the north, the Cabo Frio Arch to the south, and the boundary of the salt diapir region at water depths of ~2,200 m to the east.The basin is approximately 500 km long and 150 km wide encompassing nearly 120,000 km 2 of the Brazilian southeast offshore.It reaches up to 3,400 m of water depth.
Based on a compilation of previous works on Campos Basin, Winter et al. (2007) divide its stratigraphic evolution into five main chronostratigraphic megasequences, from early Cretaceous to Tertiary, as follows: (1) continental megasequence with three sin-rift phases where the first and the second are characterized by eolian and alluvial fan deposits, and the third is characterized by extensive strata of coquinas, calcarenites, calcilutites and shales; (2) evaporitic transitional megasequence with conglomerates and sandstones, followed by the formation of an evaporitic sea in the southern part of the basin; (3) shallow carbonate platform which developed most of the basin's shallow carbon-ates; (4) transgressive marine megasequence composed by calcilutites, marl and shales; this transgressive megasequence registered a gradual increase in depth recorded by the deposition of deep water turbidites; and (5) regressive marine megasequence where a modification in the sedimentation regime occurred, with the development of deltaic, fluvio-deltaic, terrigenous and carbonate platform depositional systems.
Carapebus reservoir description
According to Bruhn et al. (1998), the studied reservoir is part of the siliciclastic Carapebus Formation of Campos Group in Campos Basin.The age of the studied section of this reservoir ranges from Oligocene to Miocene.It was deposited by several turbiditic events in a regressive marine megasequence of deep water unconfined depositional systems (water depths between 1,000 to 2,000 m), defining unconfined sand lobes.In terms of grain size, medium to coarse sand are predominant.Porosity reaches 35% and permeability ranges roughly from 1,000 to 2,500 mD in the producing intervals, consisting of the most prolific turbidite reservoirs in the basin.Bruhn (2001) measured porosity and permeability of core samples which allowed distinguishing two deposits in the Carapebus Formation reservoir both from Oligocene-Miocene.The author obtained porosity and permeability values of (1) 32% and 574 mD for thicker and poorly sorted deposits, and of (2) 26% and 2,434 mD for thinner and moderately well sorted deposits.
Data set
The data set was provided by the Brazilian National Agency of Petroleum, Natural Gas and Biofuels (ANP), and was comprised of four wells and eight 2-D seismic lines (Figure 1).All wells included gamma ray, density, sonic and resistivity logs, and descriptions of cores from two wells were available.
Method
To conduct the 3-D geological modelling, the method proposed by Cosentino (2001) and Deutsch ( 2002) was applied using an academic license of Petrel 2013.1 software.It is a workflow where stratigraphy constrains the geostatistical grid geometry, which in turn supports the modelling (Figure 2).It comprises five phases: Phase 1: Development and establishment of a consistent conceptual model emphasizing the depositional system, in order to identify the main expected scales and heterogeneities.
Phase 2: Structural modelling considering the stratigraphic layering, in order to get a suitable geostatistical grid which incorporates well-known heterogeneities, and has an adequate size estimated based on the computational power available.Well formation tops and interpreted seismic surfaces are used to define the vertical boundaries of the layers, while fault structures define lateral boundaries.
Phase 3: Cell-based lithofacies modelling within each previously es-
Application
The ideal conceptual model used to guide the modelling phase was the unconfined sand lobes defined by Bruhn et al. (1998).
Top and base reservoir surfaces were interpreted from the available seismic data, based on concepts of seismic stratigraphy and geomorphology (see e.g.Posamentier & Kolla, 2003).Time-to-depth conversion was made using a simple layer cake velocity model built with well checkshots.
To establish lithofacies, a logic function was applied to gamma ray logs, distinguishing three main lithofacies: shale, sandstone and marl.These were derived from core samples which 12 well lithotypes were identified and then resampled to three representative lithofacies.
The log-derived porosity was calculated from the bulk density log based on Asquith & Gibson (1982) equation.For this calculation, it was considered that the siliciclastic rocks do not have significant shale content in their composition (lower than 15% within reservoir intervals).The used equation is defined as follows: tablished stratigraphic layer, using the sequential indicator simulation algorithm.This should be performed for a number of realizations to ensure stability of the variance of the models.This geostatistical simulation uses vertical proportion curves taking into account facies distributions along stratigraphic layers (e.g.Ravenne, 2002;Remacre et al., 2008).
Phase 4: Petrophysical modelling of porosity and permeability for each lithofacies realization, using the sequential Gaussian simulation algorithm.For the permeability modelling it might be necessary to develop a porosity-permeability transformation (Phi-K) from core data, in order to get a permeability log in uncored intervals.Therefore, this implies that permeability will follow the porosity distribution.
Phase 5: Validation of the models based on statistical and geological consistency.This validation is carried out by analyzing the statistical parameters of the simulated and original distributions and by comparing overall results to those available in the literature.
total porosity = matrix density -density log matrix density -fluid density where, for this study, matrix density = 2.644 g/cm 3 and fluid density = 1.025 g/cm 3 .
The log-derived permeability curve was fitted using core data and it reads as follows: log permeability (mD) = 4.09121 (porosity) −1.30056 where, for this study, a correlation coefficient of r = 0.82 was obtained.
The constructed geostatistical grid is a corner point with 2.5 million cells, defined by 400 x 400 m in the horizontal directions and 1.5 m in the vertical direction, with 123x121 cells and 170 cell layers.
It was required to upscale the modelled properties to the grid resolution using averaging methods: (1) for lithofacies, the "most of" was applied, (2) for the log-derived porosity, the arithmetic mean was applied, and ( 3) for the log-derived permeability, the geometric mean was applied (see e.g.Durlofsky, 2005).
A structural analysis was carried out with the aid of experimental variograms.These variograms were computed along two horizontal directions, NW and NE and the vertical direction, as they are well-known as major Campos Basin's turbidite deposition (Figure 3).It is possible to verify the degree of uncertainty in variograms owed to the lack of data.The spherical model was adopted for both lithofacies and petrophysical modelling, even if the proximity of data points to the origin was not linear.It was possible to infer that for the vertical direction, both properties are characterized by a high variability, and the data points of experimental variograms are distantly spaced.Table 1 shows the variogram parameters for the fitted models of lithofacies and petrophysical properties.Table 1 Variogram model parameters for lithofacies and petrophysical properties.
For the lithofacies modelling, 100 realizations of the sequential indicator simulation algorithm were performed.For the petrophysical models, 100 realizations of the sequential Gaussian simulation were performed, conditioned by the lithofacies models.Permeability was co-simulated with porosity as a secondary property by applying collocated co-kriging.A locally varying correlation coefficient from poros-ity was used, assigning one value for each grid cell.
The global vertical proportion curve includes high percentages of marl, but low percentages of sandstone and shale.Emphasis can be given to the top and central layers (10 to 50 and 80 to 120), defined by 50% to 70% of sandstone facies.These heterogeneities were characterized by a total of 170 stratigraphic layers with thick-ness of 1.5 m each, in order to capture the vertical variation of the well logs (Figure 4).
According to Bruhn et al. (1998) and Machado et al. (2004), lobes deposited over several time events are a common feature of this reservoir in proximal areas.
The heterogeneity of this reservoir was quantified by calculating the Dykstra-Parsons coefficient of permeability variation, as follows: where Log(K) 50 -permeability mean; Log(K) 84.1 -mean plus one standard deviation.
Results and discussion
One example of equiprobable realization of the obtained lithofacies and petrophysical models is shown in Figures 5 to 7. The facies proportions obtained in the lithofacies models were in accordance with the global vertical proportion curve.The simulated proportion for the shale was 14%, for the sandstone was 26% and for the marl was 60% (Table 2).
The value of log-derived porosity averages 25% and ranges from 0% to 40%.The simulated value averages 25% and ranges from 10% to 36% (Table 3).Core-measured permeability ranges from 107 to 864 mD, the simulated log ranges from 51 to 610 mD and the geometric mean is 271 mD (Table 4).
Both porosity and permeability are characterized by a bi-modal distribution.These bi-modal distributions helped to distinguish between reservoir and nonreservoir potential facies.By having bimodal distributions, careful analysis of the descriptive statistics is required.Differences between original and simulated distributions are presented in Figures 8 and 9.These differences are caused by the employed upscaling procedure and by the spatial density of the data.
For the proportions of the lithofacies models, there were differences of -9.2% for shale, +0.8% for sandstone, and +2.0% for marl.For the mean of the porosity models, there was a difference of -1.2%.For the mean of the permeability models, there was a difference of +1.5%.
It is possible to assess the bi-modal distributions of the petrophysical parameters by considering different sources of the turbidite depositional system, which comprises at least five different main features.According to Bruhn (2001) those features are: (1) complex external geometry, (2) fine-muddy interbedded deposits, (3) discontinuous calcitic concretions, (4) cemented sandstones and conglomerates, and (5) porosity and permeability variation with deposits grain size and sort.It was possible to compare the porosity and permeability results with those achieved by Bruhn (2001) in core samples of Oligocene-Miocene.Comparing these results with the thicker deposits presented in Bruhn (2001), the porosity obtained in this study is 22% lower and the permeability is 6% higher.Comparing with the thinner deposits, the porosity obtained is 5% lower and the permeability is 74% lower.
Dykstra-Parsons coefficient of permeability variation for sandstone lithofacies of the studied reservoir is V=0.52.This coefficient points to a moderately homogeneous reservoir.The obtained coefficient is equal to that calculated by Dutton et al. (2003) for a deep water siliciclastic case study from Ramsey reservoir in the East Ford field.Moreover, this coefficient is within the range of most reservoirs.
The results of several studies show that these turbidite deposits have different types and can be fairly complex.In addition, these reservoirs are distinguished based on their grain size, net-to-gross ratio, external geometry, processes and depositional settings (Bruhn et al., 2003;Machado et al., 2004).
This study has two main limitations: (1) the low data density and the data distribution, which affected directly the variogram modelling, and (2) Phi-K transformation, knowing that the permeability does not follow the porosity distribution.
As pointed by Chambers et al. (2000), in a situation with lacking data, the parameters chosen in the sequential indicator simulation and sequential Gaussian simulation algorithms certainly impact the results.
Despite these limitations, the histograms of the simulated values reproduce those of the original values within an acceptable error level.Therefore, by applying the Cosentino (2001) and Deutsch ( 2002) methodology for integrated reservoir studies it was possible to infer the relationships between geology and depositional systems of this reservoir.
The observed differences between simulated and original distributions might be straightforwardly assigned to the sparse data set used to perform the geological modelling in this study.Additionally, these differences may also be assigned to the change-of-support effect, produced by distinct size support of data set and model grid; and the smoothing character of the kriging algorithm.
Conclusions
The main conclusions of this study are: 1.The obtained models reveal geological and statistical consistency, with tolerable statistical variations.
2. The geostatistical model grid plays a major role in defining the vertical layering of the global vertical proportion curve.
3. Lithofacies models show minor differences between original and simulated proportions (lower than 10%).4. Petrophysical models conditioned to lithofacies presents very low differences with respect to the original distributions (1.2% for porosity and approximately 1.5% for permeability).
5. Lithofacies constraints are essen-tial to an improved characterization of the distribution of petrophysical properties in the reservoir.6. Phi-K transformation outlines the permeability models statistical outputs because it assumes a linear correlation with the porosity distribution.
Generating lithofacies models is
Figure 1
Figure 1 Data set location: study area is limited by the polygon (black), 2D seismic lines (grey), and wells (black symbols).
Figure 4Global vertical proportion curve of lithofacies. | 3,532.2 | 2016-12-01T00:00:00.000 | [
"Geology"
] |
Probing scattering mechanisms with symmetric quantum cascade lasers
A unique feature of quantum cascade lasers is their unipolar transport. We exploit this feature and designed nominally symmetric active regions for terahertz quantum cascade lasers, which should yield equal performance with either bias polarity. However, symmetric devices realized in the InGaAs/GaAsSb material system exhibit a strongly polaritydependent performance due to different elastic scattering rates, which makes them an ideal tool to study their importance. In the case of an InGaAs/GaAsSb heterostructure, the enhanced interface asymmetry can even lead to undirectionally working devices, although the nominal band structure is symmetric. The results are also a direct experimental proof that interface roughness scattering has a major impact on transport/lasing performance. 2012 Optical Society of America OCIS codes: (000.0000) General; (000.2700) General science. References and links 1. B. Williams, “Terahertz quantum-cascade lasers,” Nat. Photonics 1, 517 (2007). 2. F. Capasso, “High-performance midinfrared quantum cascade lasers”, Opt. Eng. 49, 111102 (2010). 3. C. Gmachl, A. Tredicucci, D. L. Sivco, A. L. Hutchinson, F. Capasso, and A. Y. Cho, “Bidirectional semiconductor laser”, Science 286, 749 (1999). 4. L. Lever, N. M. Hinchcliffe, S. P. Khanna, P. Dean, Z. Ikonic, C. A. Evans, A. G. Davies, P. Harrison, E. H. Linfield, and R. W. Kelsall, “Terahertz ambipolar dual-wavelength quantum cascade laser”, Opt. Express 17, 19926 (2009). 5. T. Kubis, C. Yeh, P. Vogl, A. Benz, G. Fasching, and C. Deutsch, “Theory of nonequilibrium quantum transport and energy dissipation in terahertz quantum cascade lasers”, Phys. Rev. B 79, 195323 (2009). 6. R. Nelander and A. Wacker, “Temperature dependence of gain profile for terahertz quantum cascade lasers”, Appl. Phys. Lett. 92, 081102 (2008). 7. C. Jirauschek and Paolo Lugli, “Monte-Carlo-based spectral gain analysis for terahertz quantum cascade lasers”, J. Appl. Phys. 105, 123102 (2009). 8. A. Bismuto, R. Terazzi, M. Beck, and J. Faist, “Influence of the growth temperature on the performance of strain-balanced quantum cascade lasers”, Appl. Phys. Lett. 98, 091105 (2011). 9. Y. Chiu, Y. Dikmelik, P. Q. Liu, N. L. Aung, J. B. Khurgin, and C. F. Gmachl, “Importance of interface roughness induced intersubband scattering in mid-infrared quantum cascade lasers”, Appl. Phys. Lett. 101, 171117 (2012). 10. M. P. Semtsiv, Y. Flores, M. Chashnikova, G. Monastyrskyi, and W. T. Masselink, “Low-threshold intersubband laser based on interface-scattering-rate engineering”, Appl. Phys. Lett. 100, 163502 (2012). 11. P. Q. Liu, A. J. Hoffman, M. D. Escarra, K. J. Franz, J. B. Khurgin, Y. Dikmelik, X. Wang, J.-Y. Fan, and C. F. Gmachl, “Highly power-efficient quantum cascade lasers”, Nat. Photonics 4, 95 (2010). 12. S. Fathololoumi, E. Dupont, C. W. I. Chan, Z. R. Wasilewski, S. R. Laframboise, D. Ban, A. Mátyás, C. Jirauschek, Q. Hu, and H. C. Liu, “Terahertz quantum cascade lasers operating up to 200 K with optimized oscillator strength and improved injection tunneling”, Opt. Express 20, 3866 (2012). 13. H. Luo, S. R. Laframboise, Z. R. Wasilewski, G. C. Aers, H. C. Liu, “Terahertz quantum-cascade lasers based on a three-well active module”, Appl. Phys. Lett. 90, 041112 (2007). 14. C. Deutsch, A. Benz, H. Detz, P. Klang, M. Nobile, A. M. Andrews, W. Schrenk, T. Kubis, P. Vogl, G. Strasser, and K. Unterrainer, “Terahertz quantum cascade lasers based on type II InGaAs/GaAsSb/InP”, Appl. Phys. Lett. 97, 261110 (2010). 15. M. Nobile, H. Detz, E. Mujagic, A. M. Andrews, P. Klang, W. Schrenk, and G. Strasser, “Midinfrared intersubband absorption in InGaAs/GaAsSb multiple quantum wells”, Appl. Phys. Lett. 95, 041102 (2009). 16. R. M. Feenstra, D. A. Collins, D. Z.-Y. Ting, M. W. Wang, and T. C. McGill, “Interface roughness and asymmetry in InAs/GaSb superlattices studied by scanning tunneling microscopy”, Phys. Rev. Lett. 72, 2749 (1994). 17. T. Unuma, T. Takahashi, T. Noda, M. Yoshita, H. Sakaki, M. Baba, and H. Akiyama, “Effects of interface roughness and phonon scattering on intersubband absorption linewidth in a GaAs quantum well”, Appl. Phys. Lett. 78, 3448 (2001). 18. T. Ando, A. B. Fowler, and F. Stern, “Electronic properties of two-dimensional systems”, Rev. Mod. Phys. 54, 437 (1982). 19. C. Deutsch, APL published soon
Introduction
The emission wavelength of quantum cascade lasers (QCLs), currently ranging from the midinfrared to the terahertz (THz) spectral region (3-300 µm) [1,2], can be engineered by designing subband levels in a semiconductor heterostructure.This unique feature is accomplished by band structure engineering.A second unique feature of QCLs is their unipolarity, which allows the realization of bidirectional devices [3,4].In this work, we demonstrate how such bidirectional devices can be used to gain a deeper understanding of elastic scattering in a QCL.
The carrier transport in QCLs is still a controversially discussed subject and simulations are especially contradictory for active regions used in THz QCLs.While longitudinal optical (LO) phonon scattering is known to be crucial in mid-infrared and, at elevated temperatures, in THz QCLs, the influence of elastic scattering mechanisms is not well understood.Low temperature transport in THz QCLs is a complex interplay of coherent tunneling, elastic scattering and LO phonon scattering.However, which elastic scattering mechanisms are dominant and which are negligible is disputed.Theoretical modeling has been focusing on interface roughness, impurity and electron-electron scattering [5][6][7].Even for the very mature mid-infrared QCL technology, the recent focus of research into elastic scattering due to interface roughness gained considerable device improvements [8][9][10][11].THz QCL performance is expected to be even more susceptible to elastic scattering since the optical transition energy is well below the LO phonon energy.The knowledge of the origin and influence of scattering mechanisms is going to be important in the race for higher operating temperatures, as they decrease/broaden the optical gain.Currently, the only successful strategy towards higher operating temperatures is to increase the optical gain by careful design optimizations [12], avoiding elastic scattering could be the next step.The experimental challenge lies in the direct observation of the influence of a certain scattering mechanism in QCLs.For this purpose we exploit the unipolar carrier transport feature of QCLs.Combined with a high degree of freedom due to band structure engineering, one can design bidirectional dual wavelength or nominally symmetric active regions.An early work on mid-infrared QCLs was focused on dual wavelength operation with either bias polarity and a symmetric active region was merely demonstrated as a proof of principle [3].
Symmetric active regions
Motivated by nonequilibrium Green's function calculations performed on asymmetrically rough interfaces [13], we designed a series of three symmetric THz QCL active regions based on the three-well phonon depletion scheme in the InGaAs/GaAsSb material system [14,15].The band structures are modeled with an effective mass 1D Schrödinger solver and the material parameters used are from [16], Nobile et al.The active region of the initial design (sample 1), biased with either polarity, is shown in Fig. 1(a) and 1(c).Samples 2 and 3 are improved versions of sample 1 and the modifications are described in Section 4 and Fig. 2. In theory, the bias polarity makes no difference and should result in equal device performance.However, the two sides of the GaAsSb barrier are not symmetric due to the actual growth of the sample.This effect of asymmetric interface qualities is known to be pronounced in the 6.1 Å material family, when different group V materials like As and Sb are used in the growth of the semiconductor heterostructure [17].The same effect is also found in the presented material system, InGaAs/GaAsSb latticed matched to InP.A high-resolution cross-sectional TEM picture of the 3 nm GaAsSb barrier, shown in from the well to barrier material in the conduction band during growth (from InGaAs to GaAsSb in this case) and vice versa for the inverted.The interface asymmetry with respect to the growth direction is also sketched in the band structures, where the rough lines indicate the inverted interfaces.By applying a bias, the nominally symmetric band structure becomes asymmetric and depending on the polarity, the wave functions are either pushed to the rougher or smoother interfaces, i.e. the subband levels exhibit a different value of probability at the position of the two interfaces.The consequence is different scattering rates and lifetimes between the two operating directions.The interface roughness scattering rate has to be split into two parts, accounting for the difference in step height 1 2 , Δ Δ and correlation length 1 2 , Λ Λ between the normal and inverted interface (Fig. 1(b)) [9,18,19]: Here, m* is the effective electron mass, δU is the conduction band offset, 2 2 , i f ψ ψ are the probabilities of the initial and final subband at the position z inv , z norm of the respective interfaces and q if is the absolute value of the 2D scattering vector.The evaluation of the two sums in Eq. ( 1) for the upper to lower laser level scattering rate for positive bias and vice versa for negative (the sums are evaluated for sample 1 and at an electric field of 8.4 kV/cm).
Thus an interface asymmetry leads to different upper level lifetimes, τ 4 -and τ 4 + , for the two bias polarities.
Fabrication and experimental setup
The active region designs are grown with a Riber 32 on InP with a substrate temperature of 470-480°C.Both group V species are supplied from valved-cracker sources, which allow presetting the fluxes for reliable lattice-matching throughout the growth.Due to the large number of interfaces, switching is done immediately by shutter operations only [20].Symmetric InGaAs contact layers doped to 7 × 10 18 cm −3 are grown bottom and top.The total active region thickness is ≈10 µm (170 periods).The 10/1000 nm Ti/Au metal layers are deposited on the sample wafers and on n + GaAs host substrates.After wafer bonding using a thermo-compression method, the original substrate is removed by mechanical polishing and selective wet etching.Standard lithography is used to define the top contact/waveguide and is followed by metallization of 10/500 nm Ti/Au.Subsequently, ridges are dry-etched with an inductively coupled plasma reactive ion etcher using an SiCl 4 :Ar chemistry.The top contact is protected by an additional SiN or Ni layer and, which acts as an etch mask.Unlike commonly used wet-etching, dry-etching also guarantees vertical sidewalls and a symmetric geometry of the device.Typical device dimensions are 0.5-1 mm by 40-90 µm.
The fabricated devices are indium soldered on a copper submount, wire-bonded and mounted on the cold finger of a helium flow cryostat.Light emission is collected using an off-axis parabolic mirror, sent through an FTIR spectrometer and detected with a DTGS detector.Absolute power values are measured with a calibrated thermopile detector (Dexter 6M) under vacuum conditions, mounted at a distance of 5-10 mm from the laser facet inside the cryostat.The detector element has a diameter of 6 mm.Light-current-voltage (LIV) curves and spectra are recorded in pulsed mode, with a pulse duration of 200 ns and up to 4% duty cycles (200 kHz repetition rate).
Experimental results and discussion
Figure 2(a) summarizes the design parameters and major differences between the three tested samples.The bidirectional LIV characteristic of sample 1 is shown Fig. 2(b).Although the active region design, as well as contacts and device geometry, is completely symmetric, lasing is just observed with negative top bias polarity with a threshold current density of 0.62 kA/cm 2 .In this case, the electron transport occurs in growth direction and electrons "see" the normal, sharper interfaces, resulting in a decreased interface roughness scattering compared to the opposite polarity.The enhanced interface roughness scattering with positive bias causes increased leakage beyond 4 V.More important though, the upper level lifetime τ 4 + is also dramatically decreased and the optical gain cannot overcome the waveguide losses.Lasing with negative bias is observed up to a maximum temperature of 105 K.
By redesigning the subband level alignment and energy separation of level 1 and 2 for resonant depletion, sample 2 also shows lasing for positive polarity, presented in Fig. 2(c).However, the IV asymmetry remains basically unchanged with a higher leakage current and hence higher threshold for positive polarity.The threshold bias is almost identical for both polarities ( -0.5 , which underlines the importance of the alignment of subbands and also indicates symmetric contact properties.Again, the reduced optical output power reflects the decreased upper level lifetime τ 4 + .For a more significant comparison of the polarity-dependent performance, we characterized four devices.The average values and their negative/positive bias ratio, including standard deviation, are summed up in Table 1.Sample 2 exhibits a 43% reduced threshold, a 238% higher output power and a 20% higher operating temperature for negative polarity.The peculiar kinks, visible in the negative polarity IV of sample 1 and 2, vary from device to device and can be attributed to the formation of high field domains in the active region..86 ± 0.28 Four devices of sample 2 and 3 have been measured for a meaningful comparison between the two operating directions.Sample 1 only works with negative bias.The average values for threshold current density J th , maximum operating temperature T max and the ratio of them are listed.Please note that for the statistics the negative/positive ratio is calculated separately for every single device.Peak optical output power P peak is only provided as a ratio since the absolute values strongly depend on the device dimensions and collection efficiency.The standard deviation is also provided for the ratios and shows that the difference is not within the error bars.J th and P peak have been measured at 5K.The purpose of sample 3 is to study the influence of a thinner injection/extraction barrier on the performance for both bias polarities.Figure 2(d) shows the LIV characteristics of sample 3. The IV exhibits a less pronounced asymmetry and the threshold current difference is reduced to -19%.Thinner barriers give stronger tunneling coupling, which means that coherent transport plays a more dominant role compared to elastic scattering in this structure.The change of the barrier width from 3 nm to 2.8 nm in sample 3 yields an anticrossing energy of 3 meV, compared to 2.2 and 2.5 meV, respectively.However, the difference for the upper level lifetimes τ 4 -and τ 4 + remains unaffected and causes a similar negative/positive ratio of the output power ( + 286% for negative polarity) compared to sample 2. Sample 3 shows the best temperature performance of 134 K for negative polarity, 32% higher than for positive polarity.
The results prove that rough interfaces, even if they are only present on one side of the barrier, play a major role in the transport and performance of THz QCLs, confirming theoretical predictions of non-equilibrium Green's function calculations by Kubis et al. [5,13].Results of such calculations on symmetric active region designs are in good qualitative agreement with the experimental findings of this paper.Accurate transport simulations would require precise input parameters, such as step height and correlation length, of the normal and inverted interface.However, we can estimate whether realistic interface roughness parameters [5][6][7][8][9] lead to a reasonable lifetime ratio - ), fitting the threshold ratios rather well.
Figure 3 illustrates the positive and negative polarity spectra of a representative device of sample 2, recorded at the maximum output power.The spectra are relatively broadband, which is attributed to the generally rougher interfaces compared to GaAs/AlGaAs [22].More interesting though is the similarity of the recorded spectra.It indicates that the subband structure and alignment are symmetric for the both bias polarities, resulting in the same emission frequency.
Conclusion
In this work we demonstrated the potential of symmetric quantum cascade lasers to study the influence of elastic scattering on the device performance.Due to their unipolarity, QCLs can be designed in a way to operate under both polarities.The only difference is the transport direction with respect to the growth direction.The material system InGaAs/GaAsSb exhibits a pronounced interface asymmetry and is therefore ideal to investigate the role of interface roughness scattering in QCLs.Inverted interfaces exhibit an increased roughness and cause a stronger influence of interface roughness scattering when electrons are incident on them.We compared the performance of three symmetric samples with different design parameters.One sample shows the most outstanding situation of only unidirectional lasing.The other two samples can be operated in both directions but with significant performance degradation with positive bias.We observe up to 43% lower threshold current densities, 286% higher optical output power and 34% higher maximum operating temperature for negative polarity.
The results show that the growth order of a QCL sample, which then defines the applied bias polarity, can significantly affect the device performance.So far, the growth order of QCLs was believed to have no influence on performance [3,4,23].The interface asymmetry in InGaAs/GaAsSb heterostructures defines a favorite operating direction for THz QCLs [22].To our knowledge, there are no other unipolar intersubband devices (e.g.resonant tunneling diodes, quantum well infrared photodetectors), which suffer from interface asymmetry and hence require to be grown in a certain order.
Due to their sensitivity to interface roughness scattering, symmetric THz QCLs could also serve as a test structure to evaluate and improve interface qualities and interface engineering recipes.In addition, they could also be used as a benchmark for testing transport models since scattering mechanisms can be increased or decreased by simply switching the bias polarity.
# 4 -Fig. 1 .
Fig. 1.Nominally symmetric active region of a terahertz quantum cascade laser biased with either bias polarity.The symmetry is broken by the different quality of the normal and inverted interfaces with respect to the growth direction.The inverted interfaces are sketched as a rough line in the band structure.(a) Positive top bias polarity results in an electron flow incident on the inverted interfaces.(b) The cross-sectional TEM picture confirms the interface asymmetry in the InGaAs/GaAsSb material system and the enhanced roughness of the inverted interface.(c) Negative polarity reverses the current flow and electrons are moving against the normal interfaces.The influence of interface roughness scattering can therefore be studied by simply switching between the bias polarities.
Fig. 1 (
b), reveals the increased roughness of the inverted interface.The expression normal and inverted are commonly used in 2DEG structures for the two kind of interfaces, where the normal interface describes the switching #180918 -$15.00USD Received 11 Dec 2012; revised 14 Feb 2013; accepted 14 Feb 2013; published 14 Mar 2013 (C) 2013 OSA
Fig. 2 .
Fig. 2. Comparison of the different active region designs and light-current-voltage characteristics of each sample measured with both operating polarities.(a) The layer sequence of sample 1 is 3.0/14.0/1.0/14.0/3.0/2.0/22.0/2.0 nm.For sample 2 the layer sequence is modified to 3.0/13.3/1.0/13.3/3.0/2.0/18.8/2.0 nm, which results in a better depletion of the lower laser level due to optimized subband alignment and energy separation (E 21 ).Sample 3 employs thinner injection/extraction barriers for stronger tunneling coupling.The layer sequence is 2.8/13.0/0.9/13.0/2.8/1.9/20.5/1.9 nm.GaAsSb barriers are in bold fonts.The Si doping density is 1 × 10 16 cm −3 in the underlined sections.(b) Sample 1 only shows lasing for negative polarity.(c) Sample 2 works with both bias polarities but exhibits a significant transport and performance difference.(d) Thinner barriers in sample 3 enhance the tunneling coupling and reduce the threshold difference.
4 4 /=
τ τ + of the upper laser levels as observed in our measurements.In a first approximation (unity injection efficiency, negligible lower level lifetime τ 3 , equal broadening of the optical transtition - ) of rate equations[21], the upper level lifetime is inversely proportional to the threshold current, which is -
#Fig. 3 .
Fig.3.Normalized spectra of sample 2 recorded in both bias polarities at the maximum optical output power and at 5 K.The main lasing modes are observed in both spectra.It indicates that the spectral position of the gain is not influenced by the applied polarity and the resulting subband level alignment/spacing is identical. | 4,517.8 | 2013-03-25T00:00:00.000 | [
"Engineering",
"Physics"
] |
Cost‑effectiveness of Childbirth Strategies for Prevention of Mother‑to‑child Transmission of HIV Among Mothers Receiving Nevirapine in India
Background: Mother-to-child transmission of HIV is an important mode of spread of HIV in India. With strategies like caesarian section and nevirapine therapy, this spread has been reduced. However, they have costs attached. In this context, this paper attempts to compare the cost-effectiveness of alternative childbirth strategies among HIV-positive mothers receiving nevirapine. Materials and Methods: Using sentinel surveillance data from three districts in Tamil Nadu, a model was created to test the cost-effectiveness of vaginal delivery against elective caesarian section among mothers receiving nevirapine. Sensitivity analysis was applied to evaluate cost per HIV infection prevented. Results: Vaginal delivery is not only cheaper in HIV-infected mothers receiving nevirapine but also cost-effective as compared to elective caesarian section. The incremental cost for preventing an additional HIV infection through caesarian section was Rs. 76,000. Sensitivity analysis reveals that the findings are robust over a range of HIV transmission probabilities, 0.04-0.14 for vaginal delivery and 0.00-0.02 for caesarian section. Conclusions: From a clinical perspective, the findings suggest that pregnant HIV-infected women receiving nevirapine should consider the benefits of a cheaper and safer vaginal delivery. From an economic perspective, the findings support the strategy of vaginal delivery in mothers receiving nevirapine.
Introduction
For millions of children, AIDS has drastically altered the experience of growing up. In 2007, it was estimated that 2.1 million children under age 15 were living with HIV, and 15 million children had lost one or both parents to the virus. Millions more have experienced deepening poverty, school dropout, and discrimination as a result of the epidemic. (1) Perinatal transmission of HIV is the primary and most common way that children below the age of 15 years become infected with HIV worldwide, (2) with more than half of transmission probably occurring late in pregnancy or during labor and delivery. (3) Mother-to-child transmission of HIV varies according to geographical region, delivery method, and breastfeeding practices and is estimated to be 21-43% in less-developed countries. (4) Mother-to-child transmission becomes more important where heterosexual intercourse is the predominant mode of transmission, where women of childbearing age form a significant proportion of the infected population, and where fertility is high. This pattern exists in many of the low-income and developing countries, including India. Recent estimates suggest that 3.8% of the total HIV infections in this country are among children less than 15 years of age. India has an estimated 220,000 children infected by HIV/AIDS. It is estimated that 55,000 to 60,000 children are born every year to mothers who are HIV positive. (5) Tamil Nadu has been one of the high-HIV-prevalence states in India but has shown progress in controlling the epidemic. The HIV prevalence in antenatal cases in this state is 0.25%. (6) Without treatment, the babies born to these mothers have an estimated 30% chance of becoming infected during the pregnancy, during labor, or through breastfeeding during the first 6 months. (5) However, if the HIV status of the woman were known during the pregnancy, it would be possible to achieve reduction in transmission to 5-8% through strategies such as avoidance of breastfeeding; (7) administration of antiretroviral therapy (ART) during pregnancy, at delivery, and to new born; (8) and avoidance of invasive procedures during pregnancy and at delivery. (9,10) This has already been achieved in parts of Europe and the USA. (11,12) The challenge for India is to determine how best to help meet the HIV targets of the Millennium Development Goals (MDG) and, more specifically, the target set by the United Nations General Assembly Special Session (UNGASS) on HIV/AIDS, which seeks to reduce the proportion of infants infected with HIV by 20%. There are several interventions-such as family planning, obstetric care, integrated counseling and testing centers (ICTC), antiretroviral (ARV) drugs, and artificial feeds-that can reduce the risk of transmission from an HIV-infected mother to her child. Though some of them are becoming more affordable, implementing these interventions poses complex challenges for health systems, communities, and individuals, especially in resource-poor settings. The focus of this study is on the benefits of adopting appropriate childbirth strategies along with the use of antiretroviral drug therapy.
Research has shown a 50% reduction in mother-to-child transmission if elective caesarian section (ECS) is carried out before labor and before membrane rupture (13) among HIV-infected women not taking ART or taking only zidovudine. (14) However, the available data regarding the benefit of caesarian section as an intervention to prevent transmission are largely from studies conducted in developed countries and/or before the widespread utilization of highly active antiretroviral therapy (HAART) for HIV-positive women. The risk of mother-to-child transmission of HIV according to mode of childbirth among HIV-infected women receiving HAART is unclear in the context of developing countries.
Preliminary data from developed countries suggest that ECS can benefit HIV-infected women with plasma viral loads ,1000 copies/ml (15) or those receiving HAART. (16) The benefit of ECS for prevention of mother-to-child transmission of HIV may persist among women with low plasma viral loads because of compartmentalization of HIV reservoirs, i.e., a low plasma viral load does not necessarily indicate a low viral load in genital tract secretions. Therefore, an important issue to be addressed is assessment of the effectiveness of caesarian section for prevention of mother-to-child transmission of HIV among HIV-infected women receiving HAART.
While the national guidelines of prevention of mother-to-child transmission of HIV in India do recommend ECS for mothers infected with HIV, these guidelines were developed before HAART (such as nevirapine) became available for treatment of pregnant mothers. Also, in resource-poor settings such as India, the cost of ECS is often borne by the government and the amount can be significant. In this context, this paper attempts to compare the cost-effectiveness of alternative childbirth strategies among HIV-positive mothers receiving nevirapine in government ICTCs in Tamil Nadu, India. The findings of this study would be relevant for clinical case management as well as for formulation of state/national HIV/AIDS policy.
Materials and Methods
The present study was designed as a retrospective cohort analysis with a comparison group. The study used the sentinel surveillance data from Tamil Nadu State AIDS Control Society (TANSACS). Sentinel surveillance data of mothers attending 26 government ICTCs in three districts (Chennai, Theni, and Dharmapuri) of the state during the period 2001-2005 were used for this study. These sites were selected as they provided nevirapine to mothers who tested HIV positive. Data meeting the following inclusion criteria were selected for analysis: HIV positivity of mothers, acceptance of the nevirapine regimen by mothers, availability of data on mode of childbirth, availability of data on 1 month postdelivery HIV status of child. This data was obtained with permission from TANSACS, which was implementing the prevention of mother-to-child transmission programme at these sites. Three hundred and sixty-two women fulfilled the inclusion criteria for this study.
A decision analysis model from the clinical perspective was constructed [ Figure 1]. The model compared two childbirth delivery strategies among mothers fulfilling the inclusion criteria: (1) usual care, in which HIV-infected mothers undergo vaginal delivery and (2) HIV-infected mothers are offered the option of elective caesarian section.
The model followed the data cohort of 362 HIV-infected women who received nevirapine from these centers to note their mode of childbirth and postdelivery HIV status of the child at 1 month. The model was developed with the probabilities of different childbirth strategies, their costs, and vertical transmission risk as witnessed in the cohort group. For the purpose of this study, only uncomplicated, noninstrumental vaginal delivery and uncomplicated elective caesarian delivery were considered. For this cohort, it is assumed that the HIV-infected women who received nevirapine therapy were in prenatal care by 36 weeks and did not breastfeed.
Cost data was provided by TANSACS. The cost of uncomplicated vaginal delivery plus nevirapine was Rs. 3,750 per mother, while the cost of uncomplicated caesarian section delivery plus nevirapine was Rs. 7,550 per mother. This included the cost of the drug, number of women given the drug, cost of personnel involved, hospitalization cost, and the direct and indirect costs related to mode of delivery (vaginal and caesarian). Costs of transport, productivity loss for mother, and other opportunity costs were not considered in the model. Effectiveness was measured in terms of perinatal HIV transmissions prevented. The outcome measure was the number of cases of HIV among children tested at 1 month after birth. The outcomes of the decision analysis model were measured in terms of proportion of HIV infection prevented among the newborn. Effectiveness was measured by giving a weight of '1' for every HIV-negative child and a weight of '0' for every HIV-positive child.
The cost-effectiveness of the alternative childbirth strategies is evaluated in terms of their vertical transmission probabilities and the average costs per woman for each strategy. The costs, outcomes, and probabilities were entered into Tree-Age software for cost-effectiveness and sensitivity analysis.
One-way sensitivity analysis was undertaken to illustrate the impact of a range of HIV transmission probabilities for alternative childbirth strategies. Variables were deemed sensitive if their values were within the ranges found in literature. This was done based on a range of probabilities reported in literature of HIV transmission rate in vaginal and caesarian section deliveries. The range selected was from 0.04-0.14 (17)(18)(19) for vaginal delivery and 0.00-0.02 (14,18,(20)(21)(22) for caesarian section.
Results
Of the 362 mothers fulfilling the inclusion criteria, 295 had uncomplicated vaginal deliveries and 64 underwent uncomplicated elective caesarian section. Three had assisted breech delivery and were not considered for analysis. HIV transmission rate to child was 6.10% for vaginal delivery and 1.5% for ECS. On roll back the model suggests that vaginal delivery plus nevirapine is a cost-effective strategy as compared to caesarian section plus nevirapine. The incremental cost for preventing an additional HIV infection through caesarian section was Rs. 76,000. Sensitivity analysis results across the range of HIV transmission probabilities for vaginal deliveries are depicted in Figure 2 and for caesarian delivery in Figure 3. Text report of the same are presented in Tables 1 and 2.
Discussion
The results of the study indicate that among HIV-positive mothers receiving nevirapine, vaginal delivery is more cost-effective than ECS. This is true for a range of probabilities of HIV transmission rate. Prospective cohort studies have shown a decreased likelihood of perinatal HIV transmission with elective caesarian delivery (23,24) and an additive protective effect with zidovudine therapy. (17,18,20,21) Data from North America, Thailand, and Europe (15,25,26) suggest a benefit from caesarian section in HIV-infected pregnant women, even those with low viral loads. However, the apparent benefit of caesarian delivery needs to be weighed against its cost, especially in resource-poor settings. In resource-poor settings like India, there are additional complications such as lack of adequate infrastructure and skilled manpower. In this context, the findings of this study assume greater significance. The role of mode of childbirth in the management of HIV-infected women must be assessed in light of the risks as well as benefits. HIV-infected pregnant women receiving nevirapine must be provided with all available information so that they can make informed decisions regarding the mode of childbirth to prevent transmission of infection to their children. With the availability of safe childbirth practices, individual women may consider the benefit of a cheaper vaginal delivery to outweigh the potential disadvantages of an ECS.
The findings from this study, from a cost perspective, seem to favor vaginal delivery among mothers taking nevirapine rather than caesarian section. The relatively low prices of antiretroviral drugs in India have made it possible to consider, for the first time, the financing of nevirapine therapy for pregnant women. While availability of skilled manpower and materials for elective caesarian section still remains an issue, the findings of this study suggest that safe vaginal delivery may be an acceptable solution for prevention of mother-to-child transmission of HIV in mothers on nevirapine. However, cost may not be the only determining factor in selecting the appropriate strategy for child delivery.
Since the outcome restricts itself to the 1-month HIV status of child, it is possible that some HIV infections in children may have been missed in this early period. Also, the effect of breastfeeding practice on outcome is not considered in this model. The costs considered do not take into account opportunity costs for mother, HIV diagnostic costs, and the costs of lifetime care of an HIV-infected child. The sample size is too small to allow generalization of the findings and hence policy implications are to be drawn with caution. As the study was based on secondary data, it was not possible to assess the reasons for which the mothers underwent vaginal delivery and the ethical implications, or whether Mukherjee: Cost effectiveness of child birth strategies for PMTCT of HIV counseling was conducted with respect to the protective effect of caesarian delivery.
Conclusions
The economic factor is one of many factors influencing policy change. Although this study does provide critical economic evidence in favor of a change, factors outside the economic arena also influence policy formulation and change. This study had a small sample size and therefore more studies need to be done in different states of India to test the generalizability of the findings. | 3,074.4 | 2010-01-01T00:00:00.000 | [
"Economics",
"Medicine"
] |
Electrochemical Properties of Metallic Coatings
: The metallic coating is an outstanding corrosion-protection option with extensive applications, especially in high-temperature environments. Considering the close relationship between anti-corrosion ability and constitutions, it is necessary to acquire the electrochemical properties of metallic coatings for optimizing their corrosion resistance, and further provide guidance for coating design based on the protection mechanism. Thus, this Special Issue aims at collecting research articles focusing on the electrochemical properties of various metallic coatings, especially on the application of new electrochemical techniques for analyzing the corrosion protection process and mechanism of these coatings. Both experimental and theoretical types of research are welcome for the contribution.
Introduction
Metallic coatings can improve the corrosion resistance of steel, copper, and other metals in various environments, and thus extend the service life of metal constructions.Particularly, metallic coatings can be used in a wide variety of industrial applications in which metals are subjected to elevated temperatures and aggressive conditions, where organic coatings will encounter serious thermal degradation and thermal conductivity issues.For example, the use of inorganic coatings containing anions such as chromates or molybdates have been tested because of their relative temperature stability [1,2].Metallic diffusion coatings can significantly improve the corrosion resistance of low-cost alloys such as carbon steel.It also has been reported that Cr-or Ni-coated 409 SS specimens exhibit corrosion-resistant properties similar to that of superalloys in high-temperature chemical heat pump environments.Thus, coated steels possess the potential of replacing expensive alloys and reducing the capital cost of equipment.
Metallic coatings have developed significantly in these years, ranging from singlelayer coating to multi-layer gradient coating and micro-stack coating, from alloy coating to the current ceramic coating and composite coating.Up to date, protective metals and alloys such as Zn [3], Ni [4], Al [5], Ti [6], Hf [7], Cr [6], and Cu [8], for various applications have been studied for metals that are prone to corrosion.Al and Al alloy coatings are commonly used for corrosion protection of metal components.The open corrosion potential of aluminum coating in seawater showed a relatively low corrosion potential, which could provide cathodic protection for carbon steel [9].Al-based amorphous coatings have also been widely studied for their excellent wear and corrosion resistance.Compared with Al coating, amorphous alloy coatings, such as Al88Ni6La6 and Al86Ni6La6Cu2, have higher corrosion potential and lower corrosion current density [10].Ni and its alloys can maintain high strength and good corrosion resistance in seawater, sulfuric acid, and alkaline liquid, and are widely used in corrosive industrial fields.Ni-based amorphous metallic coatings [4] have a lower corrosion rate when compared with some typical Febased, Zr-based amorphous alloys, the electroplated-Cr, and stainless steel [11].This makes the Ni-based fully amorphous metallic coating a good candidate for anticorrosion applications.Ni-Cr coating, as a sacrificial layer of high-temperature resistance and acid and alkali corrosion resistance, is widely used in a high-temperature environment such as boiler and waste incinerator [12].Ni53Nb20Ti10Zr8Co6Cu3, a fully amorphous metallic coating has been deposited by using gas-atomized powders [13].Besides Al, Ni, and Cu alloys mentioned above, Ti and other alloys, such as Zn-Al-Si, were also used as anticorrosive coatings and composite coatings [14].During the last few decades, there has been considerable interest in the corrosion resistance of amorphous alloys [15,16].Recently, some attempts have been made to prepare Fe-based amorphous alloy coatings, for example, Fe-Cr-Mo-(C, B, P) and Fe-Cr-B-based amorphous coatings, which expressed prominent erosion resistance.All these researches illustrate the extensively applied prospect of these metal coatings [17].
The metallic coatings mentioned above could be divided into three categories according to their anti-corrosive mechanisms: (i) forms a passive film.A physical barrier consists of compact and dense oxide films formed between the metal parts and the corrosive medium can be found in Al-based or Cr-based coatings, and these oxide films could prevent the water and oxygen in the environment from reaching the protected matrix, and play the role of physical isolation and blocking [5]; (ii) serves as a sacrificial anode.Since the self-corrosion potential of the coating (such as Zn) is lower than that of the steel matrix, the metal coating, as a sacrificial anode, preferentially reacts with the electrolyte by losing electrons and plays a cathodic protection role on the steel matrix [3]; (iii) supplies soluble ions used as corrosion inhibitors.By electrochemical properties, amorphous Al-Co-Ce alloys can release corrosion-inhibiting ions Ce3+ from the metallic coating via a pH-controlled mechanism [18,19] These released inhibitor ions diffuse and migrate toward the damage site, reach critical concentration and finally suppress the corrosion [20].
Methods used for studying metallic coatings are of great importance to analyze their properties and protection mechanism.Optical microscopy, scanning electron microscopy, X-ray diffraction, and other surface analysis techniques are always employed to determine the microstructure and composition of corrosion products, while salt spray tests and weight loss methods are frequently adopted to evaluate the corrosion resistance of metallic coatings.However, these techniques are far from enough to illustrate clearly the features and the anti-corrosion mechanisms of these metallic coatings.Surface analysis techniques and weight loss methods could merely provide a certain state of the corrosion process, while salt spray test is not quantitative and is limited to giving a general idea of the corrosion rate in an aggressive environment.The electrochemical properties of metallic coatings, which don't exist in the insulated organic coatings, could provide additional messages about the coatings and thus give an insight into the corrosion protection process.
Conventional electrochemical testing methods, such as open-circuit potential (OCP), electrochemical impedance spectroscopy (EIS), and potentiodynamic polarization curve (PDP), could not only evaluate the corrosion resistance of the metallic coatings but also provide direct evidence and details of the corrosion protection process.There are many other electrochemical techniques that are of great potential for assisting the investigation on the corrosion mechanism of metallic coatings and thus prompt a wiser strategy for coating design.For example, the cyclic polarization curve, a mainly used method to evaluate the pitting ability of the materials or the self-repair ability of passive films [21] can also be chosen to indicate the tendency of a metallic coating to undergo pitting corrosion [22].Current-time transients, where changes in current as a function of time were determined at an applied polarization potential, could also be used as an indicator of corrosion resistance of coatings [22].
Additionally, due to the influence of various factors, the cathode and anode regions in the metal-electrolyte interface may exhibit different characteristics.Conventional electrochemical methods often reflect the macroscopic surface of the sample, but cannot be used to study its corrosion behavior in microcosmic scope, which brings difficulties to the analysis of the corrosion protection mechanism of metallic coatings.In these cases, micro area electrochemical technologies are necessary to be employed, such as Scanning Vibrating Electrode Technique (SVET) [23], Local Electrochemical Impedance Spectroscopy (LEIS) [24], which have been used widely in self-healing organic coatings.This technique can map the current density changes inside and around the coating defects at the microscopic scale, which are very essential for analyzing the corrosion protection mechanism deeply.
For metallic coatings, researchers nowadays pay more attention to their mechanical properties such as wear resistance, while their electrochemical properties are relatively rare to be reported, and the electrochemical testing methods are also limited.Actually, it has a lot of work to do to analyze the protective mechanism of metallic coatings by electrochemical technology, which can provide beneficial guidance for the design and preparation of coatings in the future.
Conclusions
The electrochemical properties of metallic coatings can not only directly reflect the corrosion protection on the substrate materials, but also provide sufficient information about the thermodynamic and kinetic processes of corrosion, thus provide insight into the corrosion mechanism and further guidance for the coating design.From this point of view, the electrochemical properties of metallic coatings are a worthy and important topic for corrosion and protection in the past, present, and future.This Special Issue aims to exchange the progress and frontier of corrosion-protection knowledge for metallic coatings based on various electrochemical techniques, including (but not limited to) CV, Tafel, EIS, SVET, LEIS, and LSV.We sincerely invite high-quality contributions that present innovative and significant findings and experiences on this topic. | 1,864.8 | 2022-12-10T00:00:00.000 | [
"Materials Science"
] |
Evaluation of Antimicrobial Properties of Conventional Poly(Methyl Methacrylate) Denture Base Resin Materials Containing Hydrothermally Synthesised Anatase TiO2 Nanotubes against Cariogenic Bacteria and Candida albicans.
The purpose of this study was to investigate the antimicrobial properties of a conventional poly methyl methacrylate (PMMA) modified with hydrothermally synthesised titanium dioxide nanotubes (TNTs). Minimum inhibitory concentration (MIC), minimum bactericidal concentration (MBC), and minimum fungicidal concentrations (MFC) for planktonic cells of the TiO2 nanotubes solution against Lactobacillus acidophilus, Streptococcus mutans and Candida albicans were determined. The powder of conventional acrylic resin was modified using 2.5% and 5% by weight synthesised titanium dioxide (TiO2) nanotubes, and rectangular-shaped specimens (10 mm × 10 mm × 3 mm) were fabricated. The antimicrobial properties of ultraviolet (UV) and non-UV irradiated modified, and non-modified acrylic resins were evaluated using the estimation of planktonic cell count and biofilm formation of the three microorganisms mentioned above. The data were analysed by one-way analysis of variance (ANOVA), followed by a post-hoc Tukey’s test at a significance level of 5%. MIC, for Streptococcus. mutans, Lactobacillus. acidophilus, and Candida. albicans, MBC for S. mutans and L. acidophilus and MFC for Candida. albicans were obtained more than 2100 µg/mL. The results of this study indicated a significant reduction in both planktonic cell count and biofilm formation of modified UV-activated acrylic specimens compared with the control group (p = 0.00). According to the results of the current study, it can be concluded that PMMA/TiO2 nanotube composite can be considered as a promising new material for antimicrobial approaches.
Introduction
Poly methyl methacrylate (PMMA) acrylic resin is the most commonly used material for fabrication of removable dentures and intraoral maxillofacial prostheses. Its working characteristics, ease of processing, accurate fit, chemical stability in the oral environment, moderate cost and light weight are some of the favourable properties of PMMA that have made it a suitable material for denture base fabrication. Despite these desirable properties, PMMA denture base resin is susceptible to colonisation of microorganisms in the oral environment (1). Surface roughness, porosity, continual denture wearing, and poor denture hygiene are some factors which may have effect of the adhesion of microorganisms and biofilm formation on the surfaces of acrylic resins (2-4).
A wide variety of microorganisms are detected on removable denture base materials, including Gram-positive organisms, Grampositive and negative rods, as well as Gramnegative cocci and fungi.
C. albicans and C. glabrata are the predominant microorganisms that are isolated from denture base acryles (5, 6). Candida species can colonise the surfaces of denture base resins and cause biofilm formation, which is the main etiologic agent for the development of denture stomatitis (7,8). Moreover, the surface of acrylic denture base in a removable partial denture or orthodontic appliance encourages the residence and accumulation of aerobic and anaerobic, and also facultative micro flora such as streptococci and lactobacilli, which are the main cause of dental caries. Therefore, different efforts have focused on developing an antimicrobial denture base resin. Methallyl phosphate monomers (9), methacrylic acid monomers (10), 2-tert-butylaminoethyl methacrylate (TBAEMA) (11), silver zeolites (12, 13), silver nanoparticles (AgNPs) (14, 15), and titanium oxide nanoparticles (TiO 2 NPs) (16) are some antimicrobial agents which have been used for production of an antimicrobial acrylic denture base.
Titanium dioxide nanoparticles have strong antimicrobial activity through photocatalysis. It has been reported that the anatase crystalline form of titanium dioxide (TiO 2 ) displays photocatalytic activity under ultraviolet A (UVA) illumination. The irradiation of UVA light with a wavelength less than 385 nm can activate crystalline TiO 2 and generate electron (e -)-hole (h + ) pairs. The excited electrons can react with electron acceptors like oxygen and produce a superoxide ion (O 2 •-). The positive holes also can react with H 2 O or OHand generate hydroxyl radicals (•OH). Other reactive oxygen species (ROS), like hydrogen peroxide (H 2 O 2 ) may also be produced. The reactive oxygen species including (O2•-), (•OH), and H 2 O 2 can decompose nearby organic compounds (17)(18)(19). The photodecomposition property of TiO 2 has been investigated for employment in various antimicrobial approaches (20-22). The mechanism results from the reaction between reactive oxygen species and the outer membrane of bacteria, like cell membrane or cell wall, which leads to leakage or damage of the cell (23). Different studies have demonstrated the death of Gram-positive and Gram-negative bacteria and fungi like Streptococcus mutans, Escherichia coli, and C. albicans by the photocatalysis of TiO 2 (1, 24 and 25).
Recently, nanostructured titania has been widely used and synthesized via different methods as a photocatalyst due to its higher photocatalytic activity than mico-scaled titania (26). The higher photocatalytic activity mainly results from the high specific surface area of the TiO 2 nanostructure. Different studies have added TiO 2 nanoparticles to dental restorative materials and demonstrated antibacterial and improved mechanical properties. In this regard, Anehosur et al. (27) modified the poly methyl methacrylate denture base resin using titania nanoparticles and found inhibitory activity against S. aureus in the fabricated PMMA/TiO 2 nano-composites. They suggested that PMMA denture base polymer modified using light activated TiO 2 NPs could improve the dental hygiene of denture wearers. Sodagar et al. (28) also reported strong antimicrobial activity against the cariogenic bacteria, including L. acidophilus and S. mutans in PMMA acrylic resins containing TiO 2 and SiO 2 nanoparticles. They also determined that this antibacterial property is more efficient under irradiation of UVA due to the photocatalytic properties of nano-TiO 2 .
Regarding the high specific surface area of nanoparticles, the cylindrical morphology of nanotubes with a hollow cavity at their centre may present increased active surface area compared with nanoparticles and enhance light trapping (29), which ultimately leads to higher antimicrobial activity than that of nanoparticles.
Therefore, this study aimed to investigate the incorporation of TiO 2 nanotubes on the antimicrobial properties and biofilm formation of a commercial conventional denture base resin. The null hypothesis is that the addition of TiO 2 nanotubes into denture base polymer does not affect its antimicrobial and biofilm formation properties.
Study groups
Conventional acrylic resin (mega CRYL HOT, megadental GMBH, Germany) was modified using titanium dioxide (TiO 2 ) nanotubes, which were synthesised and characterised according to our previous study (30). TiO 2 nanotubes were prepared via an alkaline hydrothermal process from a commercial TiO 2 nanoparticle powder (SkySpring, Nanomaterials, Inc., 2935 Westhollow Drive, Houston, TX 77082, USA) with a crystalline structure of approximately 99.5% anatase and a particle size of 10-30 nm. The synthesizing procedure of TiO 2 nanotubes was started by treating 1.14 g of nanoparticle powder with 40-45 mL of 10 N NaOH solution. The suspension then was sealed in a Teflonlined autoclave at temperatures of 150 °C for 48 h. Subsequently, the resultant precipitates were washed with deionized water and HCl aqueous solution (1 M). Finally, the powders were dried using an oven at 80 °C for 3 h to give the as-synthesized nanotubes. The synthesized TiO 2 nanotubes and the powder of conventional denture base resin were weighted using an analytical balance (Sartorius, Goettingen, Germany) and then divided into equal parts. Each part of TiO 2 nanotube powder was mixed manually with powder of acrylic resin. Subsequent to hand mixing they mixed with a dental amalgamator device (Ultramat 2, SDI, Australia) for better particle distribution. The fabricated specimens were assigned to three groups according to the percentage of added TiO 2 nanotubes including TNT 0% (control), TNT 2.5% and TNT 5% by weight. The antimicrobial and anti-biofilm formation activities of PMMA/ TiO 2 composites were evaluated in both non-UV-irradiated and UV-irradiated samples for three microbial strains including C. albicans, L. acidophilus and S. mutans.
Sample preparation
Rectangular-shaped specimens with dimensions of 10 mm × 10 mm × 3 mm were fabricated according to the manufacturer's instructions and ISO 20795-1:2013 (31). The powder of acrylic resin was modified using 2.5% and 5% by weight synthesized TiO 2 nanotubes. The proportioned polymer/monomer was mixed and then packed in dental stone moulds (Hydrocal dental stone, Moldano, Bayer Lerekusen, Germany). The two portions of the flask were closed together tightly and pressed slowly at 40,000 N under the hydraulic press so that the dough resin evenly flowed all over the mould space. Then the pressure was released, the two portions of the flask were opened, and the excess material was removed using a sharp scalpel. Finally, the two portions of the flask were closed and placed under a press (20 bars) for 5 min. Subsequently, the flask was left under low pressure for 30 min and then maintained in a water bath at room temperature. The temperature was raised up to 73 ± 1 °C slowly and then was held at the boiling point at 100 °C for 30 min. Finally, the fabricated specimens were subjected to polishing and finishing procedures to obtain to a glossy and smooth surface. Finally, the fabricated samples were sterilised by gamma rays at a dose of 25 kilograys.
Microbial strains and growth conditions C. albicans ATCC 90028, L. acidophilus ATCC 4356 and S. mutans ATCC 25175 were obtained from the Iranian Biological Resource Center (Tehran, Iran) and employed in this study. S. mutans and L. acidophilus were grown in microaerophilic and anaerobic conditions, respectively, in Brain-Heart Infusion (BHI) broth (Difco, Sparks, MD, USA) at 37 °C until the cells attained the mid-logarithmic phase (OD 600 nm = 0.2 for S. mutans and OD 600 nm = 1.0 for L. acidophilus) (32, 33). C. albicans strain was cultured on the Yeast Extract Peptone Dextrose (YEPD) broth (10 g yeast extract, 20 g peptone, 20 g dextrose, 1,000 mL distilled water, pH 7.0). The cells of C. albicans grew aerobically until they reached the mid-logarithmic growth phase (OD 600 nm = 1.0) (34).
MIC, MBC, and MFC
The antimicrobial activity of the TiO 2 nanotube solution was evaluated by measurement of MICs, MBC, and MFC against planktonic microbial cells, as recommended by the Clinical and Laboratory Standards Institute (CLSI) and International Organisation for Standardisation (ISO) (35-37). In this method, a single 96-well sterile polystyrene microtiter plate was used for each microbial strain. A susceptibility panel in the microtiter plates was prepared by pipetting 100 μL of 2 × BHI broth to each well; 100 μL of TiO 2 solution (10 mg mL -1 ) was added to the wells in column 1 (far left of the plate), and the TiO 2 concentration was diluted to 1:2 (i.e. 5 mg mL -1 ). TiO 2 was diluted 2-fold by transferring 100 μL aliquots from column 1 to column 2. Therefore, column 2 is a 2-fold dilution of column 1 (i.e. 2.5 mg mL -1 ). The process was continued across the microplate to column 10, and then 100 μL was discarded from column 10 rather than dispensing it into column 11. Starting from column 11 to column 1, the columns were inoculated with fresh BHI microbial cultures (100 μL/well) and adjusted to a concentration of 1.0 × 10 6 CFU/mL for bacterial suspensions and 1.0 × 10 5 CFU/mL for the C. albicans suspension using a multi-channel pipet. In the susceptibility panel, column 11 served as the positive (growth) control, and column 12 was not inoculated and considered as the sterility control.
The MIC was defined as the lowest concentration (μg mL -1 ) of TiO 2 that inhibited the visible growth of microorganisms. In this regard, after an incubation period, the MIC value was estimated by visual examination. The MBC and MFC determined the lowest concentration of TiO 2 to kill tested bacteria or fungi, respectively. The MBC and MFC were then found by subculturing (10 μL) the contents of each well without visible growth onto BHI agar plates. After 24 h of incubation of BHI agar plates at 37 °C, the colony-forming units per millilitre (CFU mL -1 ) were determined using the Miles and Misra Method (38). The MBC and MFC were thus determined as the lowest concentration (μg mL -1 ) of TiO 2 yielding ≥99.9% reduction of the initial CFU mL -1 after incubation.
Planktonic growth assay
The antibacterial and antifungal activities of non-UV-and UV-irradiated TiO 2 nanotubes were evaluated in the three aforementioned groups including control (TNT 0%), TNT 2.5%, and TNT 5% (n = 15) via estimation of the planktonic phase for each mentioned microbial strain separately.
In this assay 1.5 × 10 5 CFU mL -1 of freshly prepared microbial suspensions were poured into 2 mL tubes, and then the prepared acrylic samples were placed in the tubes containing microbial suspensions.
To treat the PMMA/TiO 2 nanotube samples with UV irradiation, the acrylic disks were placed in a chamber equipped with a 15 W BLB lamp (Philips Electronics, Seoul, Korea), and the emitting radiation was at 350-410 nm. The distance between the lamp and the acrylic samples in an anaerobic cabinet was set up to obtain 1.0 mW/cm 2 of ultraviolet type A (UVA) incident light. UVA light was emitted for 10 min, and the intensity of UV light was measured by a UVA radiometer (Konica Minolta) (39, 40).
After UV irradiation, 10 μL aliquots of tube suspensions containing microorganisms and PMMA-TiO 2 nanotube composites were inoculated into a flat-bottom, polystyrene 96-well microtiter plate, of which each well had previously been prepared to a volume of 90 μL with BHI broth. A serial dilution (10 -1 , 10 -2 , 10 -3 , 10 -4 and 10 -5 dilutions) was then performed, and 10 μL from each well was inoculated in the BHI agar. Subsequently, a spread culture procedure was done and incubated according to the incubation conditions of the above-mentioned strains for 24 h at 37 °C, and the count of vital bacteria and fungi was determined as CFU mL -1 following incubation as mentioned above.
Biofilm formation
The biofilm formation on the surface of the acrylic samples placed in the tubes containing three aforementioned microbial suspensions was evaluated separately for each strain before and after UV irradiation. Then, the microorganisms were incubated for 48 h at 37 °C under the proper incubation conditions for each strain. After incubation, the specimens were gently washed twice with 3 mL sterile phosphatebuffered saline (PBS) (10 mM Na 2 HPO 4 , 2 mM NaH 2 PO 4 , 2.7 mM KCl, 137 mM NaCl, pH 7.4) to remove the nonadherent and loosely bound cells. After that, the specimens were placed in the tubes containing 1 mL of BHI broth and sonicated using a sonicator (Branson, China) with a frequency of 50 Hz and 150 W power for 5 min. Serial dilutions were performed as described in the previous section. Ten microlitres from each diluted microbial suspension was transferred to BHI agar medium and spread over the entire agar surface with a sterile spreader. Following incubation for 24 h at 37 °C, the viable bacteria were counted and their number were calculated as CFU disk -1 as mentioned above.
Statistical analysis
The homogeneity and normality of variances of the data were tested before statistical analysis. The normality of data was investigated and confirmed by Kolmogorov-Smirnov analysis at a significance level of 5%.
Then, the data were analysed by one-way analysis of variance (ANOVA), followed by a post hoc Tukey's test at a level of significance of p < 0.05. The data were analysed using SPSS 23.0 for windows.
Results
In the current study, the growth rate of planktonic cells and biofilm formation of three microbial strains including C. albicans, L. acidophilus and S. mutans were evaluated for the non-UV and UV-irradiated conventional and titania nanotube-modified denture base acrylic resin.
MIC, MBC and MFC
The MIC, for S. mutans, L. acidophilus and C. albicans, MBC for S. mutans and L. acidophilus and MFC for C. albicans obtained were greater than 2100 µg/mL. Table 1 and Figures 1a and 1b show the antimicrobial effect of three non-irradiated groups of fabricated samples including nonmodified acryles and those modified with 2.5% and 5% titania nanotubes against the three above-mentioned microbial strains. Although the UV-irradiated disks exhibited a more efficient antibacterial effect than the non-UV irradiated disks for three microbial strains (p = 0.00), the non-UV irradiated samples also Table 1. Viable planktonic phase counts of C. albicans, L. acidophilus and S. mutans in non-modified conventional denture base acrylic discs (control group) and those modified with 2.5% and 5% TiO 2 before and after UV irradiation. showed a significant reduction in the microbial count in both the 2.5% and 5% TiO 2 nanotubemodified groups (p = 0.00).
Non UV irradiated samples UV irradiated samples
The results also demonstrated that though the activation of TiO 2 nanotubes in the control group did not affect biofilm formation in any of the three microbial strains (TNT 0%), in the planktonic growth assay, the microbial count of the control group was significantly reduced in UV-irradiated samples in comparison with non-UV irradiated ones (p = 0.00).
The representative images of the microbial viability of the three different microbial strains employed in non-UV-and UV-irradiated acrylic samples are shown in Figure 2.
Biofilm formation
The formation of mature biofilms on the surface of three different acrylic resins including nanotubes modified with 2.5% and 5% titania, and the control group (TNT, 0%) were recorded in the current study. Counts of Table 2 and Figures 1c and 1d. One-way analysis of variance showed a significant reduction in biofilm formation of TNT's modified acrylic resins compared with the control group. Tukey's post-hoc test indicated a significant reduction of biofilm formation by addition of 5% titania nanotubes to denture base acrylic resins in both non-UV and UV-irradiated samples for three microbial strains (p = 0.00). Although the addition of 2.5% non-activated TNTs to acrylic resins reduced the viable microbial count, the differences were not statistically significant for C. albicans (p = 0.133) and L. acidophilus (p = 0.341). According to the results of this study, the efficiency of incorporation of 5% non-activated TiO 2 nanotubes to acrylic resins was not significantly different from that of 2.5% activated TNTs in C. albicans (p = 0.706) and L. acidophilus (p = 0.746). The results also indicated that UV irradiation did not affect biofilm formation on the surface of nonmodified acryles in any of the three microbial strains (p > 0.05).
Discussion
Poly methyl methacrylate is an ideal material for denture base fabrication due to its desirable properties. However, this material is susceptible to colonisation by various microbial species, including C. albicans, C. glabrata and gram positive/negative organisms. Different attempts have been made to overcome this drawback of denture base resins. The incorporation of biocide additives like silver zeolites, silver nanoparticles (AgNPs) and titania nanoparticles into the polymer matrix is an approach to developing a denture base acrylic resin with antimicrobial potential (1). In this study a hydrothermally synthesised titania nanotube was incorporated into denture base acrylic resin to improve its antibactrial properties. To the best of our knowledge, this is the first study which incorporated titania nanotubes into the matrix of denture base resin. In our unpublished article, we described the effect of TiO 2 nanotubes on the mechanical properties, and in the current study, we focused on the antibacterial properties of titania nanotubes.
Since 1972, when Fujishima and Honda first discovered the photocatalytic watersplitting potential of TiO 2 electrodes (41), their use in environmental applications has gained increasing interest. Besides, its photocatalyst property, TiO 2 has high chemical stability Table 2. Viable cell counts of C. albicans, L. acidophilus and S. mutans biofilms in non-UV activated resin and following activation with UV irradiation in a conventional denture base acrylic resin containing 0% (control), 2.5% and 5% titania nanotubes.
Non UV irradiated samples UV irradiated samples
Groups C. albicans L. acidophilus S. mutans C. albicans L. acidophilus S. mutans Control 6.43 ± 0.020 a 6.36 ± 0.036 cd 6.20 ± 0.020 fg 6.33 ± 0.057 6.28 ± 0.036 d 6.17 ± 0.024 g TiO 2 -2.5% 6.36 ± 0.016 a 6.27 ± 0.020 c 6.13 ± 0.013 f 6.14 ± 0.027 6.02 ± 0.017 e 5.86 ± 0.076 TiO 2 -5% 6.18 ± .040 b 6.07 ± 0.034 e 5.99 ± 0.038 4.86 ± 0.083 b 4.46 ± 0.141 4.39 ± 0.080 The same letters denote the groups which have not statistically significant differences. a p = 0.133 for the difference between biofilm formation on non-UV-irradiated TiO 2 -2.5% and control samples of C. albicans. b p = 0.706 for the difference between biofilm formation on non-UV-irradiated TiO 2 -5% and UV irradiated samples of TiO 2 -5% of C. albicans. produce reactive oxygen species (ROS), such as (O2•-), (•OH) and H 2 O 2 , which can decompose nearby organic compounds (43). In this study we evaluated the antimicrobial properties of TiO 2 nanotube-modified acrylic resin against three microbial strains including C. albicans, L. acidophilus, and S. mutans under ambient light and UV irradiation. C. albicans is the predominant microorganism isolated from dentures and the main cause of stomatitis in denture wearers. Streptococci and lactobacilli are some of the bacteria which are known as the main cause of dental caries (47,48). On the other hand, in recent years the demand for orthodontic services and thus the need to use acrylic removable appliances and retainers has increased. These removable appliances can promote the plaque accumulation and increase the risk of dental caries (49). The synthesised TiO 2 nanotubes used in this study showed strong antibacterial and antifungal properties against all employed species, and the MIC, MBC, and MFC values were more than 2100 µg/mL. Earlier we described that the production of reactive oxygen species (ROS) such as •OH, •O 2 −, •HO 2 and H 2 O 2 due to the photocatalytic properties of titania leads to the decomposition of microorganisms. Moreover, the attachment of the TiO 2 nanotubes to the cell membrane of species may affect and upset the permeability of the cells, induce oxidative stresses, and inhibit cell growth (29). Maness et al. proposed that ROS generated on the surface of UV-irradiated TiO 2 can cause lipid peroxidation reactions and subsequently a breakdown in the structure of the cell membrane (18). They considered this function to be the principal mechanism for cell death in E. coli. They determined that because all cell membranes are made up of different lipids with various degrees of unsaturation, the proposed cytotoxic mechanism can be extended to all cell types. Kubacka et al. described that rapid inactivation of the cells at the regulatory and signalling levels, a strong decrease in the coenzyme-independent respiratory chains, a lower capacity for iron and phosphorous transportation, a lower capacity for the biosynthesis and degradation of heme (Fe-S cluster) groups and wall modifications are the main factors responsible for the high biocidal activity of titania-based nanomaterials (50). Moreover, the higher surface area of nanotubes compared with nanoparticles may lead to greater antibacterial properties. However, Verran et al. reported that the antibacterial efficiency of TiO 2 nanoparticles are more sensitive to the crystallinity of structure than to surface area (51). They reported that the inherent ability of nanoparticles to produce radicals would affect their antibacterial properties. The results of our study showed a significant reduction in both biofilm formation and viable bacterial count after UV irradiation in PMMA/TiO 2 nanotubes composites. The TiO 2 photocatalytic response to UV irradiation is the explanation for the antibacterial properties of UV-irradiated samples. The vital organism count in the control group (TiO 2 0%) also showed a significant reduction in UV-irradiated samples compared with non UV-irradiated ones. The reduction of microbial strains in the control group in UV-irradiated samples can be explained by the following mechanisms. Ionising and non-ionising radiation are two forms of electromagnetic radiation. UV light is a non-ionising radiation that exerts its mutagenic effect by exciting electrons in DNA molecules. This excitation leads to the formation of extra bonds between two adjacent pyrimidines in DNA. These bonded pyrimidines form a structure which is called a pyrimidine dimer. The shape of the DNA often changes due to the formation of these dimers, which can cause problems during replication. In this way, UV irradiation can inactivate microorganisms and control microbial growth (52, 53).
In our study, the acrylic samples which had been modified with 5% nanotubes showed greater antibacterial and antibiofilm formation properties than 2.5%-modified and nonmodified samples. The higher concentration of TiO 2 nanotubes may lead to higher ROS and cause greater antibacterial properties. However, Verran et al. indicated that in a liquid system high concentrations and aggregates of particles results in fewer antibacterial properties because less light can pass through the suspension (51). In our attempt, greater antibacterial properties were achieved in the samples which were modified using a 5% concentration of TiO 2 , in spite of higher agglomeration of nanotubes.
For synthesis of TiO 2 nanotubes, we employed the anatase phase of TiO 2 nanoparticles. The crystalline structure of the material has an important role in its antimicrobial properties. In this regard, different studies determined improved antimicrobial properties for the anatase and rutile crystalline phase of titania (54). Li et al. determined the highest antibacterial activity for the anatase nanotubes among three crystalline phases of titania including anatase, rutile, and amorphous (55). Del Curto et al. reported a significant reduction in bacterial adhesion and colonisation of S. mutans, S. salivarius, and S. sanguis on anatase-coated titanium of dental implant abutments (56). Surface modification of Ti and surface topography, like the diameter of nanotubes, are other factors that can affect the antimicrobial properties of material (54,55). Studies also reported that resistance of bacteria to ROS attack affects the antibacterial properties of a material. The ability of a bacterium to tolerate ROS attack depends on its inherent characteristics, such as the cell wall, the thickness and structure of the cell membrane, and ROS-scavenging systems (17,20). However, all microbial strains utilised for this work showed very similar behaviour in all groups. In photocatalytic disinfection systems, when a large amount of ROS are produced, the excess of ROS would likely overwhelm the ROS scavenging systems of the microorganism (20). It should also be noted that the adsorption of bacteria on titania surfaces affects the antibacterial activity. The number of active hydroxyl groups in the nanotube structure could greatly enhance the 'adsorption' of the bacterial cell, and thereby enhanced antibacterial properties can be achieved (29).
Conclusion
Based on our results, it can be concluded that the modification of denture base acrylic resin with TiO 2 nanotubes can greatly improve its antimicrobial properties. Hence, the novel PMMA/ TNTs composite can be considered as a promising material for fabrication of acrylic resin base dental materials. However, further studies have to be done to evaluate the biological response of oral tissues to PMMA/ TiO 2 nanotube composites. (1) | 6,220.2 | 2018-12-01T00:00:00.000 | [
"Medicine",
"Materials Science"
] |
Inverse photonic design of functional elements that focus Bloch surface waves
Bloch surface waves (BSWs) are sustained at the interface of a suitably designed one-dimensional (1D) dielectric photonic crystal and an ambient material. The elements that control the propagation of BSWs are defined by a spatially structured device layer on top of the 1D photonic crystal that locally changes the effective index of the BSW. An example of such an element is a focusing device that squeezes an incident BSW into a tiny space. However, the ability to focus BSWs is limited since the index contrast achievable with the device layer is usually only on the order of Δn≈0.1 for practical reasons. Conventional elements, e.g., discs or triangles, which rely on a photonic nanojet to focus BSWs, operate insufficiently at such a low index contrast. To solve this problem, we utilize an inverse photonic design strategy to attain functional elements that focus BSWs efficiently into spatial domains slightly smaller than half the wavelength. Selected examples of such functional elements are fabricated. Their ability to focus BSWs is experimentally verified by measuring the field distributions with a scanning near-field optical microscope. Our focusing elements are promising ingredients for a future generation of integrated photonic devices that rely on BSWs, e.g., to carry information, or lab-on-chip devices for specific sensing applications.
Introduction
The control of electromagnetic fields in integrated environments is of paramount importance for a large number of applications in the broader context of information transmission, acquisition, and processing [1][2][3][4][5] . Traditionally, light in an integrated environment is controlled by waveguides that confine the light in the bulk of some media 6,7 . However, even stronger integration with better accessibility is achieved by confining electromagnetic waves to surfaces 8 . This led to the notion of surface waves, i.e., self-consistent solutions to Maxwell's equations localized at the interface between two media that exponentially decay away from the interface. Surface waves are characterized by a field profile in the direction normal to the interface and a dispersion relation that governs their propagation along the interface.
The most prominent surface waves are potentially surface plasmon polaritons (SPPs). SPPs are sustained at the interface between a metal and a dielectric. SPPs consist of a hybrid excitation in which some fraction of the energy is stored in the electromagnetic field while another fraction is stored in charge density oscillations of the conduction electrons in the metal 9 . While exploiting the coupling of light to an electronic excitation, the concentration of electromagnetic fields in a nanometric region close to the interface can be achieved. This has been instrumental for a large number of applications, e.g., to sense molecules or, more generally, to guide light at small length scales 10 . However, there is no free lunch; the large confinement of the fields in the metal also results in the dissipation of SPPs. This limits their propagation lengths. Typically, propagation lengths on the order of microns or at most tens of microns are achieved 9 . This rather short propagation length scale is considered to be an obstacle for many applications, and alternative surface waves that do not suffer from this limitation are explored.
The most appealing solution relies on Bloch surface waves (BSWs). BSWs are confined at the interface between an ambient material, i.e., air in our case, and a dielectric one-dimensional (1D) photonic crystal (PC) 11 . The PC consists of an alternating sequence of low-and high-permittivity dielectric materials. BSWs reside in the spectral region of the photonic band gap of the PC. They possess an evanescent field that decays exponentially in both the air medium and the PC and propagate along the interface of the PC. Propagation lengths on the order of millimeters for BSWs designed to operate at near-infrared wavelengths have been reported 12 . The propagation lengths, in the limit of vanishing intrinsic material absorption and the absence of scattering losses, ultimately are only limited by the number of layers in the PC. Such appealing characteristics make BSWs suitable for many sensing applications [13][14][15] and also for enhancing the interaction with a magnetic optical field 5 and supporting organic polaritons that result from the strong coupling between a BSW and organic excitons 16 .
To control the propagation of BSWs, a spatially structured dielectric device layer is usually deposited on top of the PC. The device layer modifies the dispersion relation of the BSW locally and changes the effective index experienced by the BSW 17 . As such, trajectories for light propagation can be defined, and multiple functional elements thereof have been investigated in the past. In particular, elements have been investigated that enable the BSW to impinge on a tiny spatial domain using the photonic nanojet effect 18 . This ability is particularly crucial for future application perspectives in the broader field of lab-on-chip sensing devices 19,20 . There, a fluid containing a substance to be detected flows through a channel, and the fluid should ideally interact with the BSW in a spatial domain as small as possible.
However, in most material platforms for BSWs, the accessible index contrast resulting from the propagation constant of the BSW in the absence/presence of the device layer on top of the 1D PC is rather small, i.e., on the order of Δn≈0.1. A larger index contrast is possible in theory 21 but because of practical difficulties has not been demonstrated. The achievable low index contrast is insufficient for a basic element, such as a circular disc, to efficiently focus the incident BSW. Other geometries have been investigated based on the rational design of selected geometries, and only recently, isosceles triangles have been identified as the most suitable geometry 18 . Subwavelength focusing was possible, but due to the limited index contrast, the focal width was well above half the wavelength, which is typically considered to be a lower bound for the focal width of far-field optical devices. Additionally, the field amplitude in the focus was rather small, and the undesirable side lobes were well pronounced.
To mitigate these limitations, here we rely on a computational strategy for inverse photonic design to identify elements that can focus BSWs more strongly than elements perceived by rational design. Our approach delivers material distributions on a checkerboard that focus the BSWs to a width nearly exactly half the wavelength of the BSW. We demonstrate the impact of increasing spatial resolution in the definition of the checkerboard pattern on the ability to better focus the incident BSW. We even The BSW is excited by means of frustrated total internal reflection at a free space wavelength of 1555 nm with TE polarization. The intensity distribution around the functional elements is imaged by a SNOM obtain a focal width that is slightly smaller than half the wavelength when the focal point is placed directly behind the element. This is realized by exploiting near-field components directly behind the structured device. Selected devices are fabricated, and we experimentally demonstrate their anticipated functionality by means of measurements with a scanning near-field optical microscope (SNOM) 22 . All together, we demonstrate that computational strategies for inverse photonic design are suitable to achieve functional elements that can control the propagation of BSWs to an extraordinary degree. The complex character of the structure allows for improved performance with respect to devices designed by a rational approach and constitutes an important step for the integration of BSWs for applications. Our approach can be used for other surface waves in which the limited index contrast imposes limitations in the design of functional photonic elements.
Design
The anticipated device is sketched in Fig. 1 along with the experimental prism coupling setup. It consists of the 1D PC on top of a glass prism. The PC is decorated with a spatially structured device layer that defines the functionality by means of refractive index contrast. BSWs are launched by frustrated total internal reflection in a spatial region sufficiently ahead of the functional element, i.e., on the order of 100 μm. The functional element, as designed by computational means, is shown in more detail in the inset of Fig. 1. It consists of a spatial domain in which the device layer is structured in a checkerboard pattern. In the examples presented in this manuscript, the spatial extent of the functional element is 40 μm in the x-direction, i.e., normal to the propagation direction of the BSW, and 10 μm in the z-direction, i.e., parallel to the propagation direction of the BSW.
The spatial resolution of the checkerboard structure is subject to modifications. Once the BSW is launched, it propagates towards the functional element. The functional element focuses the BSW as strongly as possible in a predefined distance behind the terminating edge of the element. As indicated in Fig. 1, the functionality is verified by measuring the intensity distribution of the BSW using a SNOM. The design of our optical chip consists of two steps. First, we must define the properties of the platform that sustains the BSWs. Second, we have to design the spatial distribution of the device layer that focuses the BSW in a predefined spatial region.
For the first part we rely on an established platform. We consider a fused silica substrate on top of which five double layers of Si 3 N 4 and SiO 2 are deposited to define the 1D PC. The top device layer is made from Si 3 N 4 . The free space wavelength of the device is λ 0 =1555 nm. The layer stack is designed to sustain a BSW with TE polarization, and an extension of our results to TM polarization is feasible 23,24 . The effective index without/with the device layer amounts to n w=o eff ¼ 1:1 and n w eff ¼ 1:2, and this defines the wavelength of the BSW λ BSW ¼ λ 0 =n w=o eff . More details on the multilayer design and dispersion curves are documented in the Materials and Methods Section and the Supplementary Information.
To design the functional elements, our objective is to maximize the electric field intensity within a region χ representing the focal spot at some distance z behind the functional element. As we have no prior information about the achievable focal width, it stands to reason that optimizing the intensity over a small region of space will naturally lead to the formation of a tighter focal point. The optimization problem can be stated as: where ψ represents the material distribution in the design region. [a,b] denotes the interval over which the localization of the focus is restricted. Having specified the design region, objective region, and focal distance of the device (see Materials and Methods section), we only need to define a resolution for the material grid, i.e., the feature size of the dielectric inclusions. Then, the material at each grid point needs to be optimized. The optimization method, which is inspired by the literature 25 , proceeds as follows. From an initial geometry, we test a separate inclusion at each location of the material grid and keep the one that maximally improves our design objective. In each iteration of the optimization, the impact of modifying each possible inclusion on the figure of merit (FOM) is simulated in 2D (see Materials and Methods section for details on the simulation method). The FOM is calculated according to the objective of maximizing the intensity in the focal spot. The modification of the inclusion that shows the largest improvement in the FOM is kept, while the others are discarded, and the next iteration begins. In case none of the inclusions leads to an improvement in the FOM, the algorithm uses a suboptimal inclusion and continues with the next iteration. Should the FOM improve in the next iteration, the optimization continues normally. If it does not, the algorithm considers another suboptimal inclusion in its search. The depth of this suboptimal search procedure is controlled by a parameter in the optimization routine. Generally, a shallower search leads to faster convergence but worse end results, and conversely, a deeper search leads to slower convergence but better results. We found a search depth of 3 (number of suboptimal inclusions allowed) to be suited for our problem. This somewhat exhaustive search mitigates the problem of rapid convergence into local optima while still guaranteeing convergence over time. The optimization landscape of our problem contains many local minima with different material distributions but similar performance. Our algorithm does not aim to find a global optimum for the problem but rather a local minimum that meets our design targets (e.g., regarding focal width and field intensity). Further aspects of the optimization are discussed in the Materials and Methods section and in the Supplementary Information. Taking all these aspects together, we can design functional elements that occupy a given spatial domain in which the material is distributed according to predefined constraints. The purpose of the elements is to focus the incident BSW into the smallest spatial region at a certain working distance behind the terminating edge of the functional element.
Selected results of the above design involving Eq. (1) are shown in Fig. 2, where we demonstrate the ability of a functional element to focus the incident BSW at some distance behind the terminating edge of the functional element, i.e., 5 µm in this case. In this simulation, we made a sequence of optimizations by varying the number of pixels to discretize the functional elements. This translates to a feature size of our grid with which the space is discretized. The intensity distributions of the electric field of selected designs are shown in Fig. 2a-d, where the feature sizes of the grid are 2μm ( = 1.414 λ BSW ), 1μm ( = 0.707 λ BSW ), 667 nm ( = 0.471 λ BSW ), and 500 nm ( = 0.354 λ BSW ), respectively. In Fig. 2e, we show the peak intensity of the focal spot and the focal width from the intensities shown in Fig. 2a-d along with some more data points. The focal width was measured as the full-width-at-half-maximum (FWHM) of the intensity in the focus. By means of 2D simulations, we demonstrate that the devices are rather robust against changes in the illumination profile, the angle of incidence, and the operation wavelength. These findings are reported in the Supplementary Information (see Section 7).
It is clear that a tighter focus can be achieved for a smaller feature size resolution. This result is not surprising, as more degrees of freedom are provided that affect the propagation of the BSW. One can also notice that decreasing the spatial resolution below a certain level (e.g., approximately smaller than 0.5 λ BSW ) does not necessarily improve the foci quality. That makes sense because the focal width is fundamentally bound by the resolution limit, and upon approaching it, further improvement is prohibited. In other words, if the structuring takes place on lengths much smaller than the wavelength of the BSW, the BSW does not resolve the spatial details of the structure anymore but experiences an effective medium instead. The only thing that can be further improved with a refined spatial distribution is the maximal field intensity of the focus. This is possible due to the suppression of side lobes in the field distribution behind the functional element that allows squeezing more of the incident energy into the focal region.
While contemplating the computationally optimized structures, a few insights can be obtained as to why the computer found them. Obviously, there is a tendency in the elements to consist of rather thin and lengthy waveguiding strings. They collect the BSW from an extended region and funnel it into a tiny central space at the upper part of the element. The phase accumulation of the BSW propagating in each of these strings on both sides with respect to the optical axis must be matched to result in constructive interference in the center. This allows efficient focusing of the BSWs. Although many details of the structure are hard to explain, they serve the anticipated purpose. We see this not only in the reduced focal width but also in the increased intensity/energy contained in the central spot. The ability of the designed elements to focus BSWs is superior when compared to the elements obtained by a rational design approach 18 . We stress that the samples presented here are only exemplary. Elements with comparable functionality and different design constraints have also been identified. To verify the suitability of our approach by experiments, we fabricated selected elements and characterized them by means of a SNOM. The results of this study are presented below.
Device characterization
Here, we focus on two different devices to demonstrate our concept. The first device is one from the simulations that have been discussed in the previous section. To be precise, we considered the device with a grid size of 667 nm that focuses the incident BSW at a distance of 5 µm behind the edge of the element and produces a focal width that corresponds to half the wavelength of the BSW. This corresponds to the optimized design result shown in Fig. 2c. To go beyond the fundamental half-wavelength barrier for the focal width, we also study a second device designed to focus the BSW directly behind the terminating edge of the functional element. As a result, we capitalize on near-field effects (with respect to the distance of the focus from the device edge) to focus the BSW into an even smaller region. Again, we used a grid size of 667 nm for the second device.
The elements are fabricated with standard procedures (see Materials and Methods section). To verify the presence of the designed structure and the quality of the fabrication, Fig. 3 shows scanning electron microscopy (SEM) micrographs of the fabricated elements. The a b Fig. 3 SEM micrographs of the fabricated samples for the two elements on which we concentrate here, and the elements are highlighted in light brown. a Element 1 serves the purpose of focusing the BSW 5 µm behind the element. b Element 2 serves the purpose of focusing the BSW directly behind the element with a focal width <0.5 λ BSW . The spatial extent covered by the element is 40 µm by 10 µm, and each pixel of the checkerboard has a side length of 667 nm. The white scale bar indicates 2 µm. The applied conductive layer to enable SEM imaging on the dielectrics results in visible artifacts as tiny particles, but the layer is removed before the SNOM measurements critical features of the structure patterns are well above what can be resolved with our e-beam lithographic system; hence, they are excellently reproduced from the original design. The conspicuous artifacts arise from a conductive anti-charging layer used for SEM imaging on dielectrics, which is removed before the SNOM measurements.
We use a scanning near-field optical microscope (SNOM) to visualize the interaction of the surface waves with the functional elements. The instrument measures, with the respective tip and the operational wavelength, the evanescent electric field distributions on the top of the PC multilayer stack. Hence, we can compare the measured intensity distribution to that of the simulated electric field intensity, i.e., E x j j 2 þ E z j j 2 . We perform two sets of measurements for each device. First, we map the light intensity over a 40 μm × 40 μm area to observe the overall field distribution of the surface wave after interaction with the functional element. The spatial resolution of this scan is 250 nm. In particular, mapping the light field over the entire structure allows us to locate the focal region with respect to the position of the functional element. Afterwards, to precisely measure the focal width, a scan with a resolution as high as 125 nm is performed over a 15 μm × 15 μm area around the focal region, which is cropped to a 5 μm × 5 μm area and displayed as an inset in Figs. 4b and 5b. Figure 4 shows the simulated and measured intensity data of the first device (see Fig. 3a), i.e., the one with the focus 5 µm behind the element. The simulated and measured intensity distributions are shown in Fig. 4a, b, respectively. The inset in b shows the intensity distribution around the focus in more detail, which is obtained from the high-resolution scan with 125 nm resolution. Figure 4c shows x-axis line plots through the focus of both the simulation and experiment to facilitate a quantitative comparison. Excellent agreement is observed between the simulation and the measurement. The only noticeable deviation in the measurement is a small tilt of the intensity pattern that emerges due to a minor alignment Fig. 3b) that focuses the incident BSW directly behind the element. The inset in b shows the intensity distribution obtained in a high-resolution scan in close proximity to the focal region. c Extracted x-axis line plots of the intensity through the focus. The intensities are normalized error with respect to the SNOM setup, which does not affect the quality of the measurements. Except for this discrepancy, the SNOM measurement demonstrates excellent reproducibility of the simulated result in a quantitative and qualitative manner. First, this justifies the 2D treatment of the simulation. Possible out-of-plane scattering losses at the corrugated surface, which are obviously not taken into account in the 2D simulation scheme yet present in the experiment, are negligible. They do not affect the focusing capability of the fabricated devices. In addition, the simulated and measured focal widths are in fairly good agreement. The focal widths were extracted as 708 nm (simulation) and 718 nm (measurement), which correspond to 0.50λ BSW and 0.51λ BSW , respectively. Such a value is expected for devices that are designed to preserve a certain working distance from the edge of the element. It clearly demonstrates that we can achieve a focal width that hits the lower boundary of what is feasible with far-field optical elements, i.e., in the far field with respect to the functional element along the propagation direction of the surface wave.
To surpass the canonical limit and achieve a focal width that is smaller than half the wavelength of the BSW, we place the focal point directly behind the terminating edge of the functional element. When we do this, it is possible to capitalize on the near-field effect, as the focal point is placed in the proximity of the edge of the functional element along the propagation direction of the BSW. This near-field effect can lead to subwavelength focusing and ultimately allow us to go beyond the canonical limit of 0.5 λ BSW . This performance is achieved with the second device. The simulated and measured intensity distributions of the second device with the focal point directly behind the functional element are shown in Fig. 5a and b. Figure 5c shows x-axis line plots through the focus of both the simulation and experiment. Again, excellent agreement is found between the simulation and the measurement. Moreover, the focal width in the simulation is 662 nm, which corresponds to a value of 0.47λ BSW . In the experiment, we even encounter a slightly smaller focal width of 625 nm, corresponding to a value of 0.44λ BSW . In this way, we experimentally demonstrate that it is possible not only to focus BSWs down to the classical diffraction limit but also to surpass it.
Discussion
We used a computational approach to design focusing elements for Bloch surface waves that are notoriously challenging to focus due to the limited index contrast. By relying on strategies geared towards the solution of the inverse problem, we defined material distributions on a checkerboard pattern that allowed focusing BSWs at a predefined location. Two-dimensional simulations were performed to optimize the structure pattern of the functional element. Our ideas have been verified in devoted experiments, in which selected devices were fabricated and the intensity distributions of those devices were investigated by means of a scanning near-field optical microscope. Excellent agreement between simulations and experiments was observed. For the device with a focal point farther away from the edge of the functional element, a focal width exactly half the wavelength of the BSW was simulated and verified in the experiment with the fabricated device. For the case of a focal point directly behind the functional element, the focusing of the BSW was predicted to be even below half the wavelength, which was experimentally confirmed. All together, our work lays a solid foundation to control the propagation of BSWs in integrated optical circuits placed on the top surface of a 1D PC multilayer stack. In the design of these circuits, we no longer have to strictly rely on the extrapolation of elements from established macroscopic systems, but we can rely on computational approaches to achieve functional elements, which have also been proven in other technical platforms, such as planar waveguide circuits 1,26-28 . The designed elements can find immediate use, for instance, in lab-on-chip systems where tightly focused BSWs interact with materials carried in fluidic channels to perform spectroscopic measurements.
In addition to the application to BSWs, our work demonstrates how electromagnetic fields can be efficiently controlled when only a limited index contrast is available for steering. This likely applies to many systems that support surface waves, since a large index contrast also frequently implies a large impedance mismatch. This often causes out-of-plane scattering losses that degrade the quality of the device as it limits the propagation length of the surface wave. In our approach, we circumvent the necessity of a large index contrast to control the propagation of the surface waves. Along these lines, we unlock novel opportunities to design integrated optical circuits that rely on surface waves, which is something that is urgently needed in the broader context of applications such as addressing single molecules 29 , photonic quantum devices for fundamental studies 30 , and enhancing sensing capabilities using the quantum nature of light 31 .
Materials and methods
Bloch surface wave multilayer platform
Numerical modeling
For the design of the functional elements, we rely on the finite-difference frequency-domain (FDFD) method as implemented in the frequency domain solver of the open source software package Meep 32 . In the simulations, we consider a finite two-dimensional (2D) spatial region that corresponds to the area occupied by the functional element and some space beyond. The space is assumed to be invariant in the third dimension. We consider the space to be filled with a medium according to the spatial distribution of the effective index of the BSW and study the propagation of plane waves with an electric field polarization in the x-direction. The electric field then also has a component in the z-direction, while the magnetic field only has a component in the invariant y-direction, corresponding to the experimental situation. Plane waves are launched into the computational domain that is surrounded by perfectly matched layers. The resulting field distributions are in excellent agreement with full-wave three-dimensional simulations, as reported in the Supplementary Information (see Section 5). Moreover, comparisons to the experiments justify our approach.
Optimization area
The coordinate system has its origin, i.e., (x, z) = (0, 0), 1 μm below the lower edge of the functional element and in its center. Therefore, in our particular case, we considered a x = −0.5 µm and b x = 0.5 µm as the spatial restriction in the x-direction. For the spatial restriction in the z-direction, we defined initial values of a z = 15.5 µm and b z = 16.5 µm to optimize the functional element to have a focal point 5 µm behind its terminating edge.
Optimization approach
The optimization procedure is written in Python 33 , using the Python API of Meep 32 for the FDFD simulations. When each inclusion is modified, the simulations to be performed are distributed over a computing cluster. Running on 100 cores (mixed Intel Xeon X5670 & X5570), this allows the optimization of material grids with half-wavelength feature sizes within as little as one hour, while larger feature sizes can be optimized within a few minutes. We report on further aspects of the optimization routine in the Supplementary Information.
Fabrication
First, the multilayer stack for the BSW and the device layer is fabricated by plasma-enhanced chemical vapor deposition (PECVD, PlasmaLab 80 Plus by Oxford). Silane, ammonia, and nitrous oxide are used as precursor gases at a process temperature of 300°C. Second, the spatially structured device layer has been defined on top of the PC multilayer stack. We use e-beam lithography and subsequent reactive-ion etching. We use a negativetone photoresist (ma-N 2403) and a conductive polymer (Espacer) to avoid charging the sample during the lithographic process. The exposure is performed in a JEOL JBX-5500FS e-beam writer with an accelerating voltage of 50 kV. After removal of the conductive polymer in deionized water and the development (MF-319) of the photoresist, the device layer is etched in CHF 3 and O 2 plasma (Oxford RIE 80). The resist is removed in an O 2 plasma. More details on the fabrication process are provided in the Supplementary Information.
Optical near-field measurements
We characterize the devices using a scanning near-field optical microscope (SNOM). To excite the BSWs, linearly polarized light at λ 0 =1555 nm is first collimated and then illuminated through the prism onto the surface of the BSW multilayer structure (see Fig. 1). The illumination angle is adjusted such that the incident light couples to the BSWs at the phase matching angle, θ = 49.5°, inside the glass prism. The evanescent field of the surface wave is collected point by point with a metallized 200-nm-aperture-size SNOM fiber probe, which can map the near-field with high spatial resolution beyond the classical diffraction limit. | 6,935.6 | 2018-12-01T00:00:00.000 | [
"Physics"
] |
A Huanglongbing Detection Method for Orange Trees Based on Deep Neural Networks and Transfer Learning
Huanglongbing (HLB) is one of the most threatening diseases for citrus production and it has caused significant economic damage worldwide. Hence, computer-vision systems that are based on convolutional neural networks (CNNs) can detect HLB accurately. Moreover, the detection system should be able to discriminate between HLB and other citrus abnormalities to ensure that any treatments are effective. Besides, the causal pathogen of HLB is usually detected and diagnosed by the quantitative real-time polymerase chain reaction (qPCR) test, which is costly. Consequently, it is difficult to collect large datasets to train CNN-based systems. In this case, transfer learning from pre-trained CNNs is a solution for building an HLB-detection system using small-sized datasets. This paper evaluates two kinds of CNN architectures: series network (represented by AlexNet, VGG16, and VGG19 models) and directed acyclic graph (DAG) network (represented by ResNet18, GoogLeNet, and Inception-V3 models). These pre-trained CNNs are fine-tuned to distinguish HLB, healthy cases, and 10 kinds of abnormalities of the Citrus sinensis species, which is commonly known as sweet orange. The dataset includes 953 color images, where the leaf samples were collected from orange groves in north Mexico. The 10-fold cross-validation results show that all the CNNs present a 95% or higher HLB sensitivity. However, the number of trainable parameters impacts HLB detection more than the network’s depth. Specifically, VGG19, with 19 layers and 144 M parameters, reached a perfect sensitivity for all cross-validation experiments; whereas Inception-V3, with 48 layers and 24 M parameters, reached 95% sensitivity to HLB detection. This outcome happens because a higher number of parameters compensates for the limited number of HLB cases, so VGG19 can successfully transfer the learned characteristics to new cases. This study gives guidance when choosing an adequate CNN to efficiently detect HLB and other orange abnormalities. Besides, a detection scheme is proposed to be further implemented in a portable system to detect HLB in situ, potentially helping to reduce economic losses for small growers from low-income regions.
I. INTRODUCTION
According to the Food and Agriculture Organization (FAO) of the United Nations, citrus is the second most harvested The associate editor coordinating the review of this manuscript and approving it for publication was Donato Impedovo . fruit in the world [1]. However, there are geographic regions where the production and quality of citrus are threatened by diseases caused by fungi, bacteria, pests, and viruses, among others [2].
Of all of the infectious diseases, Huanglongbing (HLB), or citrus greening, is one of the most destructive diseases Several automated plant disease detection systems have been developed with the rise of deep learning techniques for image classification [11]. In general, convolutional neural networks (CNNs) have shown superior generalization capacity when compared to conventional methods because CNNs are able to automatically learn high-level features from raw images [12], [13]. However, as reviewed in the next section, the use of CNNs still needs to be explored to detect HLB and other citrus-specific nutritional deficiencies, diseases, and pest symptoms. Additionally, the CNN models are usually unshared with the research community, limiting their reproducibility and improvement.
CNN architectures usually have many layers and convolution kernels, which dramatically increase the number of trainable parameters (i.e., convolution coefficients). This characteristic makes CNNs prone to overfitting when training from scratch using a small-sized dataset. In this regard, because qPCR is a costly diagnostic test to confirm HLB, collecting a large number of labeled images is challenging. Thus, this limitation should be considered when developing a CNN-based HLB-detection system. Transfer learning is a feasible alternative to overcome this issue, where a CNN model is trained on a large dataset and its learned parameters are then transferred to a smaller dataset through a fine-tuning procedure [14].
Given that HLB detection is critical to avoiding economic losses, mainly for small producers from low-income regions, this study aims to experimentally determine a CNN architecture using transfer learning that accurately detects 12 orange abnormalities, including HLB, which is one of the most threatening diseases for orange groves. We also propose an automatic detection system involving three main modules: 1) image acquisition of leaves with a portable studio, 2) automatic image segmentation, and 3) abnormality detection with a CNN model. According to our literature review, detecting 12 orange abnormalities has not been performed because current CNN-based HLB-detection approaches detect up to four classes. Besides, the proposed detection system can potentially be implemented in a portable system to detect HLB and other abnormalities in situ.
The significance of this work is to potentially reduce the economic losses of small citrus producers from low-income regions by introducing a CNN-based technological alternative to detect HLB, and other pathologies and deficiencies that affect orange trees. Table 1 shows an overview of current detection systems based on computer vision for classifying HLB and other abnormalities. In most works, conventional detection systems are proposed, although more CNN-based systems have recently emerged. The general pipeline of these methods is as follows.
II. LITERATURE REVIEW
First, the input image is drawn from an image acquisition system, which is often based on RGB cameras to obtain color images of leaves and fruits. Specialized acquisition systems TABLE 1. HLB detection systems based on computer vision, where n is the number of images in the dataset, and c is the number of classes. They are sorted in descending order regarding the number of classified abnormalities. The symbols ''♣'' and ''•'' indicate that the dataset includes leaf and fruit samples, respectively.
(e.g., fluorescence spectroscopy and hyperspectral cameras) can also be used to obtain images that enhance the reflectance properties of vegetal samples.
Next, in a conventional detection system, image classification is performed in two stages. The first stage extracts from the input image a set of hand-crafted features (e.g., color and texture) to create a feature vector that is classified in the second stage by an ML method (e.g., a support vector machine (SVM) or artificial neural network (ANN)) that determines the type of abnormality.
In contrast, in a CNN-based detection system, image classification is performed in a single process in which learned high-level features are extracted from the input image through several convolutional layers, and an output softmax layer gives the posterior probabilities to distinct classes of abnormalities.
Generally, conventional detection systems consider samples with HLB, healthy, other diseases, and nutritional deficiencies. However, it is common to train the detection system with two superclasses: HLB-positive and HLB-negative [24], [28]. In multiclass classification, two-stage methods have been proposed that first detect if a sample is normal or abnormal. In the case of abnormality, the classification between different diseases is performed in the second stage [17], [19]. Other approaches have performed multiclass classification in a single stage to differentiate between several classes of abnormalities and healthy cases, where the number of classes ranges from three to six [15], [16], [23].
It can be seen in Table 1 that conventional and CNN-based systems were evaluated using particular sets of images with a different number of samples and classes, and therefore a paired comparison between methods would be unfair. Consequently, in this study, we directly compare CNN-based models and a conventional method based on hand-crafted color and texture features using the same image dataset. Moreover, we increased the classification to 12 classes of orange leaves, which is the largest number of classes to be considered so far.
It is worth mentioning that CNNs have been widely used to detect a wide range of diseases in many crops. Recent works use the public PlanVillage dataset (up to 58 classes) to train CNN-based disease detection systems [31], [32], [33], [34]. Although these works present relevant advances in detecting diseases in a wide variety of plants using CNNs, the PlantVillage dataset only includes HLB images-it includes no other orange diseases, nutritional deficiencies, and pests whose symptoms could overlap with HLB. Consequently, it is crucial to distinguish between HLB and other kinds of abnormalities to apply an adequate treatment. Thus, because CNN models have demonstrated a remarkable classification performance, it is convenient to develop CNN-based methods by considering other citrus diseases, pest symptoms, and nutritional deficiencies that are endemic to the citrusproducing region.
III. PROPOSED APPROACH
A. DETECTION SYSTEM SCHEME Figure 1 shows a block diagram of the proposed HLB detection scheme, which includes three modules: 1) image acquisition, which is a portable studio with controlled illumination that is used to acquire leaf images in situ; 2) segmentation, which is a region of interest that is automatically obtained from the inner part of the leaf; and 3) CNN model, in which leaf classification is performed to discriminate between 12 classes of abnormalities, including HLB. The technical details of the modules are given in the following sections.
B. IMAGE ACQUISITION AND LEAF DATASET
The leaf samples that are used in this study were collected from orange trees of Citrus sinensis (L.) Osbeck species. The sample collection required the support of experts from the The protocol to obtain the image dataset consists of the following steps: 1) Cut leaf samples in full development from four branches of the orange tree. 2) Take color images of samples with an RGB camera using an image acquisition system with controlled lighting and dark background. 3) Place the samples on absorbent paper towels and put them in sealed plastic bags with their corresponding identification number. 4) Transport the samples in ice chests to the Molecular Detection Laboratory (Empacadora Santa Engracia, Tamaulipas, Mexico).
5) Perform the diagnosis of HLB (caused by
Ca. L. asiaticus) using qPCR analysis by following the protocol of the National Phytosanitary Reference Center of the General Directorate of Plant Health (CNRF, Mexico) [35]. In the second step of the collection protocol, the samples were photographed in situ using a portable studio. Every leaf sample is placed on the dark bottom of the box, which is internally illuminated by white LEDs. The box is closed with a lid with a hole for the camera lens, as shown in Figure 2. This image acquisition system has two purposes: 1) to control the illumination conditions and 2) to contrast the leaf's colors from the dark background. Hence, these elements help the segmentation method to automatically define a global intensity threshold, as detailed in Section III-C. The pictures were shot with a Samsung Galaxy phone (Samsung Electronics, Suwon, South Korea), model SM-J730 Pro, with 13 megapixels, without zoom and flash, and saved in JPEG format.
The generated dataset that is used in this study comprises 953 leaf samples and 12 classes, including distinct diseases, nutritional deficiencies, and pest symptoms (as summarized in Table 2). The observed class imbalance reflects the frequency of the abnormalities that are detected in the orange groves. Additionally, Figure 3 shows representative samples of each abnormality for didactic purposes to illustrate the expected differences in texture and color patterns that are present on the leaves.
C. REGION OF INTEREST SEGMENTATION
An image segmentation method was devised to automatically extract the maximum inscribed circular region of interest (ROI) inside the leaf. Because a CNN has a fixed-size input layer (as summarized in Table 3), the ROI's width and height are scaled to fit the specific dimensions of the input layer. Therefore, a circular ROI allows us to preserve the leaf patterns' aspect ratio when uniform scaling is performed to fit the CNN's input. The proposed ROI segmentation is based on the Otsu method to efficiently separate the leaf from its background and then detect the maximum inscribed circle within the leaf region. The Otsu method determines the optimum gray-level threshold that maximizes class variance between the foreground and background pixels. Hence, the image histogram should present a bimodal distribution to find an adequate threshold between modes [36]. In this regard, we evaluate different color spaces to determine the best chromatic component that produces a well-contrasted image such that the Otsu method provides a good leaf segmentation. The following color spaces are evaluated: RGB, HSV, YCbCr, LAB, LUV, and YIQ. Every component of these color spaces is considered separately to obtain an intensity image that is used as an input to the Otsu method. Finally, the Intersection over Union (IoU) metric (also known as the Jaccard index) compares the segmented image versus its reference, which is determined by the manual outlining of the leaf region.
The experiments on the 953 leaf samples of the dataset revealed that the B channel of the LAB color space is the best option, with an IoU value of 0.991 ± 0.006. We refer to the reader to Appendix to consult the complete results for all the evaluated chromatic channels. Based on these results, the proposed automatic ROI segmentation method follows the pipeline shown in Figure 4, which comprises the following steps: 1) Get an image of the leaf sample using the portable studio, as shown in Figure 2. 2) Convert the input image from RGB to LAB color space to decorrelate the luminance component from the chrominance components. 3) Separate the chromatic B component and reduce the noise with a Gaussian filter with σ = 1 and a 5 × 5 pixel kernel size. 4) Apply the Otsu method to obtain the binary mask of the leaf. 5) Calculate the distance map of the binary mask [37]. 6) Obtain the maximum distance value d max from the distance map. 7) Extract the (x, y) coordinates in the distance map with values d max . 8) Average the (x, y) coordinates to get the centroid (x,ȳ) of the circular ROI. 9) Calculate the Euclidean distance from the point (x,ȳ) to all pixels in the image. 10) Obtain the points whose Euclidean distance is less or equal to d max to define the circular ROI. 11) Mask and crop the input RGB image to obtain the ROI that a CNN will classify.
D. CNN ARCHITECTURES AND TRANSFER LEARNING
A CNN consists of multiple layers that perform convolution and pooling operations to extract feature maps, which are similar to neurons in a biological brain's primary visual cortex [38]. As the CNN's depth increases (i.e., the number of layers augments), the feature maps' dimension gradually decreases to activate more subtle features. Finally, a fully connected layer with the softmax function performs the classification task [12]. Thus, CNN automatically learns highlevel features to discriminate between different objects. Currently, CNN architectures can be divided into two main types [39]: • Series network: this architecture has a strictly sequential path, where single layers are arranged one after the other from input to output, as shown in Figure 5(a).
• Directed acyclic graph (DAG) network: this architecture allows path bifurcations, where layers have inputs from multiple layers and outputs to multiple layers, as shown in Figure 5(b). Training very deep series networks is computationally infeasible because the number of parameters grows rapidly as the depth increases. Thus, series networks have less depth and more parameters than DAG network architectures. In contrast, DAG networks have fewer parameters and are deeper than the series networks, which allows the efficient training of very deep CNN models. When the size of the training set is limited, designing a CNN from scratch has the risk of overfitting. This happens because the CNN contains many trainable parameters and in small datasets there is a higher chance of finding a solution that fits the training set but does not generalize well in new images. In this scenario, a CNN trained with millions of images can be reused to perform a new classification task, which is called transfer learning. It has been observed that the first layers learn general features, similar to Gabor filters, regardless of the training dataset. Therefore, in transfer learning, the convolutional layers of a pretrained CNN are kept to extract features, whereas the last fully connected layer is replaced with a new one matching the number of classes in the new classification problem. Next, a fine-tuning procedure is performed to update the network's parameters [14].
The characteristics of the six pre-trained CNNs that are used in this study are summarized in Table 3. These CNNs were initially trained to classify 1000 classes of objects in the ImageNet dataset [40]. To reuse these deep convolutional networks in our classification problem, the last fully connected layer and the softmax layer are replaced to match the 12 classes of orange leaves that are summarized in Table 2. The stochastic gradient descent algorithm then performs the fine-tuning of CNN parameters during 100 training epochs with a learning rate of 1 × 10 −4 and a momentum factor of 0.9. The mini-batch size is set to 16. At the input layer, the z-score standardization adjusts the distribution of pixel values in input images, which facilitates the activation and gradient descent progress [41].
Online data augmentation was used to reduce overfitting by applying random geometric transformations to the training set, including scaling, reflections, rotations, and translations. This procedure assumes that more information can be extracted from the original dataset through augmentation [42], [43].
IV. CONVENTIONAL HLB DETECTION SYSTEM
In a conventional HLB detection system, features are defined by an engineer, considering that they are discriminant for distinguishing different classes. These hand-crafted features are then calculated from an input image to form a feature vector that feeds a classifier [48].
The abnormality detection system based on hand-crafted features shown in Figure 6 is implemented for comparative purposes. This approach extracts 216 texture and color features using four methods that are usually employed to build HLB detection systems: 1) Ranklet co-occurrence matrix (RCM): Intensity invariant GLCM-based (gray-level co-occurrence matrix) features have been used to detect HLB in orange leaves. First, the input RGB image is converted to a gray-scale image, from which the ranklet transform is calculated. Four resolutions (2, 4, 8, and 16) and three orientations (horizontal, vertical, and diagonal) are considered to obtain 12 ranklet images. Next, 12 GLCM-based features are extracted from each ranklet image to get 144 RCM-based features [15] 2) Auto-mutual information (AMI): Similar to the RCM method, 12 ranklet images are obtained from the original gray-scale image. Next, the normalized mutual information measures the similarity between successive displacements of a single ranklet image, which is performed in the image's horizontal and vertical directions. Hence, a total of 24 AMI-based features are calculated [49]. 3) Local binary patterns variance (LBPV): Traditional local binary patterns (LBP) have been explored to detect HLB in citrus leaves [28]. However, in this study, we use LBPV, which is an improved version of the traditional LBP method that reduces the feature vector's dimensionality and incorporates local contrast information. Radius sizes of one (eight neighbors) and two (12 neighbors) are considered to obtain 24 LBPV-based features [50]. 4) Histogram statistics: The original RGB image is converted to the HSI color space. Next, eight histogram statistics (e.g., mean, standard deviation, entropy, etc.) VOLUME 10, 2022 are calculated from each HSI image channel. Hence, a total of 24 color-based features are calculated [28]. A Multilayer Perceptron (MLP) network with two hidden layers is trained to classify the 12 classes of orange leaves, as shown in Table 2. The input layer has 216 nodes that distribute the input feature vector to the first hidden layer. The optimal number of hidden nodes is determined by minimizing the validation error under a grid search scheme, where the search ranges are [5,165] and [5,125] for the first and second hidden layers, respectively, and where the maximum number of nodes corresponds to the 75% of nodes in the previous layer. The output layer is a softmax function with 12 nodes (one node per each class of orange leaves). The MLP network is trained with the backpropagation algorithm with a learning rate of 0.01, a momentum factor of 0.9, and a maximum number of training epochs of 1000. Moreover, before the training procedure, the texture features are rescaled to the range [−1, 1] by the softmax normalization to reduce the influence of extreme feature values [51].
V. CLASSIFICATION ASSESSMENT
For a classification problem with c classes, the corresponding confusion matrix C is a square matrix of size c-by-c whose ijth entry C ij is the number of elements of the actual class i that have been assigned to class j by the classifier.
Accuracy is probably the most frequently used measure to evaluate the overall effectiveness of a classifier, which is expressed by [52] where tr(·) is the trace operator and n is the total number of test observations. The accuracy should tend toward unity to indicate an adequate success rate. Because the leaf image dataset presents class imbalance, the accuracy tends to be optimistic due to the high hit rate of the majority class. Thus, to deal with imbalanced classes, the Matthews correlation coefficient (MCC) is used [53]: where C k is the kth row of C, C l is the lth column of C, and C T is the transpose of C. This index should tend toward unity to indicate an adequate classification performance. HLB detection capability can also be evaluated by dividing the image dataset into positive (HLB-infected) and negative (HLB-negative) classes. The latter includes all of the orange leaf classes except the HLB cases. In this case, sensitivity (SEN) and specificity (SPE) are calculated as [52] and where TP is a true positive, TN is a true negative, FN is a false negative, and FP is a false positive. SEN and SPE indices measure the classifier's effectiveness in identifying positive and negative classes. They should tend toward unity to indicate an adequate classification performance. The 10-fold cross-validation method creates disjoint training and test sets [54]. To determine statistical differences between methods, McNemar's test (α = 0.05) is used to check the disagreements between any two classification methods, where the null hypothesis is that the predictive performance of two models is equal [55]. Finally, the Holm-Bonferroni method performs the correction for multiple comparisons (i.e., post-hoc analysis) [56].
The computing platform employed an Intel i9 processor with eight cores at 3.60 GHz, 64 GB of RAM, and a graphic card NVIDIA GeForce RTX 2070. All of the programs were developed in MATLAB R2020a (The Mathworks, Boston, Massachusetts, USA). Figure 7 shows the classification performance results of the evaluated CNNs, including the conventional method based on hand-crafted features and an MLP classifier. It is observed that the VGG19 network obtained the best performance for the classification of 12 classes of orange leaves with ACC=0.991 and MCC=0.990. In contrast, the MLP-based method obtained the lowest classification performance with ACC=0.940 and MCC=0.935. It is also notable that all of the CNNs seem to have a similar classification performance. To verify if there are significant differences in accuracy between the methods, McNemar's test, with Holm-Bonferroni correction, revealed that AlexNet (p = 0.0073) and ResNet18 (p = 0.0349) performed statistically significantly differently from VGG19. Besides, the conventional method presented significant differences with all of the CNNs (p < 0.001). However, the other pairwise comparisons between CNNs did not present significant differences (p > 0.05). Table 4 summarizes the p-values of all pairwise comparisons between the classification methods. Figure 8 shows the multiclass confusion matrices obtained from the 10-fold cross-validation average of each classification method. The values on the diagonal of the matrices indicate the hit percentage per class, whereas the values off the diagonal represent the percentage of errors toward other classes. Note that VGG19 presented the highest success rate because it classified seven of the 12 orange leaves with 100% accuracy. In addition, it is confirmed that the MLP-based method obtained the lowest percentage of accuracy per class.
VI. RESULTS
It is important to remark that VGG19 was the only CNN that was capable of classifying HLB with 100% accuracy for all 10 cross-validation experiments. This behavior can be observed in Figure 9, which shows the binary classification performance to distinguish between HLB-infected and HLB-negative classes. Notably, the VGG19 network reached unity in the four measured indices (i.e., accuracy, MCC, sensitivity, and specificity), which indicates perfect classification performance.
The accuracy is almost unity for all classification methods, due to the high hit rate in the HLB-negative class, which is the majority class. However, because there is a high imbalance between the HLB-infected and HLB-negative classes (a ratio of 1/22), the MCC index helps measure overall classification performance. Therefore, according to the MCC, the performance of all of the methods' classification was reduced except for the VGG19 network. According to the sensitivity and specificity indices, this behavior is observed separately for the HLB-infected and HLB-negative classes. Note that the specificity is high for all of the methods (>0.99), although the sensitivity decreases substantially, except for VGG19.
VII. DISCUSSION
From the experimental results, VGG19 outperformed its counterparts in the classification of 12 classes of orange leaves, including HLB. VGG19 is the deepest series network evaluated in this study, so it is found that increasing the network's depth and the number of trainable parameters is decisive for improving accuracy. This effect is deduced from the lower classification performances of the other series networks: AlexNet and VGG16.
A disadvantage of series networks is that as depth increases, the computational cost of training increases dramatically due to the increment in trainable parameters. Therefore, DAG networks are designed to increase networks' depth without negatively impacting computational efficiency, which reduces feature maps' dimensionality. For example, GoogLeNet has modules called ''inception'' with stacked 1 × 1 convolutions, which reduces the number of trainable parameters.
In general, all the evaluated CNNs presented an acceptable performance in classifying 12 classes of orange leaves, with accuracy values greater than 97%. However, the number of network parameters has a noticeable impact on HLB detection. The VGG19 network, with 144 million parameters, reached a perfect sensitivity: all HLB cases for all crossvalidation experiments were classified correctly. The VGG16 network (with 138 million parameters) was the second-best method for detecting HLB because it reached an average sensitivity of 97.5%. In contrast, Inception-V3, with 48 layers and 24 million parameters, obtained an average sensitivity of 95%, which is the same level as AlexNet achieved with eight layers and 60 million parameters.
The number of trainable parameters has a more direct relationship with HLB detection than the network's depth. This behavior could happen because a higher number of parameters compensates for the limited number of HLB cases, so VGG19 can successfully transfer the learned characteristics to new cases. Therefore, increasing the number of HLB cases could improve the networks' classification performance with fewer trainable parameters.
Concerning the conventional method with hand-crafted features and an MLP classifier, it is notable that the accuracy depends on the quality of texture and color features extracted from the images. Determining these features is humandependent, and therefore this procedure involves subjectiveness. This study obtained 216 texture and color features from four description methods that were previously used for HLB detection. This strategy improved HLB detection results concerning our previous work [15], in which 144 RCM-based features were used. Consequently, the sensitivity of HLB detection increased from 83% to 92.5%. This result indicates that combining different feature description methods improves HLB detection. However, achieving competitive results between hand-crafted feature-based and CNN-based approaches is challenging. Nevertheless, an advantage of conventional classification systems is that they need fewer computational resources than CNNs, which require GPU-based platforms to calculate thousands of convolutions. Therefore, further research should evaluate and even create other handcrafted features that could improve the classification performance at a lower computational cost.
Notably, hand-crafted features with the MLP classifier achieved classification results around those of the other literature methods shown in Table 1. However, as expected, the experiments revealed that CNN-based methods achieved higher classification performance. Incorporating various types of nutritional deficiencies, pest symptoms, and diseases of orange leaves in the classification system could potentially decrease human errors due to the confusion of HLB characteristics with some other citrus abnormalities that can be treated with remedies. In addition, the proposed classification system can detect diseases that are typical of the citrus-producing region. In this study, we used leaf samples from the North of Mexico; hence, the generated CNN models involve a limited set of instances of the universe of orange abnormalities. Fortunately, it is feasible to improve the network models with new samples and classes through transfer learning. All of the CNN models generated in this study are available on request from the authors to fulfill this purpose. Besides, our image dataset is publicly available in [57].
VIII. CONCLUSION
This paper presented a comparative study of six different pretrained CNNs to classify 12 different abnormalities of orange leaves of the Citrus sinensis species, including HLB, nutritional deficiencies, and pest symptoms. In addition, a conventional method based on hand-crafted features and MLP was evaluated. From the experimental results, it is concluded that the best network is VGG19, which obtained an overall accuracy of 99% in detecting 12 classes of orange leaves. Moreover, VGG19 reached a sensitivity of 100% in detecting HLB-positive cases. In general, it was observed that the classification quality was better when the number of trainable parameters was higher. Conversely, the conventional method based on texture and color features presented the lowest classification performance, which demonstrates the difficulty of extracting subtle features with high discriminant power among classes.
This study gives guidance for choosing an adequate CNN to efficiently detecting HLB. We also provide an alternative solution for small citrus producers to detect in situ HLB and other orange abnormalities. Our future work considers implementing the proposed CNN-based detection system in an embedded system-on-module such as NVIDIA's Jetson family for the in-field detection of orange tree abnormalities. We also plan to include more leaf samples and other kinds of citrus abnormalities. Furthermore, the proposed method can be feasibly extended to other citrus crops affected by HLB, such as lemon, lime, and grapefruit. Figure 10 shows the Intersection over Union (IoU) results obtained with the Otsu method when the components of six color spaces are binarized independently (i.e., RGB, HSV, LAB, YCbCr, LUV, and YIQ). The boxplots concentrate the IoU results on the 953 images of the dataset. The best result is obtained by the B component of the LAB color space. | 7,007 | 2022-01-01T00:00:00.000 | [
"Computer Science"
] |
Investigation of the adsorption–desorption behavior of antibiotics by polybutylene succinate and polypropylene aged in different water conditions
Microplastics (MPs) are widely present in aqueous environments and aged by natural components of complex water environments, such as salinity (SI) and dissolved organic matter (DOM). However, the effects of multicondition aging on the physicochemical properties and environmental behavior of MPs have not been completely investigated. In this study, the degradable MP polybutylene succinate (PBS) was used to investigate the environmental behavior of sulfamethoxazole (SMZ) and was compared with polypropylene (PP). The results showed that the single-factor conditions of DOM and SI, particularly DOM, promoted the aging process of MPs more significantly, especially for PBS. The degrees of MP aging under multiple conditions were lower than those under single-factor conditions. Compared with PP, PBS had greater specific surface area, crystallinity, and hydrophilicity and thus a stronger SMZ adsorption capacity. The adsorption behavior of MPs fitted well with the pseudo-second-order kinetic and Freundlich isotherm models, indicating multilayer adsorption. Compared with PP, PBS showed relatively a higher adsorption capacity, for example, for MPs aged under DOM conditions, the adsorption of SMZ by PBS was up to 5.74 mg/g, whereas that for PP was only 3.41 mg/g. The desorption experiments showed that the desorption amount of SMZ on MPs in the simulated intestinal fluid was greater than that in Milli-Q water. In addition, both the original PBS and the aged PBS had stronger desorption capacities than that of PP. The desorption quantity of PBS was 1.23–1.84 times greater than PP, whereas the desorption rates were not significantly different. This experiment provides a theoretical basis for assessing the ecological risks of degradable MPs in complex water conditions.
Introduction
Plastics are widely used in many fields such as agriculture, commerce, industry, and daily necessities because of their low price, easy processing, and stable performance (Chan et al. 2022). However, plastic products exposed to the environment are further broken down by heat, salt, and dissolved organic matter (DOM) into many different particle sizes (Liu et al. 2022a), and those smaller than 5 mm are called microplastics (MPs) (Bakir et al. 2014).
Because of their light weight and small dimensions, MPs can easily enter water environments through atmospheric deposition and effluent discharges (Perumal and Muthuramalingam 2022). In this regard, Nizzetto et al. (2016) showed that about 1.5-4.5% of the total plastics produced globally are released directly into the ocean. The amount of marine plastic waste is huge, difficult to degrade, and gradually accumulates, causing damage to the marine environment (Davis et al. 2022). In actual marine environments, plastics undergo a series of aging processes, including physical wear and tear, ultraviolet radiation, biodegradation, and chemical oxidation (Tian et al. 2023). During the aging process, MPs release additives and intermediates, further increasing their ecological risks Luo et al. 2020). Aged MPs have increased specific surface areas and greater hydrophilicity and fluidity, which increase their ability to adsorb pollutants and act as pollutant carriers in the marine environment. In addition, because of its small dimensions, they are easily ingested by marine life, posing health risks to the latter (Bhatt et al. 2021;Gola et al. 2021).
Many studies have already looked into the physicochemical properties and environmental behavior of aged MPs. However, most of these are limited to aging in pure water, ignoring the influence of the natural components of water on the aging process (Fan et al. 2021;Sun et al. 2022). These natural components in the water may change the MP's microstructure, surface morphology, and environmental behavior (Liu et al. 2022a).
Salinity (SI) and DOM are the main components of seawater (Ding et al. 2020;Schmidt et al. 2017). A previous study has shown that functional groups (e.g., -OH, C-H) of conventional plastics aged in seawater change significantly with an increase in oxygen-containing functional groups, resulting in the high hydrophilicity and fluidity of the aged MPs (Ding et al. 2020). Meanwhile, Cao et al. (2022) showed that DOM promoted the weathering of aliphatic polypropylene (PP). However, the current research on MPs ignores the influence of the co-existence of SI and DOM during the aging process . Therefore, the influence of environmental factors in water on the aging process of MPs must be explored.
Degradable plastics are widely used daily to reduce the pollution caused by traditional plastics (Sato et al. 2017). However, there have only been a few investigations on degradable MPs . In this respect, polybutylene succinate (PBS) is a petroleum-based degradable plastic widely used in food packaging films, plant protection films, bone tissue engineering, and other applications. Degradable plastics can be completely degraded by the action of microorganisms in water (Zhu and Wang 2020). During degradation, PBS releases millions of plastic fragments (Wei et al. 2022). Compared with conventional MPs, degradable MPs have larger specific surface areas and can absorb more pollutants (Fan et al. 2021;Yu et al. 2019). However, although there have been some studies on degradable MPs, there is still a lack of research on degradable MPs that age in seawater. Thus, the investigation of the environmental behavior of biodegradable plastics is a matter of great necessity.
In recent years, antibiotic compounds were widely detected in the aqueous phase and classified as a new type of pollutant because of their overuse (Liu et al. 2022b;Roy et al. 2021). For instance, sulfamethoxazole (SMZ) is a type of antibiotic that can be found in concentrations of up to 1390 ng/L in the marine environment . MPs in the ocean can be carriers of SMZ and spread through the marine environments. Because of their small dimensions, marine organisms confuse MPs with food, which leads to accidental ingestion, causing MPs to accumulate in the food chain (Liu et al. 2022d). Antibiotic-loaded MPs may migrate through the food chain and adversely affect individuals and different communities of organisms (Capolupo et al. 2020). For example, SMZ can desorb in the intestinal fluid and cause toxicity to marine life (De Liguoro et al. 2009). Therefore, the interaction mechanisms between MPs and antibiotics must be understood to determine the potential ecological risks of degradable MPs on pollutants.
In this study, PBS and PP were used as target MPs. Meanwhile, SMZ was used as the target pollutant. The main objectives of this research were to (1) investigate the effect of aging on the physical properties of MPs under different conditions (i.e., DOM, SI, and DOM/SI), (2) study the behavior of PBS and PP in the adsorption of SMZ after aging, and (3) explore the difference in MPs' desorption behaviors for SMZ in Milli-Q water and simulated intestinal fluid. This study broadens the scope of research on MPs and contributes to the more comprehensive assessment of the potential ecological risks of MPs.
Materials
The PBS and PP used in this experiment were purchased from the Shanghai Guanbu Electromechanical Technology Co. (Shanghai, China). The average particle size of these two MPs was 40 μm. Chemicals needed for simulating seawater, such as NaCl, MgCl 2 , Na 2 SO 4 , CaCl 2 , and fulvic acid (FA), were purchased from Aladdin Industrial Co. (USA), with purity ≥ 98%. The drugs used in the experiment, sodium taurocholate (ST) and bovine serum albumin (BSA) with purity ≥ 98%, were purchased from Aladdin Industrial Corporation (USA).
MPs aging experiment
Four different solutions were prepared to simulate the aging behavior of MPs in water: pure water, a 4% SI solution, a 6-mg/L DOM solution, and a mixture of the two solutions. The 4% SI solution was prepared using an inorganic salt concentration of 28 g/L NaCl, 4.67 g/L Na 2 SO 4 , 6.05 g/L MgCl 2 , and 1.32 g/L CaCl 2 . The PBS and PP in were then placed in quartz tubes containing the solution described above. During the aging process, the MPs were stirred using a magnetic stirrer so that they were evenly distributed in the solution and fully aged. We placed the quartz tubes in a radiation chamber with 30 W/m 2 irradiance to better simulate the aging process of MPs in nature. The MPs were aged under UV irradiation for 48 h. This avoids inadequate MP aging and prevents the complete degradation of degradable plastics.
Characterization
The surface morphology of the MPs was characterized using scanning electron microscopy (SEM, Hitachi S-4800). The specific surface areas of the samples were measured using an ASAP 2020 instrument (Micromeritics, USA). The functional groups of original and aged MPs were characterized using X-ray photoelectron spectroscopy (ULVAC-PHI Inc., Japan) and Fourier transform infrared spectrometer (FTIR,Tensor 27.,Bruker). Variations in the crystallinity of the reaction sample were performed using X-ray diffraction (XRD, X'Pert PRO MPD., Netherlands). By measuring the contact angle (Dataphysics OCA20, German) of the samples, the changes in hydrophilicity were investigated.
Adsorption experiments
In the adsorption kinetics test, the time intervals were set to 30-2880 min, and experiments were conducted at 25 °C by mixing the 50-mg MP samples with 50 mL of the SMZ solution in the centrifuge tubes. The concentration of SMZ was set to 50 mg/L to highlight the adsorption quantity differences between the MPs and to reduce experimental errors. Centrifuge tubes were shaken at 150 rpm in a dark air bath thermostatic shaker. They were then removed at the set time intervals, and the sample solution was filtered through a 0.45-μm membrane filter. The concentration of SMZ in the filtrate was determined using high-performance liquid chromatography (HPLC).
Adsorption isothermal experiments were conducted in 100-mL centrifuge tubes. Referring to previous studies, the SMZ concentration was set to 1-20 mg/L . In this study, 50-mg samples were placed in the centrifuge tubes containing 50 mL of the solution for the experiment. The experiments were performed at 25 °C with a shaking speed of 150 rpm. The equilibrium time was set to 48 h. After the oscillation, the sample solution was filtered through a 0.45-μm membrane filter and then loaded into a brown injection bottle for testing.
Desorption experiments
In the desorption experiment, 500 mg samples were mixed with the 500-mL SMZ solution. The concentration of SMZ was set to 15 mg/L on the basis of the results of the isotherm experiments. After the adsorption saturation of the MPs, the samples were filtered and then dried at low temperatures. The desorption experiments were conducted in Milli-Q water and simulated intestinal fluid. The simulated intestinal fluid consisted of 5.0 g/L BSA and 10 mM ST in a 100-mM NaCl solution (Liu et al. 2020). Centrifuge tubes containing 50 mL of pure water and simulated intestinal fluid were taken and placed in an air bath constant temperature oscillator shaker (150 rpm) in the dark at 25 °C. The samples were shaken for 1-48 h. Thereafter, they were filtered through a 45-μm membrane filter and then measured using HPLC. On the basis of the experience of previous studies, we selected the 48-h desorption results for analysis (Cui et al. 2022b;Song et al. 2022).
Date analysis
The calculation formula of the carbonyl index (CI) is as follows (Prata et al., 2020): The calculation of the carbonyl index was determined on the basis of the absorbance at 1720-1725 cm −1 for carbonyl groups, and the absorbance reference peak depended on the type of MPs. The absorbance reference peaks of PBS were 3441, 2960, 1725, and 1152 cm −1 , whereas the peaks of PP were 3426, 2964, 1100, and 738 cm −1 .
For the adsorption kinetics studies, the pseudo-first-order and pseudo-second-order kinetic models were selected, and their equations are as follows : where Q t (mg/g) is the SMZ adsorption capacity of the MPs at time t(min). k 1 (min −1 ) and k 2 (g/mg·min) are the pseudo-first-order and pseudo-second-order rate constants, respectively. An intraparticle diffusion model was used to understand the rate at which SMZ was adsorbed by two different types of MPs.
where k p (mg/[g min 0.5 ]) is the internal diffusion rate and C is the mean boundary-layer thickness.
Adsorption isotherms can be fitted using the following models.
Langmuir isotherm: Freundlich isotherm: where Q max (mg/g) is the maximum value of the SMZ adsorption capacity of the MPs. k L (L/mg) and k F ([mg/g] [L/mg]) are the Langmuir and Freundlich distribution coefficients, respectively. Meanwhile, 1∕n F is the Freundlich model parameter that reflects the adsorption intensity and heterogeneity of the adsorbent. Table 1 compares the changes in crystallinity of PBS and PP before and after aging. The crystallinity of PBS increased from 24.82 to 26.31% (pure water-aged), 29.24% (DOMaged), 28.56% (SI-aged), and 26.68% (DOM/SI-aged). The crystallinity of PP increased from 14.14 to 19.48-25.98%. The crystallinity of the MPs increased during aging. These findings were consistent with those of a previous study (Cui et al. 2022a). The single-factor conditions (i.e., DOM and SI) promoted MP aging. The increase in crystallinity was due to the destruction of the noncrystalline structures of the MPs by reactive oxygen species (ROS) during the aging process, resulting in localized areas of secondary crystallization . Under severe photodegradation, the crystallinity of the MPs increased significantly (Arvaniti et al. 2022). Compared with PP, PBS was more crystalline and had a greater degree of surface breakage. Many studies have analyzed the surface morphology of PP, but there are fewer analyses of PBS (Luo et al. 2022). Therefore, the surface morphology of PP was not studied. The results of previous experiments show that degradable plastics generally degrade first (Fan et al. 2021). Therefore, an in-depth study of PBS was performed. The fragmentation of the PBS surface was observed using SEM. Figure 1 shows the SEM images of the original PBS and the PBS aged under different water conditions (i.e., pure water, DOM, SI, and DOM/SI). The surface of the original PBS is smooth (Fig. 1a). Meanwhile, the PBS aged in pure water for 48 h shows small cracks on the surface (Fig. 1b). According to Fig. 1c-e, the PBS aged in different water conditions had rough surface morphologies. The PBS aged in DOM had cracks and holes in the surface (Fig. 1c). In Fig. 1d, some of the plastic fragments came off the whole. This is because MP aging generally has two modes of degradation (i.e., cracking and flaking) , and PBS is degraded mainly by cracking (Liu et al. 2022c). In Fig. 1e, a small number of holes formed on the surface of the MPs. The change in surface roughness in Fig. 1c-e was not significant, probably because of the short aging time. The degradation rate is positively correlated with time . Figure 1 shows that complex water environments accelerate the formation of pores and cracks on the PBS surface, which increases the adsorption sites for pollutants and thus its pollutant adsorption ability.
Degree of crystallinity, morphology and surface properties
The specific surface area (S BET ) figures for PBS and PP are presented in Table 1. The S BET of the original PBS and PP MPs were 0.35 and 0.37 m 2 /g, respectively, which increased to 0.43 and 0.45 m 2 /g after aging in pure water. As can be seen in Table 1, natural components in the water column promoted the aging process of MPs. For example, the S BET of PBS increased to 0.51, 0.47, and 0.45 m 2 /g in the DOM, SI, and DOM/SI, respectively. Meanwhile, the S BET values of PP after aging were 0.82, 0.67, and 0.45 m 2 /g in DOM, SI, and DOM/SI, respectively. The S BET of the MP surface increases due to photodegradation. MPs aged under single-factor conditions had a high degree of aging. The S BET of the aged MPs increased, which was similar to the results of previous studies .
The degree of MP aging varied in different water environments. According to Fig. 1 and Table 1, the single-factor conditions promoted MP aging. However, multiple aquatic environmental factors slowed down the aging process of MPs, which may be related to the interaction between DOM and salt. In the coexistence of DOM and salt, inorganic salt ions such as Ca 2+ and Mg 2+ in the solution reduce the solubility of DOM in water (Strehse et al. 2018), which in turn enables solidification or precipitation. Because of the small concentration of DOM in the solution (only 6 mg/L), the observation of solidification and precipitation with the naked eye was difficult. The concentration of environmental components involved in the aging process and the concentrations of dissolved substances in the water were reduced. As such, the aging process of MPs was slowed down. This suggests that single-factor conditions in water promote the aging process of MPs. The results for the specific surface area agree with the XRD and SEM results.
Surface functional groups and contact angles
Figure 2 reveals the changes in the functional groups of MPs before and after aging. For PBS (Fig. 2a), the peak near 3441 cm −1 was the absorption peak caused by the hydroxyl group (− OH), whereas the peak at 2960 cm −1 was the absorption peak caused by the antisymmetric stretching vibration of the methyl group (-CH 3 ). The absorption peak of the carbonyl group (C = O) was located near 1725 cm −1 , whereas the absorption peak of − CH groups was located near 1152 cm −1 ). According to Fig. 2a, the intensity of the carbonyl peak of PBS increased after aging because of the oxidation of C-H. For PP, the main characteristic peaks were around 3426, 2964, and 1720 cm −1 , corresponding to the functional groups -OH, -CH 3 , and C = O, respectively. The peak near 738 cm −1 was produced by the -CH 2 bending vibration . As PP is a nonbiodegradable plastic and is more difficult to degrade than PBS, the change in its functional groups before and after aging was slight . During the aging process, PP releases plasticizers and the carbonyl peak signal is therefore weakened Yan et al. 2021). After aging, the oxygen-containing functional groups of PP increased . The increase in oxygen-containing functional groups in MPs after aging was probably due to the C-H bond breaking in the presence of UV light and reacting with oxygen to form oxygen-containing groups. These groups then combine with hydrogen from the surrounding environment to form hydrogen peroxide groups (-COOH), which then further decompose into other products (C = O, C-OH, and O-C = O) Qiu et al. 2022).
The C = O stretching vibration was more pronounced in aged PBS but was hardly observed in PP. This is because conventional plastics are more difficult to degrade (Tong et al. 2022). At the same time, the aging time of PP is short, and it does not age sufficiently; thus, changes in C = O are difficult to observe ). The aging conditions contain Cl −1 , which is more susceptible to substitution reactions and therefore has a higher degree of oxidation . Qiu et al. (2022) point out that DOM can release ROS and promotes the aging process of MPs. The interaction of PP with the aromatic structure in FA results in a more pronounced change in the characteristic peak under the aging condition of DOM (Cao et al. 2022). This change is mainly reflected in the carbonyl peaks (1720 cm −1 ). FA usually contains more carbonyl and fatty functional groups, interacting with PP through π-π bonds (Abdurahman et al. 2020). However, the co-existence of DOM and SI conditions did not show higher degrees of oxidation, probably because DOM and SI were mutually constraining, which reduced the aging process of the MPs. Meanwhile, the increase in the number of oxygen-containing functional groups increased the hydrophilicity of the aged MPs. Figure S1 shows the changes in contact angles of the MPs before and after aging. The contact angle of the original PBS was 81.11. After aging, the contact angle of the PBS was reduced to 55-64°. The contact angle of the original PP was 125.83°, whereas all the contact angles were less than 80° after aging. This phenomenon indicates that PP changes from hydrophobic to hydrophilic, similar to the results of previous studies (You et al. 2021). These results suggest that aging increases the oxygen-containing functional groups and therefore increases the hydrophilicity of MPs. This agrees with the FTIR results. This study shows that UV radiation and water environment factors are conducive to hydrophilicity enhancement and accelerate the aging process of MPs.
The O/C ratio and CI index
The O/C ratio and CI are used to indicate the degree of MP aging (Fan et al. 2021). As can be seen from Table 2, the O/C ratio of PBS increased from 0.412 to 0.416 (pure water), 0.432 (DOM), 0.420 (SI), and 0.417 (DOM/SI). Meanwhile, the O/C ratio of PP increased from 0.281 to 0.348-0.383. Under the same conditions, the O/C ratio of PBS was higher than that of PP, which means that PBS is more susceptible to aging than PP.
The CI index is used as a metric to indicate the degree of MP aging. The CI index of PBS increased from the original 0.077 to 0.082 (pure water), 0.220 (DOM), 0.190 (SI), and 0.162 (DOM/SI) (Table 2). Meanwhile, that of PP increased from 0.129 to 0.171-0.254. The pattern of change in the CI index was consistent with the O/C ratio. The MPs aged under DOM conditions had higher degrees of aging. This may be attributed to the fact that ROS produced by DOM under UV irradiation promotes the aging process of MPs (Qiu et al. 2022). The aging of MPs was more influenced by DOM than by SI. This was probably because the presence of Cl − could impede the photo-aging process of MPs . However, the level of MPs aged in SI remained large compared with those aged in DOM/SI and pure water, possessing a large CI index. The CI index and O/C ratio can reflect the degree of MP aging (Fan et al. 2021). Generally, the water environment factor promotes MP aging, and the degree of MP aging depends mostly on the aging conditions ).
Adsorption kinetics
Kinetic equations were used to fit the experimental data to better analyze the adsorption process and elucidate the adsorption mechanism, and the resulting parameters are summarized in Table S1. The pseudo-second-order model (R 2 > 0.99) was better fitted to the SMZ adsorption data for PBS and PP. This suggests that the adsorption process of MPs on SMZ was mainly chemisorption (Bao et al. 2021).
The k 2 values of the aged MPs decreased during the aging process. The low values of k 2 indicated that the adsorption rate decreased with time, and the absorption rates were proportional to the number of unoccupied sites (Gupta et al. 2010). As shown in Fig. S2, the adsorption quantity of MPs increased gradually during the first 1500 min; after 1500 min, the adsorption quantity hardly varied with time, so saturation was reached at this point.
According to Fig. S2, the adsorption quantity of SMZ by the aged MPs increased with the increase in time. The MPs aged under DOM condition showed a strong carrier capacity for pollutants, especially for PBS (Fig. S2). For example, the adsorption quantity of PBS increased from 4.56 to 5.74 mg/g, whereas that of PP increased from 2.80 to 3.41 mg/g. The adsorption capacity of MPs was related to their physicochemical properties. Compared with that of PP, FTIR results showed that PBS had more oxygencontaining functional groups on its surface, which facilitated the adsorption of SMZ on PBS. At the same time, PBS had a larger specific surface area (Fig. 1, Table 1), which increased the adsorption sites on the surface of the MPs and enhanced their ability to adsorb pollutants. Previous studies have also confirmed these results (You et al. 2021). In addition, the crystallinity of the MPs increased during aging, which facilitated the enhanced SMZ adsorption . Thus, compared with PP, PBS can adsorb more antibiotics, has a greater carrier capacity, and poses a greater ecological risk to water environments. An intraparticle diffusion model was used to fit the adsorption kinetic data to clarify the adsorption mechanism of SMZ on the original and aged MPs. The results are summarized in Fig. 3 and Table S2. Figure 3 clearly shows that the internal particle diffusion model has a linear relationship, where k p1 > k p2 > k p3 . The slope determines the rate of adsorption, and the fitting parameter ( C ) reflects the causes that influence the rate of adsorption. Therefore, the adsorption process of SMZ on MPs is divided into three main stages : Stage I, external mass transfer; Stage II, interfacial diffusion; Stage III, intraparticle diffusion.
At Stage III, the adsorption of MPs reached equilibrium. The k pi values of the aged MPs were higher than that of the original MPs. This may be due to the increased number of MP adsorption sites after aging, which facilitated faster access of SMZ to the adsorption sites. Meanwhile, the k pi values for aged PBS were higher than that of aged PP, indicating that the antibiotics diffused more rapidly in aged PBS. PBS developed large cracks and higher crystallinity during the aging process, which enhanced antibiotic adsorption Yao et al. 2021).
Adsorption isotherms
The results of the Langmuir and Freundlich isothermal sorption models fitted to the sorption isotherm data are shown in Table 3. In comparing the R 2 of the two models, the Freundlich isotherm model (R 2 > 0.93) better describes the adsorption behavior of SMZ on MPs than Langmuir (R 2 < 0.89). This indicated that the adsorption of SMZ on For PBS, k F and 1∕n F increased after aging. Of these changes, the most pronounced effect was seen for the PBS aged in DOM, where k F increased from 0.5390 to 1.722 mg/g(L/mg) 1/n , whereas 1∕n F increased from 0.4285 to 0.7125 mg/g(L/mg) 1/n . This suggests that MPs aged in DOM have a strong adsorption capacity. This was because DOM accelerated the aging process of the MPs, creating cracks and pits on their surfaces and increasing their adsorption sites. Compared with PBS, PP had a weaker adsorption capacity. The k F values increased from 0.1959 to 0.3910-0.9108 mg/g(L/mg) 1/n . However, the values of 1∕n F did not differ much, which may have been due to the short aging time.
PBS had a stronger adsorption capacity for SMZ than PP, which may have been due to the different physicochemical properties of PBS and PP. The reasons for the different physicochemical properties are as follows: (1) According to Fig. 2, the aged MPs contained more oxygen functional groups. The oxygen-containing functional groups increased the hydrophilicity of MPs as they make hydrogen bonds with water (Fig. S1), which enhanced the adsorption of antibiotics (Shi et al. 2022); (2) the aged MPs had larger specific surface areas and more adsorption sites, which increase the ability of degradable MPs to adsorb antibiotics . Consequently, PBS is more likely to carry higher levels of contaminants such as antibiotics in water environments.
Desorption kinetics
The data for the desorption of MPs in pure water and intestinal fluid are shown in Figs. 4 and S3. The amount of desorption may be related to several factors: (1) the environment of desorption (Ho and Leung 2019), (2) the types of MPs, and (3) the aging conditions of the MPs. As can be seen in Fig. 4, the amount of desorption in the intestinal fluid is significantly greater than that in the water.
According to Fig. 4a, the desorption of the original PBS in the intestinal fluid was 1.14 mg/g, which increased after aging to 1.75 mg/g (pure water-aged), 3.15 mg/g (DOMaged), 2.94 mg/g (SI-aged), and 2.46 mg/g (DOM/SI-aged). From Fig. S3a, the desorption rate increased from 29.00 to 36.44%, 54.83%, 53.90%, and 46.51%, respectively (Fig. S3a). The amount of original PP desorbed in the simulated intestinal fluid increased from 1.10 to 1.71 mg/g, and the desorption increased from 39.36 to 50.59%. The quantity and rate of desorption of antibiotics in Milli-Q water were much smaller than those in the intestinal fluid (Fig. S3b).
The quantity of MPs desorbed in the simulated intestinal fluid was large compared with that in the desorption in Milli-Q water. A previous study has shown that ST in the intestinal fluid can facilitate the desorption of some pharmaceuticals (McDougall et al. 2022). It increased the desorption quantity of MPs by increasing the solubility of the SMZ (Liu et al. 2020). The active agent on the intestinal surface can increase the desorption of the polymer by increasing the diffusion rate of the pores within the particles. In addition, hydrophobic organic pollutants can be easily desorbed from the MPs after adsorption onto the surface of the MPs (Hartmann et al. 2017;Song et al. 2022).
The adsorption of pollution by MPs is a reversible physical process . The quantity of MPs desorbed in the simulated intestinal fluid depends on the adsorption ability of the MPs (Ito et al. 2022). Consequently, PBS had a stronger desorption capacity for SMZ than that of PP. Thus, PBS may release more antibiotics in organisms than PP, causing more serious adverse effects in organisms.
Conclusions
In this study, the changes in the physicochemical properties and environmental behavior of PBS aged under different water environments were systematically investigated. PBS and PP MPs were used in this research, and the key findings are as follows: (1) The patterns of variation in crystallinity and O/C ratio indicated that PBS was more susceptible to aging and degradation than PP. The hydrophilicity and specific surface area of MPs increased during the aging process. The aging process of the MPs was encouraged under single-factor conditions (i.e., DOM and SI). The coexistence of multiple aqueous environmental factors (i.e., DOM and SI) did not have a synergistic action on the aging process of MPs, which may have been mutually constrained, and the degree of aging was lower than those under single-factor conditions. (2) The adsorption kinetics tests showed that the adsorption of SMZ by MPs occurs mainly through surface adsorption and intraparticle diffusion. The results of the adsorption isotherms indicated that the adsorption of SMZ by MPs was multilayer on nonuniform surfaces. Compared with PP aged under the same conditions, PBS showed a better adsorption capacity. The aging process increased the adsorption capacity and strength of MPs, as well as their ecotoxicity. (3) The desorption experiments showed that the quantity of MPs desorbed on the simulated intestinal fluid was approximately 10 times higher than that in Milli-Q water. Aging enhanced the desorption ability of SMZ by MPs. Compared with PP, PBS showed a higher desorption capacity. This research provides a theoretical basis for assessing the ecological risks of degradable plastics in complex water conditions. | 7,142.8 | 2022-12-23T00:00:00.000 | [
"Environmental Science",
"Chemistry"
] |
C-Phycocyanin-a novel protein from Spirulina platensis- In vivo toxicity, antioxidant and immunomodulatory studies
A pigment-protein highly dominant in Spirulina is known as C-Phycocyanin. Earlier, in vitro studies has shown that C-phycocyanin is having many biological activities like antioxidant and anti-inflammatory activities, antiplatelet, hepatoprotective, and cholesterol-lowering properties. Interestingly, there are scanty in vivo experimental findings on the immunomodulatory and antioxidant effects of C-phycocyanin. This work is aimed at in vivo evaluation of the effects of C-phycocyanin on immunomodulation and antioxidant potential in Balb/c mice. Our results of in vivo toxicity, immunomodulatory and antioxidant effects of C-Phycocyanin suggests that C-phycocyanin is very safe for consumption and having substantial antioxidant potential and also possess immunomodulatory activities in Balb/c mice in a dosage dependent manner. C-phycocyanin doesn’t cause acute and subacute toxicity in the animal model (male, Balb/c mice) studied. We have reported that C-phycocyanin exhibited in vivo immunomodulation performance in this animal model.
Introduction
Use of immunomodulatory agents are becoming popular in the management of diseases like cancer, AIDS and autoimmune or inflammatory diseases. However, it also true that the allopathic immunostimulants and immunosuppressants are not free from many limitations associated with them. For example, immunostimulants like levamisole and tetramisole have side effects like skin toxicity and agranulocytosis. Similarly, immunosuppressants like cyclophosphamide, cyclosporine and azathioprine are reported to have side effects like renal toxicity, hepatic toxicity and bone marrow suppression etc. Unlike allopathic immunomdulators, the available natural immunomodulatory products have a greater chance of variation in the active constituents present when crude extracts are used, that minimizes overall therapeutic properties. This limitation can be reduced by the use of pure fractions and extracts (Nandini et al., 2016). Exploring natural products based immunomodulators is growing as an area of great interest and may prove to be very important one for preventing occurrence of various infectious diseases.
Almost all the metabolic disorder is linked with symptoms linked with inflammation and oxidative stress . Recently, it has been established that the gutmicrobiota plays an significant role in controlling the physical and physiological conditions and thus plays important role in manifestation of metabolic syndromes in humans. The gutmicrobiota has been an effective target for nutraceuticals. In this context, its worth mentioning that Spirulina sp. have been reported to improve the growth of probiotics and also for in vitro antimicrobial activity (Finamore et al., 2014).
Many functional food items and beauty products are based on algae (Chen et al., 2014). Spirulina, belonging to the blue green algal group is used as a food supplement because of high-content of protein. Vitamins, minerals and carotenoids are also present in Spirulina in sufficient quantities (Ichimura et al., 2001). Moreover, Spirulina sp has been reported to be effective against hypercholesterolemia, oxidative stress, hyperglycemia etc. and also possesses antihypertensive activity (Torres-Duran et al., 2007).
C-Phycocyanin from Spirulina, a protein, having a brilliant blue color, is composed of two subunits is sold as a colorant for food items as well as for cosmetics (Ichimura et al., 2001). Upto 20% phycocyanin is present in Spirulina protein fraction (Vonshak, 1997;Silveira et al., 2007). It exists as monomers, trimersor hexamers and also as oligomers in small quantities. Phycocyanins include both C-phycocyanin and allophy-cocyanin (Chen et al., 2014).
Hitherto, there are scanty in vivo experimental findings on the immunomodulatory and antioxidant effects of C-phycocyanin (Finamore et al., 2014) and further in vivo studies are definitely required while going for human studies.
The rationale of the present study is to assess the role of Cphycocyanin on immunomodulation in in vivo animal model along with its antioxidant potential.
Animals and experimental design
The study was conducted according to the OECD guidelines after having approval by the Institutional Ethical Committee, Institute of Nuclear Medicine and Allied Sciences. For the study, six weeks old Balb/c mice (36nos, male) were considered and issued from the animal house facility, Institute of Nuclear Medicine and Allied Sciences (INMAS), Delhi. The animals were acclimatized and allowed to become 8 weeks old before initiating acute and subacute toxicity studies. The animals were given food with standard pellets along with distilled water, kept in plastic cage (n = 3), with sawdust bedding (22°C, 12 h of day/dark light cycles) (Zulkawi et al., 2017).
Acute toxicity test
As per OECD-420 guidelines, acute oral toxicity test was carried out. Balb/c mice (male) were taken from the INMAS Animal House facility. Six nos of mice were subjected to limit test at 2000 mg/kg BW. Three animals each was given this dose (p.o.) and were observed for 48 h for any toxic symptoms or death. Thereafter observation continued for 14 days. This was followed by three more animals each at the same dose. The LD50 was determined based on OECD guidelines (OECD, 2001).
Sub-acute toxicity test
In subacute toxicity study, phycocyanin was given orally to the mice (30nos Balb/c mice) for a period of 30 days. After acclimatization, the animals were segregated into 5 groups. Control group was given normal diet only along with drinking water (Naidu et al, 2009). The four experimental groups in addition to normal diet and drinking water were given C-phycocyanin at 100, 200, 500, 1000 mg/kg body weight (w/w) of mice. Phycocyanin was obtained from Hash BioTech Labs Private Limited (Chandigarh, India). Food intake was recorded daily and body weights were measured weekly. After the treatment, from each group, three mice were euthanized through carbon dioxide overdose. Blood samples were used for hematological analysis. Serum obtained after centrifugation, was stored at À 80°C until use for immunological and biochemical analysis. Weights of various organs like brain, heart, liver, lung, kidney, spleen were also noted. The tissues were then embedded in paraffin after fixing into formaldehyde (10%). Histological examination of the tissue sections were done after staining with hematoxylin-eosin (HE) (Lillie, 1965).
Blood Hematological studies
Hematological studies (Gomaa, 2018) of the blood parameters were done using a Horiba Medical analyzer (model MICROS 60, USA).
Preparation of sera samples and serum biochemical analysis
Three mice from each group were euthanized (Moghaddam et al., 2016) through carbon dioxide overdose at fasting state. After blood collection, Serum after separation by centrifugation was stored at À 80°C until use for immunological and biochemical analyses. Serum Biochemical Analysis was done using Erba Chem-7 (Transasia) Biochemistry Analyzer (Erba, Germany).
Body weight and organs relative weight
After subacute toxicity study, final body weight of the animal in all the groups was noted (Nassef, 2017). Upon being killed liver, kidney, spleen were taken out aseptically and weighed. The relative organ weights were measured as organ weight/final body weight.
In vivo antioxidant assessment
The serum antioxidant enzyme activity (Catalase, SOD) levels were compared with normal control and standard control (Vit E). Serum antioxidant enzyme activity (Catalase, SOD) were calculated using kits (Oxiselect, Cell Bio Lab).
Briefly, 80 lL of mixed substrate was added to 10 lL of sample, mixed well. Following that, 10 lL xanthine oxidase was added to the reactions and absorbance measured at 490 nm. SOD activity was calculated and expressed as U/L (Oxiselect, Cell Biolabs, Inc.). Catalase activity (Bahrami et al., 2016) was calculated spectrophotometrically by kit as per manufacturer's instruction. Briefly, 20 lL of sample was incubated with 50 lL 12 mM of H 2 O 2 for 1 min. By rapidly adding 50 lL of Catalase Quencher, the reaction was ended. Pink complex of Chromogenic solution and H 2 O 2 was measured at 520 nm (Park et al., 2005).
Statistical analysis
Results were expressed as group mean ± S.E. One-way ANOVA was used for test comparison with controls (p-values < 0.05 con-sidered as significant) and significance assessed with Duncan's multiple range test.
Acute toxicity study
Through acute toxicity studies, it was found that the LD50 of the extract was above 2000 mg/kg BW for all the animals used and the drug was safe or non-toxic to mice. There was no mortality or behavioural changes at a dose of 2000 mg/kg, thus indicating a wide margin of the safety of the drug used and there were no observations of obvious toxic symptoms throughout the period of the study. Body weight (BW), organ weight and relative organ weights didn't show any significant differences (Table 1) when compared between phycocyanin-treated and control mice.
Sub-acute toxicity study
No significant deviations were found in the body weight and relative organ weights (Table 2) between control and Cphycocyanin treated mice after 30 days of subacute toxicity study. Physical observations didn't indicate any signs of abnormality like changes in behavior patterns, changes in skin or fur colour, or changes in eyes and mucus membrane. There were no tremors, salivation, diarrhea etc. All the C-PC treated mice survived without significant changes of body or organ weight, sign and symptom of toxicity. Between control and the test groups, significant differences were not observed, so far as clinical observations and biochemical parameters are concerned (Tables 3, 4, 5 and Fig. 1).
In vivo antioxidant assay
C-phycocyanin at 500 mg/kg and 1000 mg/kg resulted in significant enhancement of serum SOD activity that is higher than that of vitamin E (Fig. 2a) while C-PC at 200 mg/kg has SOD activity compared to Vit E at 200 mg/kg (P < 0.05). In a similar manner C-phycocyanin at 500 mg/kg and 1000 mg/kg resulted in significant enhancement of serum catalase activity that is higher than that of vitamin E (Fig. 2b) while C-PC at 200 mg/kg has catalase activity compared to Vit E at 200 mg/kg (P < 0.05).
Immunomodulatory study
Serum cytokines levels were also estimated to understand the immunity of the C-PC treated healthy mice. Expression of twelve nos. of cytokines (viz., IL1a, IL1b, IL2, IL4, IL6, IL10, IL12, IL13, IFN-c, TNFa, GM-CSF, RANTES) in the serum were determined using ELISA kits (Qiagen). It was found that C-phycocyanin suppresses the synthesis of pro-inflammatory cytokines, interferon-c (IFN-c), and tumor necrosis factor-a (TNF-a) in a concentration dependent manner (Fig. 3). The levels of TNF-a and IFN-c were significantly decreased in 500 and 1000 mg/kg treated groups in comparison with controls. The levels of IL-2 and IL-1b were not significantly affected in the C-Phycocyanin treated groups. However, C-phycocyanin enhances the levels of anti-inflammatory cytokines, such as IL-10 in a concentration-dependent manner (Fig. 3).
Discussion
C-Phycocyanin has a wide margin of safety as there was no mortality or behavioural changes at a dose of 2000 mg/kg, and there were no observations of obvious toxic symptoms throughout the period of the study. No significant deviations were found in the body weight between control and C-phycocyanin treated mice after 30 days of subacute toxicity study.
Also, after the treatment period, the C-PC treated mice didn't show any signs and symptoms of toxicity. The clinical and biochemical parameters study as well as histopathological evaluations of the kidney and liver revealed normal status. Thus, the results of the present study showed that C-phycocyanin from Spirulina platensis did not bring on any detrimental effects in Balb/c mice. These findings provides sufficient evidence to conclude that the orally administered C-phycocyanin was safe and showed no toxicity even at the maximum dose of 2,000 mg C-phycocyanin per Kilogram of body weight.
C-phycocyanin, is gaining popularity because of their many bioactivities already reported from time to time. C-phycocyanin has been known for anti-tumor, antioxidant and antiinflammatory properties based on few earlier studies (Strasky et al., 2013;Eriksen 2008;Bhat and Madyastha, 2000;Romay et al., 1998aRomay et al., ,b, 2000 and is considered as a raw material for making various nutraceutical products (Silveira et al., 2007). Even anticancer bioactivity has been attributed to C-phycocyanin (Li et al., 2010;Wang et al., 2007).
Also, C-phycocyanin was able to lessen the production of macrophages (RAW 264.7) with growing dosages (Reddy et al., 2003). C-phycocyanin was able to particularly curb COX-2 and PGE2 expression when RAW 264.7 macrophages were activated by lipopolysaccharides (LPS), thus, establishing the antiinflammatory nature of C-phycocyanin. Oxidative stress plays significant roles in processes of ageing and pathogenesis of numerous diseases like diabetes, cancer, neurodegenerative and respiratory tract disorders (Anderson et al., 2000). Halliwell et al. (1996) opined that the sum of endogenous and food derived antioxidants correspond to the total antioxidant capability of a system. The role of antioxidant is to detoxify reactive oxygen intermediates in the body (Delay, 1993). Therefore, improved antioxidant status can minimize oxidative stress and associated damages. This delays or decreases the risk of developing free radical induced diseases. Protective antioxidants bestowed by many plant extracts and products make these agents promising therapeutic drugs for free radicals induced pathologies. In vitro antioxidant properties of Cphycocyanin has been demonstrated earlier (Romay et al., 1998b). Similarly, Chen et al., (2014) has shown that LPS stimulation of J774A.1 macrophages rapidly stimulate ROS production in comparision with control cells. LPS-induced ROS was reduced when pretreatment was done with N-acetylcysteine (NAc). The ROS (H 2 O 2 ) content decreased within 2 h when C-phycocyanin was present. .
In this study, C-phycocyanin at 500 mg/kg and 1000 mg/kg resulted in significant (P < 0.05) enhancement of serum SOD and catalase activity that is higher than that of vitamin E while at 200 mg/kg C-PC has SOD and catalase activity (Sharma and Mahajan, 2013) compared to that of Vit E (P < 0.05). CAT, SOD and GSH-Px enzymes are known to be very important scavenger of hydrogen peroxide as well as of superoxide ion. These enzymes play vital role in shielding the cellular constituents from oxidative damage (Scott et al., 1991). Superoxide dismutase (SOD) is considered as an enzyme widely used as a biochemical marker of disease condition caused by oxidative stress (Dündarz et al., 2003). Many harmful oxidative changes are associated with decrease in the levels of CAT, SOD. C-phycocyanin in our studies has shown to have added to the SOD and CAT activities and thus in turn has a higher free radical absorbing capacity. Thus, C-phycocyanin can have a beneficial action against the harms caused by O2 Å and OH Å .
Moreover, due to toxicity of some of the synthetic drugs, there is high demand for the natural immunomodulators. Immunomodulation by herbal way is the most acceptable one owing to the demerits of allopathic immunomodulators. The worldwide demands for the natural immunomodulators are difficult to meet and many biotechnological methods are also under development.
Cytokines plays crucial role in regulating the immune response. Under natural conditions, pro-inflammatory cytokines plays Table 2 Body weight, organ weight, relative organ weight of normal and 100, 200, 500, 1000 mg/kg body weight of C-phycocyanin treated mice for subacute toxicity study.
Table 3
Hematological values in BALB/c mice treated with different doses of C-phycocyanin in comparision with control. Values are mean ± SEM of 5 Balb/c mice. No significant difference was observed at p < 0.05. Values are mean ± SEM of 5 Balb/c mice. No significant difference was observed at p < 0.05. important role in the development of suitable defence system. In first-line of immunological defense in mammals, macrophages clean out tumor cells (Park et al., 2009) through the release of diverse cytokines (Adams and Hamilton, 1984). For example, TNF-a has cytotoxic effects (Bowdish et al., 2007;Striz et al., 2014;Biswas and Mantovani, 2014). IFN-a, promotes type 1 immune responses and obstruct the growth of cancer cells (Werneck et al., 2008;Lee et al., 2011). IL-2 controls the functions of white blood cells. IL-1b enhances the production of T-cells, stimulate B-cells along with few other functions. The beginning of the innate immune response is mainly by inflammatory cytokines like TNF-a, IL-1b and IL-6 and also plays important role in determining the extent of acquired immune response (Netea et al., 2003). Earlier in vitro studies have shown that, C-PC possess antioxidant as well as immunomodulatory performance (Chen et al., 2014). In our study, the immunity of the C-PC dosed healthy mice was detected by estimating the quantities of various cytokines in the serum. Expression of twelve nos. of cytokines (viz., IL1a, IL1b, IL2, IL4, IL6, IL10, IL12, IL13, IFN-c, TNFa, GM-CSF, RANTES) in the serum were determined using ELISA kits (Qiagen). It was found that C-phycocyanin suppresses the synthesis of pro-inflammatory cytokines, interferon-c (IFN-c), and tumor necrosis factor-a (TNF-a) in a concentration dependent manner (Fig. 3). The levels of TNF-a and IFN-c were significantly decreased in 500 and 1000 mg/kg treated groups in comparison with controls. The levels of IL-2 and IL-1b were not significantly affected in the C-Phycocyanin treated groups. However, C-phycocyanin enhances the levels of anti-inflammatory cytokines, such as IL-10 in a concentration-dependent manner (Fig. 3). Therefore, our study concludes that C-phycocyanin suppresses the production of TNFa and IFN-c without putting any inhibitory effect on the production of anti-inflammatory cytokines like IL 10.
Conclusion
In conclusion, our results of in vivo toxicity, immunomodulatory and antioxidant effects of C-Phycocyanin confirms that Cphycocyanin is very safe for consumption as it doesn't cause acute and subchronic toxicity. Moreover, we found that C-phycocyanin strengthens immunity as well as have a very potent effect on serum antioxidant level. It may have the potential to be considered as an important nutraceutical supplement to get rid of various infectious as well as oxidative stress induced diseases.
Declaration of Competing Interest
None. | 3,966.2 | 2020-12-30T00:00:00.000 | [
"Biology",
"Environmental Science",
"Medicine"
] |
A 14-day limit for bioethics: the debate over human embryo research
Background This article explores the reasons in favour of revising and extending the current 14-day statutory limit to maintaining human embryos in culture. This limit is enshrined in law in over a dozen countries, including the United Kingdom. In two recently published studies (2016), scientists have shown that embryos can be sustained in vitro for about 13 days after fertilisation. Positive reactions to these results have gone hand in hand with calls for revising the 14-day rule, which only allows embryo research until the 14th day after fertilisation. Main text The article explores the most prominent arguments in favour of and against the extension of the 14-day limit for conducting research on human embryos. It situates these arguments within the history of the 14-day limit. I start by discussing the history of the 14-day limit in the United Kingdom and the reasons behind the decision to opt for a compromise between competing moral views. I then analyse the arguments that those who are generally in favour of embryo research put forward in support of extending the 14-day rule, namely (a) the argument of the beneficence of research and (b) the argument of technical feasibility (further explained in the article). I then show how these two arguments played a role in the recent approval of two novel techniques for the replacement of faulty mitochondrial DNA in the United Kingdom. Despite the popularity and widespread use of these arguments, I argue that they are ultimately problematic and should not be straightforwardly accepted (i.e. accepted without further scrutiny). I end by making a case for respecting value pluralism in the context of embryo research, and I present two reasons in favour of respecting value pluralism: the argument of public trust and the argument of democracy. Conclusion I argue that 14-day limit for embryo research is not a valuable tool despite being a solution of compromise, but rather because of it. The importance of respecting value pluralism (and of respecting different views on embryo research) needs to be considered in any evaluation concerning a potential change to the 14-day rule.
Background
In August 2016, in a letter in Nature and in an article published in Nature Cell Biology, two groups based in different research centres in the United Kingdom (Cambridge and London) and in the United States (The Rockefeller University, New York) presented the results of their experiments on in vitro human embryos. For the first time, the embryos were sustained in vitro for 12-13 days after fertilisation [1,2]. Prior to this, scientists were only able to sustain embryos in vitro for about seven days [3].
Many members of the scientific and bioethics communities reacted enthusiastically to these advances, due to the novelty of the results and to the potential benefits that they could bring about [3][4][5]. Research involving human embryos allows us to increase our understanding of the first stages of embryo development and it is considered instrumental to shedding light on the causes of early miscarriages, of problems related to infertility and of birth defects [6]. In addition to this, embryo research has been instrumental to the development of human embryonic stem cells, cells derived from embryos have proved to be clinically useful to cure certain degenerative diseases [6][7][8]. Sustaining embryos in vitro for a longer period of time could allow an even greater understanding of the causes of embryo defects and early miscarriages, and it could prove especially clinically beneficial for women who have experienced multiple early pregnancy losses. Due to the current benefits of embryo research and to the potential future benefits of it, the positive reactions to these experiments went hand in hand with a call for revising and extending the so-called 14-day rule. This rule allows research involving human embryos up until the 14th day after fertilisation, a statutory binding limit in over a dozen countries [3,9].
This article explores the arguments for and against extending the 14-day limit for research on human embryos. In the following section, I briefly present the history of how the 14-day rule came about in the United Kingdom and the reasons behind the decision to opt for a solution of compromise. In section 3, I discuss the arguments that those who are generally in favour of embryo research put forward in support of extending the 14-day rule, namely the argument of the beneficence of research and the argument of technical feasibility (further explained below). I show how these two arguments played a role in the process that led to the approval of mitochondrial replacement techniques in the United Kingdom. In section 4, I discuss why I find these arguments wanting. In the last section (5), I present two arguments in favour of compromise, namely the argument of trust and the argument of respect for value pluralism. I conclude that the importance of respecting value pluralism needs to be taken into account in any evaluation concerning a potential change of the 14-day rule.
The 14-day limit and the Warnock report
The publication of the aforementioned two articles in Nature and Nature Cell Biology triggered a resurgence of the debate on embryo research and on the 14-day limit to carry out research on in-vitro human embryos. The 14day limit came about in the United Kingdom at the beginning of the 1980s. Its birth is closely linked to another, non-metaphorical, British birth: the first test-tube baby (i.e. a baby conceived via in-vitro fertilisation), Louise Brown, was born in the United Kingdom in 1978. As noted by historian Duncan Wilson, after the initial excitement surrounding Louise Brown's birth, public attitudes towards IVF shifted from an initially more favourable stance to a more critical view of the practice [10][11][12]. These predominantly negative attitudes, and the necessity to decide upon the fate of embryos 'left over' after IVF procedures, 1 contributed to calls for a tighter oversight of the practice. They also underscored the importance of deciding whether it was permissible to use these spare embryos for research [10][11][12].
At that time, embryo research was the most debated matter concerning the ethics of IVF [13][14][15]. Two conflicting positions dominated the public debate: on the one hand, those of whom were outright against embryo research. On the other, those of whom were in favour of doing research on embryos up until it was technically feasible. The first group appealed to the need to respect human life from its very beginning and argued that life starts in the moment of fertilisation (i.e. when sperm cells fertilise oocytes) and must be protected. Interestingly, not all the opponents of embryo research holding the view that embryos are persons were arguing from a religious standpoint [15]. Some of those arguing against embryo research in principle referred to the potentiality of the embryos to become fully developed persons and concluded that human life, no matter at what stage of development, should be granted full protection, and that embryos should not be used for research [16][17][18]. The opposing view, held by those in favour of legalising embryo research, found support from those appealing to the potential benefits of such research, and from those who granted inexistent or low moral status to the embryos. This group also referred to the potentiality of embryos to become fully developed persons, but concluded that potential persons (i.e. embryos) were different from actual persons and that this was a sufficient reason to allow research on human embryos [13]. Unsurprisingly, according to them, the potential benefits of such research, for instance an increased understanding of early human development, better IVF procedures and treating infertility and pregnancy losses outweighed the costs of embryo research [13].
There are some differences between the 1980s debate on embryo research and today's newly emerged debate. Perhaps, the main difference is that, whereas previously research beyond the 14-day mark was scientifically untenable, it has recently become technically possible. When the limit was decided upon, scientists were not able to keep the embryos alive in vitro for longer than the limit allowed. The experiments reported in the two recent articles prove that scientists are now able to keep embryos alive for up to 12-13 days and possibly longer. In addition, IVF as an assisted reproductive technique has significantly improved and many of the technical advances in this technique are owed to embryo research. It is in this sense that, while the 1980s debate focused on the question of whether embryo research should be allowed, the current debate occurs against the backdrop of the advances that allow embryo research to be made possible. Moreover, while in the past it was not possible to preserve the viability of the embryos employed for research, today there are technical solutions that allow scientists to obtain embryonic stem cells for research that do not result in the destruction of the embryo (e.g. embryo biopsy 2 ). Lastly, whilst previous research was carried out on early human embryos only, today, and potentially increasingly in the future, embryo research could be done on artificial entities that bear sufficient resemblance to embryos to be suitable for such research.
To name a few methods, these entities would be created through, for instance, altered nuclear transfer (ANT) or parthenogenesis of oocytes [6,7,19]. 3
Conflicting moral views on embryo research
Today's discourses on the moral status of human embryos are not so different from the discourses that, in the 1980s, resulted in the establishment of the IVF Inquiry, a committee appointed to produce an advisory report on the moral, legal and social issues raised by IVF, embryo research and other practices. Oxbridge philosopher Mary Warnock was appointed its chair. As I show in the next sections, the procedural work of the committee, the views of the chair, and the way the recommendations on how to proceed about embryo research were drafted represent an important precedent for the current debate on embryo research.
The members of the committee, including Warnock herself, were aware of the conflicting moral views on embryo research, and of the difficulty of reconciling them and establishing which one should prevail [20][21][22]. In addition to this, they tried to review as many different points of view as possible: the committee considered evidence from experts working in the field of human reproduction (around 300 individuals and organisations) as well as from the public (695 letters and submissions). Although the evidence collected in this way was never published 4 and although it was never made transparent how this evidence influenced the final recommendations, it is presumed that the committee considered all the submitted evidence and took into account the different views that it reflected [15].
Legitimating embryo research would have likely caused uproar from those who accorded full moral status to human embryos. At the same time, an outright ban on embryo research was perceived as problematic for two reasons: due to a concern for the loss of potential benefits of embryo research, and due to the perceived need to allow IVF to go forward only if backed up by studies on the development of early human embryos. A solution to this impasse was to find a compromise between these two positions: this is how the idea to introduce a cut-off point until which research would be permissible came about. Introducing a cut-off was a solution of compromise, as it would have enabled embryo research, but only until a certain stage of development. Different possible limits were examined, including the 5th day (i.e. beginning of implantation in utero) and the 11th day (i.e. the end of implantation) after fertilisation.
It was developmental biologist Anne McLaren, a member of the committee, who proposed using a peculiar biological event in the embryo development to mark the end of the permitted period of research [11]. McLaren suggested limiting research to the 14th day of development because this moment signals the emergence of the primitive streak in the human embryo, a precursor of the brain and the spinal cord. At the same time, the emergence of this streak marks the beginning of gastrulation, a process whereby the embryonic inner cell mass starts to differentiate into three layers (endoderm, mesoderm, and ectoderm). This process also corresponds to the last point in which the embryo could cleave into twins (i.e. twinning) or in which two embryos could merge into one (e.g. tetragametic chimerism). McLaren argued that: "If I had to point to a stage and say 'This is when I began being me", I would think it would have to be here" [23]. In order to endorse the 14-day limit and the decision to allow research up until this stage of embryo development, the term 'pre-embryo' was coined. It designated the embryo before the emergence of the primitive streak, and it marked a distinction from the 'unborn child' (i.e. the embryo after the 14-day) [12,23]. It was therefore a term with ethical and political significance, a term that designated the boundary between acceptable and non-acceptable research.
Eventually, in 1990, the recommendations of the IVF-Inquiry comprised in the Warnock Report [21] were enshrined into law, in what became the Human Embryology Act [22].
How the 14-day limit came about: Compromise and its critics Introducing a cut-off date -in this case the 14-day limitrepresented an instance of favouring compromise between competing moral views, beliefs and values over questions of rightness and wrongness [10,15,24]. Questions regarding whether or not the embryo has moral status, what moral status stands for and entails, and questions regarding the core features of personhood and the beginning of human life were overridden by other considerations. These considerations included the moment from which the embryo should be granted legal protection, what kind of society can be praised and in what kind of society people can live with clear conscience [12,14]. The decision to shift the focus from ontological questions concerning rightness and wrongness to more practical questions is linked to a conception of morality whose role is to address moral matters arising in the context of public policy. The IVF-Inquiry was not created to produce perfect philosophical reasoning and give a lesson in moral expertise, but rather to facilitate a process whereby scientists' work would become more "socially palatable" and whereby workable regulations would be delivered [12,25].
The committee favoured a moral relativistic approach to embryo research and to the conflicting positions present in the debate. Instead of trying to establish which position was the most accurate one and what view came closest to an absolute moral truth, the committee worked under the assumption that the views of those for and against embryo research deserved to be equally respected and taken into consideration. Thus, the view of those who believed that the embryos are to be treated as if they were persons (and hence, they deserve full moral status) and research on them should be banned, and the view of those who believed that embryos are not more than a cluster of cells (no moral status at all) and research on them should go forward were equally taken into account. In this sense, the committee followed the assumption that the truth and standing of moral judgments is not universal, but relative to the social, political and cultural context in which these moral judgements arise [26]. Warnock and her committee experienced first-hand the diversity of views both in her committee and in society at large. Their strategy was to exercise tolerance in matters of morality and moral disagreement, and to respect value pluralism [14,27]. Warnock understood the role of her committee in these terms: starting from the acknowledgement of the different and competing moral positions, she tried to find the path of greater social consensus among them [10]. In addition to this, Warnock and her committee opted to take into account not only moral arguments based on scientific evidence and philosophical reasoning, but also moral feelings and beliefs [18]. In this sense, they followed Hume's idea that feelings, and not pure calculating rationality, need to be considered in the assessment of ethical dilemmas and that morality is 'more properly felt than reasoned' [28,29].
Perhaps unsurprisingly, given the existing disagreement on the matter, the committee recommendation to allow embryo research up until the 14th day was highly criticised. Three committee members were outright against embryo research and refused to endorse the final recommendations concerning this matter [15,21]. Members of the conservative party, of the pro-life group LIFE and Christian scientists such as Ian Donald, publicly criticised the decision and lobbied against the report recommendation during the parliamentary debate on the matter [12,15]. Generally, reactions from the more conservative side of the debate opposed this solution because it employed a sort of utilitarian calculus (i.e. the potential benefits of embryo research) instead of foregrounding considerations concerning how we ought to treat unborn persons.
Interestingly, both those against and in favour of conducting research on human embryos agreed on some of the reasons why the 14-day limit was at least problematic, if not completely wrong, namely arbitrariness and dodging the most fundamental question. Those that criticised the decision on the grounds of its arbitrariness argued that it was impossible to draw a morally and legally significant distinction between an embryo that was 13, 14 or 15 days old. However, supporters and critics of embryo research drew different conclusions from this impossibility to draw morally consistent lines: supporters argued that embryo research should have been allowed until it was technically feasible (i.e. until when the scientists could keep the embryo alive in vitro), while critics argued that embryo research should have been banned altogether. Another point of convergence between supporters and critics was the fact that Warnock and her committee did not address the questions of when life begins and when an embryo becomes a person. The decision to focus instead on the legal and moral rights of the embryo, without addressing the issue of what an embryo really is, was seen as extremely problematic by both sides. According to them, it was impossible to decide whether or not the human embryo deserved protection without establishing why it/she/ he deserved protection, in other words whether or not the embryo was a person [13,18].
In addition to these critiques, philosopher John Harris criticised Warnock and the committee for taking into account people's feelings. Harris argued that not all feelings were moral feelings and not all of them deserved respect. According to him, moral feelings should be evaluated on their capacity to make the world a better place, to save lives and postpone deaths [13].
These reactions are important because they show that, back then as today, there is indeed a fundamental moral disagreement concerning early human life, how to treat human embryos and about the legitimate role of feelings and passions in public and regulatory discourses [30]. The reactions that followed the committee's recommendations show the extent to which these views were in fact incompatible. However, it is important to note that those who criticised the decision on the grounds of arbitrariness and inconsistency in a certain sense missed the point of the role and function of the committee. The committee was put together in the first place in order to maintain public trust and be a reliable means for external oversight of scientific research. For this reason, the recommendations were meant to be a solution of compromise rather than a means to find the most consistent moral view.
In the next section, I briefly outline the reasons that advocates of embryo research currently put forward in favour of extending the limit, and show how these same reasons have played an important role in the debate on whether to introduce two new techniques into the clinic.
The reasons in favour of extending the limit Scientists (Robin Lovell-Badge and Azim Surani quoted in [4]) and ethicists [3,5] reacted to the results reported on Nature and Nature Cell Biology by publicly calling for an extension of the 14-day limit and for revising the current regulation of embryo research. The argument that they used strikes familiar chords: embryo research is beneficial and now technically possible, therefore it should be allowed. The two publications in Nature and Nature Cell Biology [1,2] partially changed the narrative of the debate on embryo research: whereas in the 1980s it was a matter of legalising such research, today the debate is about extending the 14-day limit for reasons grounded in beneficence and technical feasibility, and thus merely adjusting the regulatory framework of an already legalised practice. These reasons draw upon consequentialist premises and the principle of utility. They imply that being able to carry out potentially beneficial research and not doing so would be morally impermissible. 5 According to the advocates of embryo research, the reasons in favour of extending the 14-day limit are stronger today than they were in the past. In 1984, these reasons relied on positive provisions of the potential benefits (i.e. the beneficence of research) and positive provisions of the future feasibility (i.e. technical feasibility). In the past, it was about faith in science and managing the uncertainties of potential future benefits of embryo research with certain regulations. Today, Harris, Lovell-Badge and Surani argued, it is about certainties concerning the benefits and certainties of technical feasibility: embryo research has proven to be both beneficial and feasible [4,5].
The use of beneficence and feasibility in the debate on technical innovations recalls another debate where similar arguments have been advanced in response to scientific breakthroughs. Early in 2015, the United Kingdom became the first country in the world to allow two novel techniques that allow women with mitochondrial DNA diseases to have genetically related children with a decreased risk of developing mitochondrial diseases. Mutations in the mitochondrial DNA are the cause of many diseases including, for instance, mitochondrial myopathy, Leigh disease and diabetes mellitus, and they are normally inherited through the maternal line [31]. Up until the approval of these two techniques, prospective mothers needed to turn to oocytes donors, PGD or adoption in order to have children free from these genetically inherited mutations [32]. Although these techniques (maternal spindle transfer, MST, and pronuclear DNA transfer, PNT) have been depicted as involving the 'replacement' of the affected mitochondrial DNA of the oocyte of the prospective mother or of the fertilised oocyte with the mitochondrial DNA of a female donor, this description is inaccurate. What really happens is that the oocyte's, or zygote's, nucleus previously housed in a cell with deleterious mitochondria is rehoused in an enucleated cell with healthy mitochondria. The embryo that results from these techniques will have the genetic makeup of the prospective father, the mitochondrial DNA of a donor and the nuclear DNA of the prospective mother.
Despite the similarities between the arguments in favour of the extension of the 14-day limit and the arguments in favour of allowing mitochondrial replacement techniques (MRTs), it is important to note that there are differences between the current debate on extending the limit for embryo research and the recent debate on MRTs. 6 These differences concern both the content of these debates (i.e. the specific arguments in favour and against and the object of the controversy) and their potential outcomes (i.e. extending an existing limit for embryo research instead of allowing two new techniques to be introduced into the clinic). With respect to the content, the arguments against MRTs focused on concerns regarding the implementation of newly developed techniques and the risks that their implementation may pose to future children. On the contrary, the arguments against the extension of the 14-day limit focused on basic research rather than clinical implementation. In particular, they pertain to the ethics of using intrinsically valuable beings such as human embryos for instrumental purposes. In addition, these debates differ in terms of what proponents and opponents wanted to achieve (i.e. in terms of outcome). The potential outcome of the debate on MRTs was to establish whether these new techniques were sound from a technical and moral point of view. On the contrary, the debate on embryo research is about setting a new limit for continuing existing research and for possibly gaining new insights into embryo development. These are just a few of the differences between the two debates and a detailed analysis of such differences is beyond the scope of this article. However, it is important to note that despite these differences, some similarities with respect to the argument in favour of MRTs and embryo research can be drawn. In particular, those in favour of MRTs and of extending the 14-day limit appealed to beneficence and technical feasibility arguments in both instances.
One of the most contested issues concerning the ethics of MRTs is whether these techniques would bring about changes to the human germline (i.e. changes in human oocytes, sperm cells or embryos that do not only appear in the children resulting from the procedure, but also in succeeding generations) [33]. Ethicists and scientists are divided over whether MRTs amount to germline modifications as changes introduced in the oocyte (in the case of MST) or in the zygote (in the case of PNT) concern the mitochondrial rather than the nuclear DNA [34]. In addition, as mitochondrial DNA is inherited from the maternal line, if only male embryos are transferred in utero, the modifications introduced with MRTs will not be present in the succeeding generations 7 [35]. An assessment of these arguments is beyond the scope of this article, 8 but what matters for the present analysis is that up until the approval of these techniques, modifications of the genetic makeup of sperm cells, eggs and embryos were only legally possible in-vitro and never for clinical purposes in-vivo. Modification of the human germline (i.e. gametes, and embryos) has traditionally been considered a line that should not be crossed. This line was recognised as morally relevant in 1978 with the publication of Splicing Life, a report of the US President's Commission for the Study of Ethical Problems in Medicine and Biomedical and Behavioral Research appointed to regulate gene therapies, the reasons given were partly scientific (i.e. it was not technically feasible) and partly moral (i.e. it was seen as immoral to introduce changes that would have been inherited by future generations) [36,37]. Modifying the human germline is seen as problematic because of the unforeseen effects on future generations, the risk of engaging in a form of new eugenics, the risk of sliding down a slippery slope to human enhancement, and other similar arguments [38][39][40]. These arguments were already put forward at the very early developments of gene therapy and rehearsed in recent debates on MRTs and gene editing [34]. However, both historically and more recently they have not remained unchallenged. Questions related to eugenics, enhancement and unforeseen effects on future generations have been widely discussed during the months prior to the approval of MRTs and they are still a matter of ethical inquiry, as shown by the increasing number of articles and reviews that address these issues [41][42][43][44]. In addition, the public consultation (2012) and the extensive reviews of the scientific methods of MRTs carried out by the HFEA (respectively in 2016, 2014, 2013, 2011), the work of the Nuffield Council 9 [44] and the parliamentary debate on these techniques have considered such concerns. The 2015 approval of these techniques by the UK Parliament could be seen as a first instance of crossing an internationally recognised ethical and legal limit due to reasons of beneficence (i.e. children born with these techniques will be free from mitochondrial diseases), but also due to the technical feasibility of germline modifications (prior to the parliamentary vote on MRTs, these techniques were not considered safe enough to be introduced into the clinic) [45]. It is in this sense that the sum of the arguments in favour of extending the 14-day limit echoes, albeit only partially, those in favour of allowing MRTs. Mitochondrial replacement techniques represent an interesting case study and set an important precedent for the ethical assessment of technical innovation. In contrast with other instances of internationally recognised bans such as the ban on human cloning, the approval of MRTs shows that longstanding limits such as the ban on germline modifications can be redefined once scientific advances make it possible. The argument of beneficence to allow research on human embryos for longer than 14 days is the same as the one made in the 1980s. What has changed is that while before it was technically difficult to introduce changes in reproductive cells and embryos that would be inherited by future generations, and to keep the embryos alive in vitro for a longer time span, now both actions are theoretically possible. The question, therefore, is whether the potential benefits of embryo research and the feasibility of keeping the embryos alive for longer than ever before are sufficient reasons to extend the limit.
There is more to beneficence and technical feasibility than meets the eye In this section, I will show that technical feasibility and beneficence of research as reasons in favour of extending the limit of embryo research are not as fundamental as those who advocate this change in the law claim. Accordingly, I scrutinise the arguments in favour of the extension of the 14-day limit, while I leave unchallenged those presented by the advocates of a more restrictive regulatory framework for embryo research. The rationale behind this choice does not rest on my own view on embryo research, as I do not necessarily share the beliefs and values of those against this practice. However, it is often argued by proponents of technological changes that the burden of justifying one's own claims rests solely on those who take a precautionary approach to technological progress [46][47][48]. Against this view, I propose that both those in favour and against embryo research ought to share the burden of justifying their moral views.
Facts, values and rationality
Technical feasibility as a reason in favour of extending the limit relies (i.e. practice x is now technically feasible, so there are good reasons to change the rule) on the premise "practice x is technically feasible" to infer the conclusion "there are good reasons to change the rule". However, appealing to the beneficence of research and to its technical feasibility is more problematic than those in favour of extending the limit for embryo research suggest it is. This line of arguing is problematic because it relies on what eighteenth-century philosopher David Hume considered an "inconceivable deduction" of what ought to be done from a set of is-premises [28]. Hume believed that it was logically fallacious to infer a normative judgment (oughtconclusion) from a set of factual claims (is-premises). Thus, following Hume, the normative conclusion "there are good reasons to change the 14-day rule" cannot be rightly inferred from the factual premise "embryos can now survive in vitro for longer than before" (i.e. technical feasibility of extending the time span for embryo research). This critique of inferring normative conclusions from factual claims is similar to the critique that philosopher George Edward Moore moved to moral naturalists (i.e. those who argue in favour of a link between moral philosophy and the natural sciences). Moore argued that anyone who infers that practice x is good from any preposition about the natural properties of x commits the "naturalistic fallacy" [49]. According to Moore, this fallacy shows how premises about some factual or natural features of practices do not support normative conclusions about these practices. Thus, anyone who supports an extension of the 14-day limit for embryo research on the basis of the technical feasibility of this research would commit the naturalistic fallacy. According to Moore, one of the main problems of moral naturalists was that they relied on purely factual premises concerning the natural features of certain practices to infer normative conclusions concerning these practices. To counter this tendency, Moore suggested instead that normative conclusions ought to be inferred from both factual and normative premises.
The argument of the beneficence of research (i.e. embryo research should be allowed for longer than 14 days due to the benefits of such research) is also more problematic than those in favour of extending the limit suggest it is. According to this argument, the 14-day limit should be extended because of the potential benefits of such research and because these benefits outweigh the costs of embryo research [5,13,50,51]. This appeal to beneficence is common in bioethics and it is often used by those who take a utilitarian stance on the ethical assessment of scientific progress, technologies and practices [3,46,[52][53][54]. Proponents of what I have called the argument of the beneficence of research rely on historical evidence to support their claim: they argue that since technological and scientific progress in medicine proved to be beneficial to humankind, it should be allowed to continue. Returning to embryo research, those who appeal to the beneficence of research to extend the 14-day limit ground their argument on the past benefits that embryo research brought about, and on the potential benefits that the extension of the limit could bring about [4,5].
At first sight, it seems fairly obvious that if something is beneficial, even only potentially beneficial, it should be allowed. However, this approach is problematic for a number of reasons and scholars have criticised bioethicists, institutions and scientists for their often-hyped claims concerning the benefits of new technical possibilities [55][56][57][58]. Firstly, the argument of beneficence and its proponents rely on an optimistic view of scientific progress, research and technologies [55,59,60], a view that echoes the post-illuminist positivistic ideas of science and technology, and that often overemphasises the potential benefits of scientific research [56,58,59] and its understating as a progressive and linear endeavour [61,62]. Secondly, the argument is problematic because it relies on a misleading estimation of costs and benefits. The benefits taken into consideration for the costbenefit assessment are not the benefits of embryo research for the embryos, as embryo research does not benefit embryos. Instead, the benefits considered are those to society, to existing and future individuals. On the contrary, the costs taken into account for the costbenefit assessment are not those to society, but to the embryos used for research. Those who emphasise benefits of embryo research over its costs do not grant moral status to the embryos, nor do they believe that embryos are capable of experiencing pain (i.e. being harmed). Hence, they do not really see any cost associated with embryo research, and they thus conclude that benefits outweigh these (inexistent) costs. The substantial disagreement over the moral status of the embryos and the criticism moved against research on human embryos show that embryo research is a controversial and notsettled issue [15,63]. For this reason, the costs of extending the limit beyond the 14th day, and of embryo research more generally, might be higher than proponents of embryo research like to admit. Embryo research has a societal cost of offending certain moral feelings on the value of early human life, and not respecting certain strongly held convictions on how we ought to treat human embryos. Thus, individuals who hold such views may find themselves feeling alienated from or devalued by society [17,18,64]. Possibly, proponents of embryo research who argue from a utilitarian standpoint, and who rely on the argument of the beneficence of such research, are aware of the possibility of offending moral feelings and strongly held beliefs, but they still consider the benefits of embryo research greater than the costs of offending the people who hold these feelings.
One of the reasons why many proponents of embryo research do not grant moral worth to these feelings, and to the opponents' arguments, is that they consider their views to be fundamentally flawed, irrational and not grounded in scientific evidence. Most advocates of embryo research thus dismiss the view that embryos are (future) persons and that embryo research would violate these future persons' dignity on the grounds of the irrationality of such ontological claims. For to them, these claims are based on faith rather than reason and factual considerations. However, it is important to note that those in favour of embryo research who argue from supposedly rational positions do not live up to the very same standards of rationality that they require of their opponents. In this sense, dismissing questions related to human dignity and the moral status of the embryos on the basis of their irrationality and lack of scientific support, becomes problematic [65,66]. Scientific evidence is often interpreted according to one's own pre-existing moral convictions, socalled evidence-based claims are still influenced by these moral convictions and by the way bioethicists react and argue about new technical possibilities [56,67,68]. Thus, irrational beliefs are not an exclusive ownership of those arguing against embryo research: similar irrational beliefs play a role in assessments of embryo research put forward by those in favour of embryo research on the grounds that it can save future lives. 10
Slippery slope
The slippery slope argument offers a last reason of caution against embryo research [69][70][71]. The slippery slope argument entails that allowing practice x (in this instance, allowing embryo research or extending the limit for embryo research) would initiate a process leading to unethical practices w, y, z. The slippery slope argument against embryo research is approximately like this: embryo research should not be allowed/the limit should not be extended because allowing research on embryos in a very early stage of their development/extending the limit beyond day 14 will lead to the permissibility of research on foetuses and new-borns. The argument voices the concern that once we become accustomed to research on pre-embryos, we will extend the permission for research on embryos on a later stage of development; once we become accustomed to this too, then we will allow research on foetuses and babies. 'Slippery slopers' believe that morally problematic practices such as embryo research should not be allowed, or the limit should not be extended, because of the difficulties of drawing a line between practices currently considered less morally problematic, such as research on pre-embryos, and practices currently considered highly immoral, such as research on foetuses at a late stage of their development. These arguments are widely criticised in the philosophical arena for their lack of empirical evidence, and for not considering that government regulations can be used to prevent such scenarios from coming into being [72][73][74]. In spite of these critiques, they are still used in debates on technological advances, scientific research and policy making [68,69,71,75]. The persistence of slippery slope arguments in academic works and policy making seems to suggest that attempts from philosophers to discredit this argument have been unsuccessful. The charge of starting a slippery slope towards inadmissible practices is still a powerful one [63,68]. An analysis of the theoretical fallacies and merits of this argument is beyond the scope of the paper, as is a final assessment of its validity. However, it is important to note that extending the limit beyond the 14th day of development will provide support to those who rely on the slippery slope argument to oppose embryo research. This might have non-negligible social consequences. For example, extension of the limit for embryo research would show that what is feared by 'slippery slopers' (i.e. that once a practice becomes legal it is difficult to prevent the permission of its future developments) can eventually become a reality.
Even if the limit was extended only for a few days, 'slippery slopers' might take this extension as a sign that their fears are well grounded, contrary to what their critics argue.
Is compromise the best way forward?
Let me take stock of what I have said thus far. In the previous section, I have shown how the arguments of beneficence and technical feasibility in favour of embryo research and of extending the 14-day limit are less straightforward than their proponents seem to suggest. I have also suggested, using the slippery slope argument as an example, that extending the limit for embryo research might undermine public trust in scientists, regulators and overseeing bodies. In order to show the importance of compromise and the value of respecting pluralism in the context of embryo research, I will not juxtapose the arguments of the beneficence of research and of technical feasibility with arguments pertaining to the sanctity of human life and human dignity. These arguments arise in the context of fundamental disagreements concerning the beginning of human life, the value of personhood, and concerning what respect human dignity ought to entail. They are portrayed as factual questions by both advocates and critics of research (i.e. research beyond the 14-day should not be allowed/should be allowed because human embryos are/are not persons and doing research on them would/would not violate their dignity); however, they are not merely a matter of fact, but they are informed and shaped by values, feelings and beliefs. Regardless of one's opinion regarding the values and beliefs of those defending the sanctity of life view, the burden of justifying one's claim should rests both on those defending this view and on those advocating technological progress, contrary to what seems to be normally believed [48].
What I intend to argue in this last section is that even if the question of the moral status of the embryos cannot be easily settled, there are two arguments in favour of reaching a compromise and respecting value pluralism in the context of embryo research: the argument of trust and the argument of respect. I argue that the argument of trust in favour of compromise, albeit being sound and widely used, could, in certain instances, assume instrumental and paternalistic forms. I then argue that in the context of embryo research and more generally in the governance of scientific and technical breakthroughs it would be helpful to employ what I call the argument of respect.
The argument of trust and the argument of respect The first argument in favour of reaching a compromise that, other things being equal, respects value pluralism is what I define as "the argument of trust". It is structured as follows: a) Scientific research is important because it improves people's lives and it should be allowed to carry on b) Public trust is necessary to carry on scientific research c) Therefore, public trust in scientific research ought to be preserved Given competing views concerning the moral status of the embryo, this argument provides a reason in favour of finding a solution of compromise that accommodates as much as possible these views and avoids the risk of overriding those of one camp with those of the other. The argument of trust relies on premise a) to show that people's lives are improved by scientific research [76]. It relies on premise b) to show that public trust is a necessary condition for scientific research to be carried on [77,78]. Trust is needed to ensure public acceptance of concrete applications of research; to preserve public confidence in policies informed by scientific research; and to allow the investment of public resources in scientific research [77,78]. In the context of embryo research, the argument shows that, given the potential benefits of embryo research (premise a), and given the importance of public trust to carry on this type of research (premise b); there are good reasons to preserve public trust (conclusion c). Following this argument, it is possible to draw two conclusions: on the one hand, if the extension of the 14-day limit for embryo research is strongly opposed by the public, 11 then there are good reasons not to extend the limit. On the other, if opposing views coexist in the public understanding of embryo research, then there are good reasons to find a solution that strikes a compromise between these views. The 14-day limit was a solution of compromise between conflicting moral views designed to maintain public trust whilst allowing research to go forward [12,24,79]. Today, there are two questions that need to be addressed, an empirical and a normative-theoretical question. The empirical question is whether the public (or at least a vast majority of it) is against the extension of the 14-day limit for embryo research. The normative-theoretical question is whether public opinion should influence the decision to change or retain the current 14-day rule, and if so, to what extent. An implication of taking into account the empirical question is that, if the public view of embryo research has become more favourable, then there is at least one good reason in favour of revisiting the 14-day rule. 12 In January 2017, a YouGov poll commissioned by the BBC in the United Kingdom, asked respondents' views on an extension of the limit up to the 28th day. Interestingly, 48% of the 1740 respondents said that they would be in favour of extending the limit, while 19% wanted to keep the current limit. In addition to these respondents, 10% maintained that they would want embryo research to be banned altogether, while 23% did not express any of the aforementioned preferences [80]. In addition to the empirical question regarding public attitudes towards the extension of the 14-day limit, one may wonder how such attitudes would be towards therapies and scientific results obtained thanks to research on embryos beyond this limit in countries that may extend it. Currently, the 14-day limit is either enshrined in the laws (for instance in the United Kingdom, Canada and Spain) or specified in the scientific guidelines (for instance in Singapore, China and in the United States) of many countries. However, these regulatory frameworks may change in the future. Hence, if this becomes the case, it would be interesting to investigate public attitudes towards those therapies and other advances of basic research that are made possible by research in countries that allow embryo research beyond day 14. 13 I will not provide an answer to these empirical questions here, if only because of the dearth of empirical data on public attitudes towards the extension of the limit, and embryo research more generally. Regarding, instead, the normative-theoretical question (i.e. whether public opinion should influence the decision to change or retain the current 14-day rule) the argument of trust would indicate that the answer is yes: public opposition to extending the 14-day rule should prevent its extension, while public agreement to a proposed change (i.e. the 28-day limit or other future proposals) should facilitate its extension. The risk of proceeding regardless of public attitudes towards an extension of the limit is that policies derived by embryo research will not be backed up by public consensus and applications of embryo research (e.g. therapies developed thanks to the knowledge yield by embryo research) not accepted. If the importance of maintaining public trust in scientific research (premise b) is motivated by these considerations, then it seems that public trust is only valued for instrumental and extrinsic reasons. In other words, this understanding of the importance of maintaining public trust in scientific research does not value public trust for its own sake, but only for its role in allowing research to go forward. What is problematic of this approach to public trust is that it offers a consequentialist reason in favour of respecting value pluralism, a reason that pertains to the better tangible outcomes of respecting value pluralism over other strategies of governance. In addition to this, when the instrumental justification of maintaining public trust is associated with a representation of the public as illinformed and with little or no understanding of the potential benefits of research, it could be motivated by paternalistic considerations. Scientists and ethicists may risk misinterpreting public concerns and views over embryo research as the result of a lack of expertise or evidence-based information rather than a matter of legitimate and genuine disagreement over values [81,82].
The second premise of the argument of trust, however, could be also motivated by a concern for a deliberative conception of democracy. This conception of democratic governance requires to both citizens and their representatives to provide public justifications of their views and to engage in deliberative processes. Public trust becomes then fundamental to allow these deliberative processes to take place and to foster better strategies for policy-making [82,83]. These deliberative processes of mutual exchange between experts and the public, together with a commitment to respecting conflicting moral views (i.e. respect for value pluralism) provide a reason in favour of finding a solution of compromise that, given competing views concerning the moral status of the embryo, respect this plurality of views and values regarding embryo research. These considerations concerning the importance of maintaining public trust echo other considerations employed to defend democracy as a political system and as a valuable form of governance. These include, for instance, equality: given the existence of conflicting views, values and beliefs, a good reason to respect them is that people or groups holding these different views will be respected by being granted an equal say on matters of common concern [84,85]. Mertens and Pennings [8] have argued in favour of the benefit of compromise in the context of different policies regulating embryonic stem cell research and have concluded that there is a moral obligation to respect conflicting moral views [8]. Similarly, Devolder argued that in spite of the epistemic costs of compromise, middleground positions could still be defended in the context of policy-making [6]. What I suggest here is that the commitment to a democratic decision-making process entails a fundamental respect for value pluralism [86]. In Warnock's and the IVF-Inquiry's time, this respect for value pluralism translated into a deliberation resulting in the 14-day rule. Today it translates into favouring an assessment of the rule and of the potential reasons to change it that once again takes into account the conflicting moral views held in society; an assessment that cannot rest on the argument of the benefice of research and of scientific feasibility alone.
Conclusions
In this article, I have argued that the 14-day limit for embryo research is not valuable in spite of being a solution of compromise, but rather because of it. The idea of a democratic society is that even those who do not accord intrinsic value to the human embryo should respect value pluralism and accord moral worth to opposing views. For this reason, any proposal to change the 14-day rule needs careful evaluation of the scientific feasibility and effective benefits of embryo research; it needs an extensive inquiry into public attitudes concerning embryos; and it needs a deliberative process that takes these elements into account. It does not need positions that consider only the beneficence of research and its technical feasibility. This would be undemocratic and potentially a move not backed up by a rigorous assessment of the science behind embryo research. Warnock and the other members of the IVF-Inquiry, albeit possibly guided by utilitarian-inspired views, opted for valuing a solution of compromise over other solutions [87,88] They did so behind closed doors. In this sense, the recent experiments published in Nature and Nature Cell Biology and the newly sparked debate on embryo research represent a valuable opportunity to begin a truly deliberative and democratic debate on this issue [82,86]. All in all, greater technical potential translates into greater responsibilities and need for deliberation. Endnotes 1 These embryos are not implanted in utero but frozen for further implantation. When a successful pregnancy is established, it had to be decided what do with these supernumerary frozen embryos. 2 For a detailed analysis of this alternative and of its limits, see the work of Katrien Devolder [6]. 3 It must be noted that these two alternatives have been criticised for a number of reasons. For instance, it is unclear whether parthenotes are significantly different from human embryos and whether ANT really escapes the ethical challenges of embryo research and whether it is a scientifically realistic alternative [6]. 4 The submissions from the experts can be found at the House of Commons Library, but they have never been published. 5 I commented elsewhere that this line of argument is problematic [61]. 6 I am grateful to one of the reviewers for raising this point. 7 In the United Kingdom, the law regulating MRTs allows both female and male embryos to be transferred in utero. This is different from the American approach to the clinical implementation of these novel techniques: the National Academies for Science, Engineering and Medicine (NASEM) Report recommended that only male embryos should be implanted in utero [35,89]. 8 For insightful analyses of the MRTs debate and of the ethics of these techniques, see [41-43, 90, 91]. 9 The Nuffield Council on Bioethics is an UK-based independent institution that examines ethical issues arising in the field of biotechnology and biomedicine. 10 For a detailed discussion of such position in another context (i.e. the debate on human enhancement), see [48]. 11 It must be noted that the idea that 'the public' is against scientific developments and breakthroughs is criticised for being artificially constructed (see for instance [58]). 12 Other good reasons include technical feasibility, public utility and so forth. 13 A case in point is Germany, which allows research on embryonic stem cells that are produced abroad (i.e. in countries with less restrictive legislations) before January 2002 (when the German Stem Cell Law was issued), but does not allow to derivation of stem cells from supernumerary embryos [6]. | 12,383 | 2017-05-30T00:00:00.000 | [
"Law",
"Philosophy"
] |
Recent Advances in Underlying Pathologies Provide Insight into Interleukin-8 Expression-Mediated Inflammation and Angiogenesis
Interleukin-8 has long been recognized to have anti-inflammatory activity, which has been established in various models of infection, inflammation, and cancer. Several cell types express the receptor for the cytokine IL-8 and upon its recognition produce molecules that are active both locally and systemically. Many different types of cells, in particular monocytes, neutrophils, epithelial, fibroblast, endothelial, mesothelial, and tumor cells, secrete IL-8. Increased expression of IL-8 and/or its receptors has been characterized in many chronic inflammatory conditions, including psoriasis, ARDS, COPD, and RA as well as many cancers, and its upregulation often correlates with disease activity. IL-8 constitutes the CXC class of chemokines, a potent chemoattractant and activator of neutrophils and other immune cells. It is a proangiogenic cytokine that is overexpressed in many human cancers. Therefore, inhibiting the effects of IL-8 signaling may be a significant therapeutic intervention.
Introduction
IL-8 is secreted by multiple cell types, including monocytes, neutrophils, epithelial, fibroblast, endothelial, mesothelial, and tumor cells. It is released from several cell types in response to an inflammatory stimulus [1]. IL-8 plays an important role in inflammation and wound healing [2] and has a capacity to recruit T cells as well as nonspecific inflammatory cells into sites of inflammation by activating neutrophils [3]. It also stimulates α-smooth muscle actin production in human fibroblasts [4]. Furthermore, IL-8 is chemotactic for fibroblasts, accelerates their migration, and stimulates deposition of tenascin, fibronectin, and collagen I during wound healing in vivo [4]. This paper summarizes current knowledge on the central role of IL-8 in different pathologies. The experimental results and questions posted in research work on IL-8 are covered here, and the potential roles of IL-8 as part of a complex cytokine network in wound healing, angiogenesis, and several cancers are discussed here.
Expression of IL-8 in Immune System
In many cell types, the synthesis of IL-8 is strongly stimulated by IL-1 and TNF-α. In human skin fibroblasts, the expression of IL-8 is enhanced by leukoregulin. The synthesis of IL-8 is induced also by phytohemagglutinins, concanavalin A, double-stranded RNA, phorbol esters, sodium urate crystals, viruses, and bacterial lipopolysaccharides. The expression of IL-8 from resting and stimulated human blood monocytes is upregulated by IL-7 [5].
In chondrocytes, the synthesis of IL-8 is stimulated by IL1-β, TNF-α, and bacterial lipopolysaccharides. In human astrocytes, the synthesis and secretion of IL-8 is induced by IL-1 and TNF-α. Glucocorticoids, IL-4, TGF-β, inhibitors of 5 lipoxygenase, and 1.25(OH)2 vitamin D3 inhibit the synthesis of IL-8. IL-8 is constitutively and commonly produced by various carcinoma cell lines, and this synthesis may be related to the elevation of serum IL-8 in patients with hepatocellular carcinoma. In epithelial, endothelial, and fibroblastic cells, secretion of IL-8 is induced by IL-17 [6].
Protein Characteristics
IL-8 is an 8.4 kDa nonglycosylated protein produced by processing of a precursor protein of 99 amino acids belonging to the CXC subfamily of chemokines which is characterised by two essential cysteine residues, separated by a third intervening amino acid [7,8]. There are two major forms of IL-8, that are the 72-amino acid monocyte-derived form, predominant in cultures of monocytes and macrophages, and the endothelial form which has five extra N-terminal amino acids, predominating in cultures of tissue cells such as endothelial cells and fibroblasts [9,10].
Longer forms of IL-8 (79 and 77 amino acids) and shorter forms (69 amino acids) have been isolated also from conditioned medium of lymphocytes stimulated with bacterial lipopolysaccharides, fibroblasts stimulated by IL-1 or TNF, and polyI: C-stimulated endothelial cells. The predominant form of IL-8 produced by endothelial cells (and also by anchorage-dependent cells and human glioblastoma cells) is the 77-amino acid variant. IL-8 (6-77) has a 5-10-fold higher activity on neutrophil activation, IL-8 (5-77) has increased activity on neutrophil activation, and IL-8 (7-77) has a higher affinity to receptors CXCR1 and CXCR2 as compared to IL-8 (1-77), respectively [11].
IL-8 Structure
The human IL-8 gene (SCYB8) has a length of 5.1 kb and maps to human chromosome 4q12-q21. The mRNA consists of a 101-base 5 untranslated region, an open reading frame of 297 bases and a long 3 untranslated region of 1.2 kb. The 5 flanking region of the IL-8 gene contains potential binding sites for several nuclear factors including activation factor-1, activation factor-2, IFN regulatory factor-1, hepatocyte nuclear factor-1, a glucocorticoid-responsive element, and a heat shock element [12,13].
Biological Functions and Expression of IL-8
The activities of IL-8 are not species specific. Human IL-8 is also active in animal cells. The biological activities of IL-8 resemble those of a related protein, NAP-2 (neutrophilactivating protein-2). It differs from all other cytokines in its ability to specifically activate neutrophil granulocytes where it causes a transient increase in cytosolic calcium levels and the release of enzymes from granules. IL-8 also enhances the metabolism of ROS (reactive oxygen species) and increases chemotaxis and the enhanced expression of adhesion molecules [16].
IL-8 alone does not release histamines. It actually inhibits histamine release from human basophils induced by histamine-releasing factors, CTAP-3 (connective tissue activating protein-3), and IL-3 [17]. IL-8 is involved also in pain meditation [18]. The intravenous administration of IL-8 in baboons causes a severe granulocytopenia followed by granulocytosis which persists as long as sufficient IL-8 levels are maintained [19].
IL-8 is chemotactic for all known types of migratory immune cells. IL-8 inhibits the adhesion of leukocytes to activated endothelial cells and therefore possesses antiinflammatory activities. The 72-amino acid form of IL-8 is approximately tenfold more potent in inhibiting adhesion of neutrophils than the 77-amino acid variant [20].
IL-8 is a mitogen for epidermal cells, and in vivo it strongly binds to erythrocytes. This absorption may be of physiological importance in the regulation of inflammatory reactions since IL-8 bound to erythrocytes no longer activates neutrophils. Macrophage-derived IL-8 supports angiogenesis and plays role in disorders such as rheumatoid arthritis, tumor growth, and wound healing that critically depend on angiogenesis [21]. Simonet et al. (1994) have studied transgenic mice overexpressing IL-8. Elevated serum IL-8 levels were found to correlate with increases in circulating neutrophils and decreases in L-selectin expression on the surface of blood neutrophils. The accumulation of neutrophils was observed in the microcirculation of the lung, liver, and spleen. Neutrophil extravasation, plasma exudation, or tissue damage was absent [22].
IL-8 has been implicated in a number of inflammatory diseases, such as CF [23], ARDS (adult respiratory distress syndrome) [24], COPD (chronic obstructive pulmonary disease), and asthma [25]. The airway epithelium is one of several sources of IL-8 in the airway, and it serves as a barrier against invading microorganisms. Airway epithelial release of IL-8 contributes to host defense by promoting neutrophil chemotaxis and airway inflammation [26].
Clinical Significance
Inflammation is the single greatest cause of pain. The first inflammatory mediators recognized to have potent hyperalgesic properties was bradykinin [27], since then a host of inflammatory medicators have been identified which can produce hyperalgesia, including prostaglandins, International Journal of Inflammation 3 leukotrienes, serotonin, adenosine, histamine, IL-1, IL-8, and NGF (nerve growth factor).
Cytokines are produced by leukocytes in response to exposure to bacterial toxins or to inflammatory medicators [28]. IL-8 has also been found to produce a sympatheticdependent hyperalgesia which does not appear to be medicated by prostaglandin [18,29].
IL-8 was shown to be angiogenic factor in 1992 [21,30]. Kitadai et al. Found high levels of IL-8 in six of eight carcinoma cells and lines and 32 of 39 gastric carcinoma specimens as compared to normal mucosal control. The levels of IL-8 correlated strongly with the specimen vascularity [31]. IL-8 was shown to be major inducer of neovascularisation of squamous cell carcinoma by lingen et al. [32]. IL-8 also plays a significant role in other cancer by mediating angiogenesis and tumorigenesis. IL-8 is produced by a wide panel of human cancer cells including colon [10], melanoma [33], prostate [34], ovary [35,36], or breast [37][38][39][40].
IL-8 and Inflammatory Diseases
7.1.1. Proinflammatory Effects of IL-8. IL-8 is an oxidative stress-responsive proinflammatory chemokine, released from epithelial cells following particle-induced oxidative stress leading to neutrophil influx and inflammation [41,42]. IL-8 is a potent chemoattractant and activator of neutrophils, the transcription of which is NF-κB dependent [43].
Proinflammatory stimuli are considered to be a major regulator of IL-8 levels in response to injury. IL-8 is involved in many of the wound healing processes. It not only serves as a chemotactic factor for leukocytes and fibroblasts but also stimulates fibroblast differentiation into myofibroblasts and promotes angiogenesis [44,45].
IL-8 is a proinflammatory cytokine that is upregulated by different cellular stress stimuli [46]. Human cells are characterized by their marked capacity for varying the expression levels of IL-8, allowing modulating the concentration of this cytokine to control the degree of neutrophil infiltration in the injured tissue [46]. The expression of IL-8 is regulated at both transcriptional and posttranscriptional levels [47], and the main MAPK pathways (p38, MEK1/2, and JNK) play a significant role in the release of IL-8 during the inflammatory process [46].
Enhancement of Corneal Wound
Healing. The induction of IL-8 facilitates an early innate immune response to infection in the corneal stroma and represents an elementary defense mechanism in corneal wound healing [48]. It enhances healing by rapidly chemoattracting leukocytes and fibroblasts into the wound site, stimulating the latter to differentiate into myofibroblasts. In turn, myofibroblasts are critical for wound contraction and closure and for the production of extracellular matrix molecules, which leads to development of granulation tissue [44].
The role of PDGF in corneal wound healing [49] and IL-8-mediated neutrophil chemotaxis [50] has been previously documented. It enhances healing by rapidly chemoattracting leukocytes and fibroblasts into the wound site [2], where PDGF increases IL-8 chemokine secretion twofold in human corneal fibroblasts, indicating that IL-8 is involved in PDGFmediated corneal wound healing. Both human corneal keratocytes and epithelial cells have been shown to synthesize and release IL-8 following cytokine stimulation and/or infection [51].
Proliferation in
Arthritis. IL-8 may be of clinical relevance in psoriasis and rheumatoid arthritis. Elevated concentrations are observed in psoriatic scales, and this may explain the high proliferation rate observed in these cells. IL-8 may be also a marker of different inflammatory processes [52].
IL-8 (and also IL-1 and IL-6) probably plays a role in the pathogenesis of chronic polyarthritis since excessive amounts of this factor are found in synovial fluids [53]. The activation of neutrophils may enhance the migration of cells into the capillaries of the joints. These cells are thought to pass through the capillaries and enter the surrounding tissues thus causing a constant stream of inflammatory cells through the joints [54].
Role in Myelodysplastic Syndrome.
Human recombinant IL-8 has shown that the lesion responsible for defective functions of neutrophils in patients with myelodysplastic syndrome can be restored without stimulating myeloid progenitor cells. IL-8 may be able, therefore, to reduce the risks of lethal infections in these patients without the potential risk of stimulating leukemic clones [55].
Gastric Mucosal Injury and Cancer Progress.
In the human gastric mucosa, elevated levels of ROS are associated with Helicobacter pylori infection [56], and it leads to oxidative DNA damage in the gastric mucosa thereby contributing to mucosal injury and promoting carcinogenesis [57,58]. H. pylori infection is also associated with increased gastric mucosal cytokine expression including IL-8 [59,60] and TNF-α [61,62].
TNF-α is an endogenous mediator of proinflammatory cytokine stimulation and can induce ROS [63] and stimulate the induction of various genes involved in inflammation [64] including IL-8. IL-8 is an important mediator of H. pyloriassociated neutrophil infiltration and gastric inflammation [57]. ROS modulates IL-8 secretion in gastric epithelial cells, suggesting that IL-8 gene expression in the gastric mucosa is redox sensitive [65].
Although a regulatory role of TNF-α in epithelial cell repair has been described [66], it is well established that TNFα stimulates IL-8 and contributes to epithelial cell injury and apoptosis [63].
7.1.6. Increased BAL Fluid in ALI and COPD. In human ALI (acute lung injury), neutrophil infiltration is an early and important pathophysiological event, and IL-8 appears to have an important role in mediating this process [67,68]. Clinical research demonstrated increased IL-8 levels in serum International Journal of Inflammation and BAL (bronchoalveolar lavage) fluid of patients with ALI [69,70]. Increased BAL fluid levels of IL-8 predicted the development of ALI in at-risk patient populations and are associated with increased mortality in patients with ALI [71]. In animal models of ALI, administration of IL-8 antibody conferred protection [72]. IL-8 is also produced by respiratory epithelium [26].
Studies involving alveolar macrophages, U937 cells, isolated peripheral blood monocytes, and human whole blood demonstrated that hyperoxia modulates IL-8 gene expression [73]. Oxidant stress other than hyperoxia was previously described to induce IL-8 expression in respiratory epithelial cells. DeForge et al. [74] and Lakshminarayanan et al. [75] hyperoxia alone had a minimal effect on IL-8 gene expression. However, combination of hyperoxia and TNF-a synergistically increased IL-8 gene expression.
COPD (chronic obstructive pulmonary disease) cigarette smoke can also induce airway inflammation. It has been shown to activate proinflammatory transcription factors NF-κB and activator protein (AP)-1 [76] as well as to upregulate the expression of TNF-α and IL-8, proinflammatory mediators associated with COPD [77].
Prevention of Lung Epithelial Cells
Injury. TNF-α levels are markedly elevated in BAL fluid from patients with ARDS [78], and TNF-α levels are associated with increased IL-8 levels. TNF-α is a major inducer of IL-8 expression in lung epithelial cells [26]. Neutralizing IL-8 antibodies prevented lung injury in animal models of lung disease, indicating IL-8 is an important mediator of lung injury [79,80]. IL-8 gene expression is induced by a wide variety of agents including cytokines, growth factors, bacterial and viral products, oxidants, and others [81]. Induction of IL-8 gene expression is subject to both transcriptional and posttranscriptional regulation in a cell/tissue-and stimulusspecific manner [46,81].
In lung epithelial cells, TNF-α activates IL-8 promoter activity via recruitment of NF-κB to a TNF-α response element consistent with a role for transcriptional mechanisms in the induction of IL-8 gene expression in lung epithelial cells [82].
Signalling in Cystic
Fibrosis. IL-8 drives the inflammatory response in cystic fibrosis (CF) which is an autosomal recessive disorder caused by mutations in the gene encoding the cystic fibrosis transmembrane conductance regulator (CFTR) [83].
BAL fluid in patients with CF contains increased levels of proinflammatory cytokines and neutrophils. IL-8 levels attribute to activate NF-κB [84]. Prostaglandin E2 (PGE-2) is a potent mediator of inflammation produced by cyclooxgenation of arachidonic acid, and its hypersecretion results in elevated IL-8 secretion through unidentified signaling pathway [85].
In human T lymphocytes, PGE-2 induces C/EBP homologous protein (CHOP) transcription factor that binds to the IL-8 promoter [85]. CHOP is a growth arrest and DNA damage-inducible gene 153 (GADD153) protein. PGE-2 mediates the IL-8 inflammatory response in CF cells through the CHOP transcription factor. The inflammatory response in CF contributes to neutrophil-driven lung destruction [86,87]. Several cytokines, such as IL-1b, TNF-α, IFN-δ, and bacterial products, induce IL-8 release from airway epithelial cells [88], thus exacerbating the baseline inflammatory milieu in CF.
Much of the PGE-2 in airways is likely to be derived from the epithelium [89], and the stimulation of chloride secretion in airway epithelial cells by proinflammatory mediators such as bradykinin (BK) occurs through the induced release of PGE-2 (26). Moreover, BK induces IL-8 secretion in non-CF and CF human airway epithelia [90].
The physiologic as well as pathologic concentrations up to 100 mM PGE-2 upregulate endogenous IL-8 expression in human intestinal epithelial cells [91] and enhance IL-8 production in human synovial fibroblasts stimulated with IL-1b [92].
7.1.9. Increased Expression in Asthma. IL-8 plays an important role in inflammatory lung diseases like bronchial asthma or severe infections caused by respiratory syncytial virus (RSV), and during infancy it might lead to the development of recurrent wheezing and/or bronchial asthma [93]. Increased concentrations of IL-8 are found in the BAL fluid and sputum of asthmatic patients [13]. In addition, repeated administration of IL-8 into the airways induces bronchial hyperreactivity in guinea pigs [94]. Genetic association of IL-8 has been described with asthma [95] and RSV bronchiolitis [96].
IL-8 binds with high affinity to two different receptors: IL-8 receptor α (IL-8RA, CXCR1) and β (IL-8RB, CXCR2). These closely related proteins are members of the super family of receptors, which couple to guanine nucleotide-binding proteins. IL-8RA is localized on chromosome 2q35 [13], where linkage to total serum IgE levels in asthmatics has been described [94]. Association of IL-8RA polymorphisms has recently been described with asthma and chronic obstructive pulmonary disease [95]. However, IL-8RA polymorphisms do not play a major role, neither in the development of severe RSV infections nor in asthma.
Increased Expression in the Colon Mucosa with Inflammatory Bowel Disease. IL-8 is produced in the colonic lamina propria of patients with inflammatory bowel disease.
There is no difference in IL-8 protein concentrations between inflamed mucosa of patients with Crohn's disease or ulcerative colitis. IL-8 does thus not permit the differentiation between these two diseases entities. Mucosal IL-8 protein and IL-8 mRNA concentrations are correlated with the degree of inflammation. IL-8 mRNA is strongly expressed by intestinal inflammatory cells but not by intestinal epithelial cells suggesting that virtually all IL-8 is produced by interstitial inflammatory cells [96].
An imbalance of the intestinal immune system with a shift towards proinflammatory mediators is a characteristic feature of inflammatory bowel diseases [97]. Among the proinflammatory cytokines, IL-8 together with IL-l and tumour necrosis factor play an important part.
IL-8 is synthesised by various colonic cancer cell lines like HT-29 cells or Caco-2 cells [98]. Evidence has also been provided that isolated normal intestinal epithelial cells may synthesise IL-8 [98].
An increased synthesis of IL-8 has been described in the mucosa of patients with inflammatory bowel disease. Where Mahida and coworkers [99] found enhanced mucosal tissue concentrations of IL-8 essentially only in patients with ulcerative colitis but not in patients with Crohn's disease, Izzo et al. [100] detected increased concentrations of IL-8 also in the colonic mucosa from patients with Crohn's disease.
Promotion of Endometriosis
Pathogenesis. IL-8 is representative of α-chemokine group and is a chemotactic and angiogenic factor [101]. It acts as an endometrial autocrine and paracrine factor and regulates many physiologic processes such as menstruation and remodeling of endometrium [102]. In addition, IL-8 also contributes to the pathogenesis of endometriosis by promoting a vicious cycle of endometrial cell attachment, invasion, immune protection, cell growth, and further secretion [103].
The presence of inflammation and neovascularization observed in and around ectopic endometrial implants and the presence of inflammatory neutrophils in these lesions [104] is compatible with the biological actions of IL-8 [105]. IL-8 is detectable in the peritoneal fluid of most women with an active ovarian cycle, and it is a normal constituent of peritoneal fluid in women with and without endometriosis. The concentration of IL-8 in the peritoneal fluid was higher in women with endometriosis compared to women without, and that difference was statistically significant as has been reported previously [106,107].
Peripheral blood macrophages from endometriosis patients produced increased concentrations of IL-8 [101]. In women with early endometriosis (American Fertility Society Stage 1), IL-8 concentrations is much high as compared to women with later stages of the disease. It can be speculated that this may implicate IL-8 in the induction of the disease, and it is conceivable that other chemokines participate in the chronic phase of endometriosis. There are a number of potential cellular sources of IL-8 in endometriosis. Enhanced production by peritoneal macrophages has been found [107], but normal endometrial gland cells [108] and stromal cells [109] also produce IL-8 that can be enhanced by proinflammatory mediators. In normal nonpregnant endometrium, IL-8 was found to be localized perivascularly [102], suggestive of a direct role upon endothelial cells as well as a function in presenting a fixed chemotactic stimulus to circulating leukocytes.
Intervertebral Disc Causing Low Back Pain.
Human NP (nucleus pulposus) produces IL-8. Significant quantities of IL-6, IL-8, and PGE2 were produced by both the sciatica and low back pain groups.
Burke et al. studied the production of inflammatory mediators in disc tissues in a similar group of patients [110] and compared the levels of IL-6, IL-8, and PGE2 in their disc tissue from patients undergoing discectomy for sciatica with those from patients undergoing fusion for discogenic low back pain which showed that more IL-6, IL-8, and PGE2 are produced by discs from patients with low back pain compared with discs from patients with sciatica. There was a trend towards less exposure of the NP in the group with low back pain only compared with those with sciatica introducing a bias towards higher levels of mediator production in the latter [111].
The rates of production of IL-6 and IL-8 in the AI and EXT categories of discs in low back pain are much higher than those found in those with sciatica. A combination of the innervation of the NP and increased production of proinflammatory mediators suggests that the mechanism for discogenic low back pain may be the induction of hyperalgesia in the newly innervated degenerating NP. Both IL-8 and PGE2 are known to induce hyperalgesia [112].
IL-8 and Cancer.
The extensive effects of increased IL-8 activity on tumor pathogenesis make it a unique therapeutic target in cancer therapy. For example, IL-8 promotes tumor growth, angiogenesis, and metastasis in murine models of several cancers [113]. Moreover, blocking IL-8 activity with a monoclonal antibody has been shown to decrease tumor growth in two murine cancer models [114]. Blockade of IL-8 expression in some human melanoma cell lines by antisense RNA has shown that IL-8 functions as an autocrine growth modulator for these cells [33].
High Expression in Ovarian
Cancer. IL-8 is expressed at high levels in ovarian cancer cells where expression is correlated with tumorigenicity [115]. IL-8 is overexpressed in most human cancers, including ovarian carcinoma [116]. Induction of IL-8 expression is mediated primarily by the transcription factor NF-κB [117]; however, the Src/signal transducer and activator of transcription 3 (Stat3) pathways may also promote IL-8 production independent of NF-κB [118]. High tumor IL-8 expression is significant in ovarian cancer associated with advanced tumor stage and high-tumor grade. The higher the IL-8, the poorer the survival rate. IL-8 overexpression in ovarian cancer is associated with decreased patient survival and is an independent prognostic factor for poor clinical outcome that targeted therapy with IL-8 siRNA-DOPC in combination with chemotherapy effectively reduced tumor growth in both chemotherapy-sensitive and chemotherapy-resistant ovarian cancer models [119]. these antitumor effects are likely due to a reduction in proangiogenic factors present in the tumor microenvironment that led to decreased angiogenesis and tumor cell proliferation following silencing IL-8 expression. IL-8 may be a potential therapeutic target in ovarian cancer. IL-8 overexpression is reported in multiple malignancies and is frequently associated with poor clinical outcome [120].
Several studies have examined the utility of IL-8 as a diagnostic or prognostic marker in patients with ovarian cancer [121][122][123]. For example, increased IL-8 expression in ovarian cyst fluid, ascites, serum, and tumor tissue from ovarian cancer patients is found to be associated with highgrade and advanced-stage cancers, as well as with decreased disease-related patient survival [121,123]. Collectively, these data provide the rationale for targeting IL-8 as a therapeutic approach in ovarian carcinoma.
Decrease in IL-8 expression, especially when combined with taxane-based chemotherapy, led to a statistically significant reduction in orthotopic tumor growth. Xu and Fidler [36] reported that IL-8 overexpression was directly associated with increased tumor vascularity and tumor cell proliferation in ovarian carcinoma.
Enhancement of Cancer Mechanism in Melanoma.
In melanoma, increased IL-8 levels are associated with increased tumor angiogenesis; conversely, a reduction in tumor microvessel density occurred following treatment with an anti-IL-8 antibody [124]. IL-8 has also been shown to increase tumor cell proliferation and to prolong the survival of human endothelial cells and enhance their ability to form tubules, which supports the theory that the proangiogenic effects of IL-8 are due to activation of both tumor and endothelial cells [113,124].
Members of the MMP family of proteins promote tumor angiogenesis as well as cellular detachment, invasion, and metastasis, and some MMP family members, including MMP-2 and MMP-9, are reported to be regulated by IL-8 expression [125,126]. IL-8 induces MMP-2 and MMP-9 expression in bladder cancer and melanoma cell lines, which contributed to increased tumor cell invasion in vitro [113,126].
Increase of VEGF and Neuropilin Expression in
Pancreatic Cancer. IL-8 is upregulated in both cancer and chronic inflammatory diseases of the pancreas [127]. It is linked to pancreatic cancer tumorigenesis primarily through its regulation of angiogenesis and metastasis [120]. Human umbilical vein endothelial cells (HUVECs) proliferation and angiogenesis are both increased when cocultured with pancreatic cancer cells, or with exogenous in IL-8. The increase of cell proliferation and angiogenesis of HUVEC can be blocked by IL-8-neutralizing antibodies [128].
IL-8 is associated with chronic diseases of the pancreas [127]. It is overexpressed in most human pancreatic cancer tissues [120]. Higher IL-8 levels in pancreatic cancer patient serum are associated with significant weight loss [129]. The expression levels of IL-8 appear to correlate with their tumorigenic and metastatic potential in an orthotopic xenograft model [130]. Furthermore, treatment with exogenous IL-8 increases the invasiveness of human pancreatic cancer cell, while blocking IL-8 inhibited the growth of another human pancreatic cancer cell [131]. Blocking IL-8 in pancreatic cancer cells decreased their growth and their ability to attach to endothelial cells, suggesting that IL-8 is an autocrine mitogenic factor important for metastasis [132].
The expression of IL-8 can be induced by many stimuli including lipopolysaccharide, phorbol 12-myristate 13acetate (PMA), IL-1, and TNF. Several stress factors, such as hypoxia, acidosis, nitric oxide (NO), and cell density, also significantly influenced the production of IL-8 in human pancreatic cancer cells [133].
IL-8 is involved in cancer hypoxia pathway where its expression is regulated by hypoxia-inducible factor-1 (HIF-1), NF-κB, and KRAS [128]. IL-8 overexpressed in pancreatic cancer increases MMP-2 activity and plays an important role in the invasiveness of human pancreatic cancer [130,131,134], and human pancreatic cancer is associated with increased expression of IL-8 [127].
IL-8 as a proantigenic cytokine that helps the spread of distant metastasis by neovascularization and promotes the survival of the tumor mass in general by maintaining a rich capillary network to accommodate the heavy nutrient requirements of this aggressive cancer. Blocking IL-8 and IL-8 receptor CXCR2 significantly inhibited angiogenesis [118,135].
Both IL-8 and VEGF are important components in cellular response to hypoxia, a common event in cancer, including human melanoma, colon cancer, and pancreatic cancer [117]. IL-8 acts as a direct growth and survival factor on pancreatic cancer cells, and IL-8 as multifaceted regulator of gene expression can regulate multiple pathways including angiogenesis, metastasis, and response to hypoxia in pancreatic cancer [136].
Expression in the Neuroendocrine and Nonneuroendocrine Compartments of Prostate Cancer.
Moore et al. demonstrated that IL-8 is a positive regulator of tumor formation in severe combined immunodeficiency (SCID) in mice injected with the prostate cancer cell line PC-3 [137]. Patients with PC (prostate cancer) have high serum levels of IL-8 which correlates with the stage of the disease. Additionally, in PC serum IL-8 levels have been determined to be an independent prognostic variable from the serum levels of free and total prostate-specific antigen (PSA) [138]. The combined use of free and total PSA ratio and IL-8 levels has been found to be more accurate in distinguishing between prostate cancer and benign prostatic hypertrophy.
In PC, serum IL-8 levels increase with progression of the disease [139]. The PC cell line 3 expresses and secretes IL-8 [140] and expresses IL-8 receptors CXCR1 and CXCR2 [140]. IL-8 is a mitogenic [21] and angiogenic factor [141]. PC cell line LNCaP does not express IL-8, but selection of the cells in androgen-deprived media led to the emergence of a cell line that produces IL-8 and is more tumorigenic than the parental cells [141].
The IL-8 receptor CXCR1 is rarely expressed in benign epithelial cells, its expression is increased in PIN (pancreatic invasive neoplasm), and further increase in invasive tumor suggests paracrine mechanism where IL-8 produced by the NE tumor cells may promote the proliferation of the non-NE tumor cells in the absence of androgen [142].
Metastatic Factor in Breast
Cancer. Estrogen receptor (ER) status is an important parameter in breast cancer management as ER-positive breast cancers have a better prognosis than ER-negative tumors. IL-8 is overexpressed in most ER-negative breast, ovary cell lines, and breast cancer, International Journal of Inflammation 7 whereas no significant IL-8 levels are found in ER-positive breast or ovarian cell lines. IL-8 is considered as a potential metastatic factor in breast cancers [38]. IL-8 is found not only in normal but also in cancerous breast [37,143].
Metastasis represents the major remaining cause of mortality in human breast cancer, which suggests that invasiveness is associated with lack of ER and changes in IL-8 expression. However, there was no correlation between ERβ expression and IL-8 level, indicating ERα, the main estrogen receptor in ER-positive breast and ovarian cancer cells, is the receptor linked to IL-8 expression. Patients with recurrent prostate, breast, or ovarian cancer exhibit higher levels of IL-8 in serum or peripheral blood leukocytes [138,144,145] and in cancer tissues [34]. Several studies show that IL-8 expression in breast tumors is identical between normal and cancer tissue [146,147].
Concerning IL-8 receptors, CXCR1 expression is extremely low in all lines, whereas most of the cells show a nice expression of CXCR2, without any correlation with ER status [143]. CXCR1 and CXCR2 which are encoded by two distinct genes [148,149] are expressed in most cancer cells with no apparent correlation with the grade of the tumor [150,151].
Exogenous expression of IL-8 increases by twofold the invasion rate of ER-positive breast cancer cells, without affecting the in vitro proliferation rate of these cells which is the proinvasive role of IL-8. When IL-8 is transfected in cancer cells, both tumor inhibition [152,153] and promotion [126,154] have been observed in vivo depending on the cell type [155]. Tumor). Several chemokines secreted from EBV-infected NPC cells are increased upon EBV reactivation into the lytic cycle, and IL-8 is upregulated most significantly [156].
BEV (Epstein Barr Virus) and NPC (Nasopharyngeal
The most frequent histological type of NPC is closely associated with Epstein-Barr virus (EBV) infection [157]. NPC exhibits several inflammatory features in the tumor tissues, including intensive leukocyte infiltration, abundant expression of inflammatory cytokines, and constitutive activation of inflammation-associated transcription factors [158]. Expression of several chemokines has been demonstrated in NPC tumors, including IL-8, macrophage inflammatory proteins (MIPs), macrophage chemoattractant proteins (MCPs), and RANTES [159].
EBV reactivation in NPC cells is associated with the induction of certain chemokines where IL-8 was upregulated most significantly and consistently [147].
Neutrophils first invoke into inflamed tissues, then they produce a variety of chemokines with potentials to direct sequential recruitment of other leukocytes [160]. Therefore, by initial recruitment of neutrophils, IL-8 may trigger the subsequent influx of leukocytes in NPC. Notably, neutrophil infiltration promoted by tumor-derived IL-8 has been linked to the poor prognosis of bronchioloalveolar carcinoma and to increased genetic instability of Mutatect tumors [161], suggesting that IL-8-attracted neutrophils may contribute to tumorigenesis.
IL-8 is associated with the level of vascularization in NPC [159,162]. Moreover, NPC is a highly metastatic cancer, and IL-8 may be involved in the phenotype since it can promote tumor invasion or metastasis through induction of certain metalloproteinases [154].
IL-8 is a converged target gene of gammaherpesviruses in both latent and lytic infection states. EBV utilizes the lytic protein Zta and the latent protein LMP1 to induce IL-8 expression, while Kaposi's sarcoma-associated herpesvirus (KSHV) can upregulate IL-8 by either the lytic protein K15 or the latent protein K13 [163]. Since KSHV-associated Kaposi's sarcoma also exhibits several inflammation-like features, induction of IL-8 is likely to be critical for the virus-mediated "inflammatory tumorigenesis"; [159,162].
Blockage of IL-8 or IL-8 receptors may be considered a potential therapeutic approach for treating NPC or other inflammation-related malignancies [156].
Conclusion
IL-8, a potent angiogenic, proinflammatory, growth-promoting factor, properties which may be shared by other chemokines [164], is also a chemoattractant for neutrophils and induces expression of several cell adhesion molecules [164]. It also lead to neutrophil activation [165] and hence might contribute to the pathogenesis of inflammatory diseases. IL-8 specifically chemoattracts several cell types, which is the basis for inflammation. Neovascularization is a crucial step in tumor growth and metastasis. Regulation of IL8 production is a key mediator of inflammation by NF-κB. The receptors for IL-8 are widely expressed on normal and various tumor cells.
IL-8 induces proinflammatory, chemotactic, and matrix, degradative responses in many pathologies. More research will certainly help to achieve a much better understanding of the function of IL-8 in different pathologies. However, knowledge gained from IL-8 data might be applied in a foreseeable future to cure the low back pain that often accompanies disc degeneration and therefore be beneficial for the patient. Despite exciting advances on IL-8, significant technical obstacles still have to be overcome before such approaches become realistic alternative therapeutic options to conventional surgical intervention procedures. Studies to understand IL-8 gene expression in the various cell types may lead to new therapeutics to enhance or inhibit IL-8 production. Many outstanding questions regarding IL-8 and inflammation exist. Further examination pin pointing the role of different IL-8 expressing subsets will allow us to better understand this cytokine. | 7,728 | 2011-12-22T00:00:00.000 | [
"Biology",
"Medicine"
] |
A head and neck treatment planning strategy for a CBCT‐guided ring‐gantry online adaptive radiotherapy system
Abstract Purpose A planning strategy was developed and the utility of online‐adaptation with the Ethos CBCT‐guided ring‐gantry adaptive radiotherapy (ART) system was evaluated using retrospective data from Head‐and‐neck (H&N) patients that required clinical offline adaptation during treatment. Methods Clinical data were used to re‐plan 20 H&N patients (10 sequential boost (SEQ) with separate base and boost plans plus 10 simultaneous integrated boost (SIB)). An optimal approach, robust to online adaptation, for Ethos‐initial plans using clinical goal prioritization was developed. Anatomically‐derived isodose‐shaping helper structures, air‐density override, goals for controlling hotspot location(s), and plan normalization were investigated. Online adaptation was simulated using clinical offline adaptive simulation‐CTs to represent an on‐treatment CBCT. Dosimetric comparisons were based on institutional guidelines for Clinical‐initial versus Ethos‐initial plans and Ethos‐scheduled versus Ethos‐adapted plans. Timing for five components of the online adaptive workflow was analyzed. Results The Ethos H&N planning approach generated Ethos‐initial SEQ plans with clinically comparable PTV coverage (average PTVHigh V100% = 98.3%, Dmin,0.03cc = 97.9% and D0.03cc = 105.5%) and OAR sparing. However, Ethos‐initial SIB plans were clinically inferior (average PTVHigh V100% = 96.4%, Dmin,0.03cc = 93.7%, D0.03cc = 110.6%). Fixed‐field IMRT was superior to VMAT for 93.3% of plans. Online adaptation succeeded in achieving conformal coverage to the new anatomy in both SEQ and SIB plans that was even superior to that achieved in the initial plans (which was due to the changes in anatomy that simplified the optimization). The average adaptive workflow duration for SIB, SEQ base and SEQ boost was 30:14, 22.56, and 14:03 (min: sec), respectively. Conclusions With an optimal planning approach, Ethos efficiently auto‐generated dosimetrically comparable and clinically acceptable initial SEQ plans for H&N patients. Initial SIB plans were inferior and clinically unacceptable, but adapted SIB plans became clinically acceptable. Online adapted plans optimized dose to new anatomy and maintained target coverage/homogeneity with improved OAR sparing in a time‐efficient manner.
INTRODUCTION
Conventional radiotherapy (RT) consists of generating an initial treatment plan based on a pre-treatment CT and associated relevant anatomy/disease delineated by the physician, and subsequently treating patients with this initial treatment plan throughout the entire treatment course, usually lasting a few weeks.This is based on the assumption that the initial patient model/plan remains applicable for the full treatment course.However, daily changes in patient positioning and/or patient anatomy (e.g., weight, tumor size, organ filling, etc.) can lead to uncertainties and even result in missing the tumor and/or depositing additional doses in surrounding normal tissues potentially resulting in treatment failures and/or unwanted toxicities. 1 Margins to account for these uncertainties are incorporated into the treatment planning process; however, this may limit the ability to deliver tumoricidal radiation doses without surrounding normal tissue toxicity. 2,3he concept of adaptive radiotherapy (ART) was introduced in 1997, where initial treatment plans are adjusted accounting for variations throughout a treatment course. 4ART can be offline (conventional treatment planning process described above is repeated) or online (plan adaptation is performed with the patient on the table, prior to treatment delivery, based on anatomy of the day). 57][8] Similarly, Online ART (oART) has shown improvement in target coverage and OAR sparing for standard fractionation and stereotactic RT for H&N, 9 abdomen, 10-12 pelvic [13][14][15] and, ultra-central lung. 16he newly developed Ethos CBCT-guided ring-gantry oART system (Varian Medical Systems, Palo Alto, CA) is based on the Varian Halcyon treatment machine with a closed-bore allowing faster four revolutions per minute (RPM) gantry rotation and a 6MV flattening filter free beam.The multi-leaf collimators are dual-stacked and staggered, providing 0.5 cm effective leaf width at the isocenter, without additional collimating jaws.Currently, Ethos uses iteratively reconstructed kV-CBCT for image guidance. 17Ethos provides automated planning with the intelligent optimization engine (IOE) using physician clinical goals to automatically generate the optimization objective functions for the photon optimization (PO) algorithm. 18The IOE automatically creates helper structures (e.g., dose-shaping rings for targets), objectives to control monitor units (MU), and objectives for dose fall off from targets to spare normal tissue. 18The IOE also creates cropped non-overlapping structures for dose optimization to help achieve clinical goals based on assigned priority levels.There are four priority levels for clinical goals, from P1 (most important) to P4 (least important).Additionally, there is an "R" priority, which will only report a given dosimetric value for evaluation purposes.The IOE reassigns weights for the objectives to most closely achieve user-provided clinical goals.Note, this is a novel automated planning paradigm that differs from traditional manual manipulation of optimization objective functions to indirectly achieve physician goals, as is done in most conventional treatment planning systems (TPSs).As such, one focus of this work was to develop/evaluate a planning strategy within this paradigm for H&N that would produce clinically acceptable initial plans and was robust enough to maintain plan quality during oART.
The online adaptive workflow in Ethos (v1.1) generates a synthetic CT for dose calculation using deformable image registration (DIR) of the initial planning CT to the CBCT of the day. 18,19During the adaptive process, only certain auto-segmented (via artificial intelligence (AI) models and/or DIR) structures are reviewed and edited when needed.This includes a subset of OARs called "influencers" that are used for structureguided DIR to generate remaining OARs and targets. 19argets as well as OARs with high priority (P1 or P2) are reviewed and edited, while remaining structures are not visible or editable.Ethos generates a "scheduled plan" (initial plan re-calculated on the anatomy of the day) and an "adapted plan" that has the same beam geometry as the initial plan but is re-optimized to adapt the dose based on the anatomy of the day using the IOE with the same clinical goals as the initial plan. 19n this study, we evaluated the capabilities of Ethos for H&N online adaptive radiotherapy by developing an optimal strategy to generate clinically acceptable initial plans as well as efficiently/effectively provide online plan adaptation to improve overall treatment quality.Clinical data from previously treated H&N patients that received offline adaptation during their treatment course were used to provide extreme scenarios where real anatomical changes were appreciable enough to clinically justify an offline plan adaptation with complete replanning.These significant changes in patient anatomy were chosen to push the limits of the Ethos online adaptive system.][22] Demonstrated success of such adaptation in this work will emphasize the robustness of the optimizer and its ability to account for not only minor but major changes.Use of cases with clinical offline adapted data also provided benchmark clinical dosimetry for adapted plans.
Initial planning
For this study, retrospective clinical data, such as simulation-CT and physician defined structures, were used to re-plan, in Ethos, 20 previously treated H&N patients.In total, 30 H&N initial plans were created in Ethos for the 20 patients' cohort: 10 simultaneous integrated boost (SIB) patients (10 plans) and 10 sequential boost (SEQ) patients (20 plans: 10 base and 10 boost plans).The distribution of these patients was: 60% Oropharynx, 15% Oral cavity and 5% of each hypopharynx, nasopharynx, larynx, maxillary sinus, and unknown primary.These patients were selected because they all required offline adaptation during their treatment course due to considerable anatomical changes.A summary of the initial planning and online adaptive process for H&N patients within Ethos highlighting what data were analyzed, is in Figure S1 and has generally been described previously in the literature. 1,20,23Manual contouring of GTV and CTV Med/Low was performed by subspecialized H&N radiation oncologists where CTV High = GTV + 5 mm and PTV High/Med/Low = CTV High/Med/Low + 3 mm.Following expansion of GTVs, clinical CTV High was manually cropped from regions where physicians were confident that disease had not spread (e.g.,bone,air,fascial planes, etc.) and PTVs were cropped 5 mm from the skin to allow for dose build-up.In this study, plans were stratified based on the boost being incorporated into all fractions of a given plan, which is the case for SIB patients, or the boost being treated in 10 fractions that do not include treatment of PTV low , which is the case for SEQ patients.Ten patients were planned based on SIB with different PTV dose levels.These were distributed as follows (PTV High /PTV Med /PTV Low [Gy]): 70/-/56 (n = 5), 66/-/52.8(n = 2), 69.3/-/54.12(n = 1), 70/63/56 (n = 1), 66/60/54 (n = 1).The remaining 10 patients were planned based on SEQ with the total (base plus boost plan) PTV dose levels as follows (PTV High /PTV Med /PTV Low [Gy]): 66/-/41.4(n = 4), 66/59.4/41.4(n = 3), 66/-/40 (n = 1), 66/46/41.4(n = 1), 66/(59.4& 46)/41.4(n = 1).Note that SEQ base plans had multiple PTV dose levels as well, similar to SIB plans.
For this work, an Ethos emulator software package was utilized (Ethos v1.1, Varian Medical Systems, Inc., Palo Alto, CA).The emulator provided the Ethos TPS for initial planning and capabilities to simulate the online adaptive workflow in silico without requiring a fully functioning clinical system.For initial plan generation, clinical structures (GTVs, CTVs, and OARs) were imported into a templated Ethos planning directive for efficiency and consistency.PTVs and CTVs were derived from GTVs using the above clinical margins.Similarly, planning organ at risk volumes (PRVs) were created using Ethos derivation so that they propagated appropriately when GTV/critical OARs were modified during adaptation.The nomenclature suggested by AAPM TG263 was used, where structure names ending in "_PRVxx" represented OAR expansions of xx millimeters. 24pecific institutional guidelines and associated clinical goals/priorities for the initial planning approach in Ethos are in Table 1.The general planning approach was as follows.PTV and critical OAR (spinal cord, brainstem) goals were assigned higher priority (P1-P2), and other OAR goals were assigned lower priority (P3-P4).To minimize clinical goals, only the most important goal for each OAR was used, and goals for OARs intersecting or very close to PTVs that may cause conflict and prevent the IOE from generating a clinically acceptable plan were excluded altogether (these were decided on a case-by-case basis by identifying regions within PTVs that had unacceptably high hot spots or appreciable under-coverage and were in close proximity to OARs with conflicting objectives).For hotspot control, a goal of D 0.03cc ≤ 105% was used when planning, with 107% acceptable variation, which was based on what was found to be achievable for H&N plans using a similar dose calculation algorithm. 25dditional structures and goals were introduced to control undesired hotspots in PTV high, medium and low, such as V 105% ≤ 5% with intermediate priority, as well as additional goals (D 0.03cc ≤ 105% and V 105% ≤ 5%) to cropped structures of PTV Med, Low defined as follow: PTV Med and/or PTV Low were cropped a certain distance from PTV High .This distance was determined based on the prescription dose to PTV Med and/or PTV Low relative to the prescribed dose to PTV High .This distance was either 1.5 or 2.5 cm away from PTV High for prescribed doses of 90% or 80% relative to that of PTV High , respectively (Table 2).To limit hotspots outside of the targets, additional cropped body structures from all PTVs were created and assigned a goal of D 0.03cc < PTV High prescription dose.Anatomically-derived helper structures for isodose shaping, such as low dose in the posterior neck and oral cavity, were defined with lower priority goals (Table 2).For the posterior block, a margin around the union of the brainstem and the spinal cord was taken 9 cm laterally and posteriorly with the resulting structure then cropped 1.5 cm away from the target.For the anterior block, the oral cavity was cropped 1.5 cm away from the target.It is critical to note that the above helper structures must be anatomically derived (i.e., not edited manually as is common practice in more standard optimization-based planning approaches) so that these helper structures adapt appropriately during the oART process (Figure 1).In Ethos version 1.1, plan normalization must be set prior to plan generation.This was used sparingly to normalize to PTV High D min,0.03cc= 95% in cases that required this to ensure appropriate target coverage.Note, D min,0.03ccrefers to the minimum dose that covers all but 0.03cc of a given structure.However, the initial plan normalization was applied to subsequent adapted plans as well (unless a plan revision was created offline to remove plan normalization) and thus could lead to unwanted plan degradation, depending on anatomical changes during online adaptation.In technical structures, a separate module in Ethos, density correction with manual override of any desired area was possible.For a subset of cases with particularly complex targets, air in/near the PTV was manually contoured and overridden to water density in order to aid dose calculation and optimization.
In Ethos, once clinical goals have been defined, the user moves on to dose preview.Here, the IOE and PO generate preliminary dose calculation for a generic 9field beam geometry based on Fourier transform (FT) dose calculation (rather than final plan beam geometries and Acuros XB, respectively).This provides a rapid, albeit less accurate, dose calculation with the achieved dosimetric goals, isodose lines and dosevolume histogram (DVH).In dose preview, fine tuning is available by re-ordering clinical goals within each level, with real-time dose calculation to observe interplay between different clinical goals.Once clinical goal prioritization was approved within dose preview, the IOE generated up to five candidate plans in plan preview (7/9/12 equally spaced fixed-field IMRT and 2/3 full-arc VMAT), from which the user may select the optimal initial plan and associated clinical goals for use in online plan adaptation.
Different planning approaches were tested on SIB plans due to the complexity of such plans (e.g., varying clinical goals and priorities, including/excluding goals for OARs intersecting PTVs, etc.).The approach that resulted in clinically acceptable SIB Ethos plans was used in this study to plan all patients.Ethos plans were assessed based on our institutional guidelines, since both clinical and Ethos plans were planned and approved based on these guidelines.To provide objective plan quality evaluation, an established plan quality metric (PQM) was used. 26This PQM is based on specific metrics such as DVH points with their corresponding score functions that translate the achieved goal to a numerical score.The score functions have a failure region with zero score, a transition region from the minimum acceptable achieved goal to the ideal goal with an increasing score from zero to maximum, and a region exceeding the ideal goal with a maximum score.The composite PQM (%), which represents the sum of all metric scores divided by the combined maximum possible, was calculated for Ethos-initial and Clinical-initial plans in PlanIQ (v.2.1, Sun Nuclear Corp, Melbourne, FL).
The version of Ethos used did not have tools to sum multiple plans (e.g., base plus boost composite dose) or compare multiple plans simultaneously; therefore, all SEQ plans (base and boost) were exported to Eclipse (Varian Medical Systems, Palo Alto, CA) where dosimetric data were analyzed.The Ethos plans TA B L E 2 List of additional anatomically-derived helper structures with their derivation, associated planning goals and priorities used in the ethos treatment management system for hotspot control and isodose shaping."PTVs" is the combination of all targets.
Helper structure name
Helper structure function: Helper structure derivation Goal Priority F I G U R E 1 Illustration of the anatomically-derived helper structures described in Table 2 for isodose shaping and hotspot control.PTV High (red), PTV Low (blue), union of brainstem and spinal cord PRVs (green), and anatomically-derived helper structures (yellow).
were calculated using the Acuros XB linear Boltzmann transport equation (LBTE) solver, whereas the clinical plans were generated using Pinnacle (Philips Medical Systems, Fitchburg, WI, USA) collapsed cone (CC) superposition/convolution.
Online adaptation
Online Adaptive RT was simulated in Ethos for all patients using retrospective simulation-CT, originally used clinically for offline adaptation, to represent the patient's online CBCT (referred to as "pseudo-CBCT" in this work) at a time point during a treatment course where adaptation was required due to significant changes in patient anatomy.This clinical offline adaptive simulation-CT was cropped craniocaudally to match the length of the expected CBCT (24.5 cm) based on maximum length in a single plan with single isocenter as well as laterally to place the "image acquisition isocenter" (set by the emulator as the center of the image dataset) in a location approximating the anatomical location of the isocenter from the initial Ethos plan (which was automatically placed at the centroid of the box encompassing all targets).This approximation of where the CBCT image acquisition center might be in a true clinical scenario is important since, for the Ethos adapted plans, the treatment isocenter will be defined as the image acquisition center.For Scheduled plans, a rigid shift will be determined to place the treatment isocenter at the center of the box surrounding the new targets, in order to approximately match the initial plan isocenter.Following "image acquisition" (i.e., loading of pseudo-CBCT into the emulator), influencer structures such as parotids, mandible, and spinal canal were used for the structure-guided DIR and deformable propagation of target(s)/OARs from initial simulation-CT to pseudo-CBCT.These influencer structures were autogenerated (initial global DIR) for review/edit followed by auto-generation of targets as well as OARs with priority 1 or 2 (via influencer-guided DIR) for physician review/edit.Structures with lower priority (3 and 4) were not available to review/edit during the oART workflow.In this H&N adaptive workflow (Ethos v1.1), structure autosegmentation is not AI-based, unlike other anatomical sites such as the abdomen and pelvis.Only underived targets such as GTV and/or CTV Med/Low that were used to derive PTV were auto-generated for review/editing.PTVs and PRVs were adaptively auto-expanded, as specified in the initial plan derivation rules, based on relevant OAR/GTV/CTV structures of the day.The adapted plan utilizes the same beam geometry selected for the initial plan but beam intensities are re-optimized based on initial clinical goals applied to structures and scan of the day.The scheduled plan is the initial plan re-calculated, without beam intensity changes, on the structures and scan of the day.The "reference" (initial plan on initial scan/contours), "scheduled", and "adapted" Ethos plans are calculated and achieved clinical goals; isodose lines/color wash and DVH are available for evaluation.Dosimetric comparison of clinical-initial versus Ethos-initial plans and Ethosscheduled versus Ethos-adapted (online) plans were performed based on institutional guidelines.Additionally, the composite PQM was calculated for the offline clinical-adapted and online Ethos-adapted plans.For SEQ plans, offline clinical-adapted plans were only available for the boost phase, therefore this PQM comparison was only evaluated for offline clinical and online Ethos adapted boost plans (see Table 3).To evaluate the logistical efficiency of online adaptation in these cases with Ethos, timing data for the five different components, from influencers to plan generation, of the adaptive workflow were collected as well.
All data gathering and statistical tests were performed on Microsoft Excel (Microsoft, Redmond, WA).Dosimetric indices for Ethos and clinical plans were compared via two-sided student paired t-test (p < 0.05 considered significant).
Initial planning
In Ethos (v1.Overriding air density to water within targets in Ethos plans was used to help decrease undesired hotspots when the optimizer was working to achieve homogeneous coverage within targets containing air.This approach was used sparingly and only after all other approaches failed to achieve clinical goals, which resulted in application to 30.0% of plans: 6/10 SIB, 2/10 SEQ base, 1/10 SEQ boost.Normalization to PTV High was often necessary to achieve the D min,0.03ccgoal in initial plans and it was used for 75% of the plans. Figure 2 shows the difference from institutional guidelines of (a) PTV and (b) OAR achieved goals for Ethos and clinical initial SIB plans.All boxplots in this study were based on the difference between the achieved goal in Ethos/clinical plans and institutional clinical goals with one acceptable variation: D 0.03cc ≤107% instead of ≤105%.This allowed variation was associated with plan quality changes observed when moving from collapsed cone calculation approaches to more rigorous algorithms. 25For most goals, the PTV and OAR dose differences from institutional guidelines in both Ethos-and clinical-initial SIB plans were comparable.However, PTV High achieved goals in Ethos were clinically unacceptable in some cases.PTV High coverage in Ethos was slightly below clinical plans with significantly higher hotspots: average V 100% = 96.4%/97.1%,D min,0.03cc= 93.7%/96.2%,and D 0.03cc = 109.26%/105.4% in Ethos and clinical plans, respectively.Ethos initial plan global hotspot was successfully placed within the targets as verified by ensuring a lower normal tissue (Body-PTVs) D 0.03cc with an average of 107.44% in all plans except for one plan with 3.5 cGy higher hotspot outside of the PTV High .OARs in both plans were comparable with a trend towards slightly higher doses in Ethos plans, but with no statistical significance.The difference was statistically significant (favorable for clinical versus Ethos plans) for PTV High D 0.03cc (p = 0.002) and PTV Low V 100% (p = 0.025).The statistically significant difference seen for PTV Low V 100% (as well as the other trends above that were not statistically significant) was mainly due to the automated planning paradigm in Ethos.The traditional approaches in conventional planning (i.e., that used for clinical plan generation) that allow for manual manipulation of small regions of unwanted high or low doses is no longer possible within the Ethos planning paradigm, nor is it practical within the online adaptive setting.This results in slightly diminished plan results for metrics that are governed by changes in dose to smaller volumes.
Figure 3 shows (a) PTV and (b) OAR dose differences from institutional guidelines for Ethos and clinical initial SEQ plans.Ethos initial SEQ plans were high quality with comparable PTV coverage (higher for some goals) with acceptable hotspots (average D 0.03cc 105.5% versus 104.3% in Ethos versus clinical plans).The use of the normal tissue high dose helper structure (Table 2) was found necessary to ensure that the highest dose (global hotspot) was inside the PTV.This was verified by ensuring that D 0.03cc was higher for PTV High than normal tissue (Body-PTVs) in all plans except for one plan with a negligibly higher hotspot (6.6 cGy for the entire course) outside of the PTV High , with average achieved normal tissue D 0.03cc of 104.5%.Both Ethos and clinical initial SEQ plans similarly spared OARs, except for spinal cord PRV where the average dose was significantly higher in Ethos plans (D 0.03cc = 42 Gy vs. 36 Gy in clinical plans) but still below the guideline (50 Gy).The difference was statistically significant (favorable for clinical versus Ethos initial SEQ plans) for PTV High D 0.03cc (p = 0.03) and PTV High V 100% (p = 0.02).
The composite PQM statistics are shown in Table 3 for the Ethos/clinical-initial SIB/SEQ plans.Higher PQM signifies higher target coverage and better OAR sparing.Both Ethos-initial and clinical-initial SIB and SEQ plans had comparable PQM averages (slightly higher in clinical plans due to specific dose metrics illustrated in Figure 2 and Figure 3, with no statistically significant differences).
Online adapted plans
Online adapted plans were generated by simulating the online adaptive workflow with physicist and physician participation for a single session in the Ethos emulator as described above for all 30 plans.Adapted plans were approved over scheduled plans for 29 of 30 instances.One scheduled base plan was selected to avoid undesired hotspot placement (in the mucosa) in the adapted plan at the cost of reduced target conformality (i.e., higher OAR doses).
Figure 4 shows the dose color wash of the SEQ base reference, scheduled and adapted plans for a representative patient.This example illustrates the benefits of online plan adaptation.Due to response and subsequent target shrinkage, the dose in the scheduled plan was no longer conformal to or adequately covering targets as in the original reference plan.This results in a suboptimal plan with unnecessary high dose spill into adjacent mucosa as well as medium dose (i.e., 40% of the PTV High prescription dose, equivalent to 26 Gy) to nearby parotid glands that could potentially result in toxicity (e.g., xerostomia).By comparison, the adapted plan, reoptimized to the anatomy of the day with the clinical goals used to generate the reference plan, successfully rectifies these issues by maintaining desired target conformality, improved OAR avoidance, and improves homogeneity (D max < 105%) to even be superior to that of the reference plan.
Composite PQM (%) SIB plans SEQ plans
Figure 5 shows the dosimetric benefits of online adaptation with Ethos via (a) PTV and (b) OAR dose differences from institutional guidelines in Ethos-scheduled and Ethos-adapted SIB plans.Adaptation was quite useful as scheduled plans in many cases did not achieve PTV clinical goals, with PTV under-coverage and high hotspots on average: PTV High D min,0.03cc= 80.6%/95.7%,PTV Med D min,0.03cc= 64.8%/96.9%,PTV Low D min,0.03cc= 73.4%/96.7%,and PTV High D 0.03cc = 111.9%/108.0% in Ethos-scheduled/adapted plans, respectively.Normal tissue (Body-PTVs) average D 0.03cc was significantly higher in scheduled plans (111.4%)compared to adapted plans (106.5%) and was lower than the PTV High D 0.03cc (i.e., global hotspot is in the PTVs) for 8/10 scheduled plans (average 35 cGy difference, for the full course, outside but proximal to PTV High ) and 10/10 adapted plans.Overall, OAR doses trended higher in scheduled versus adapted plans, but only SpinalCord_PRV05 D 0.03cc was found to be statistically significant (p = 0.05).Overall, adapted SIB plans succeeded in achieving 100% of all PTV goals with F I G U R E 4 Dose color wash of the reference, scheduled and adapted Ethos plans for a representative SEQ base plan.The top and bottom rows show different slices with different dose colorwash ranges highlighting target coverage and OAR doses (e.g., parotid goal D mean < 26 Gy here would be the 40% dose), respectively.Contours shown are: PTV high (red), PTV low (blue), right parotid (green), and left parotid (orange).Note, compared to the reference plan, the adapted contour for PTV high (red) in columns 2 and 3 has decreased in size, resulting in over/under-coverage and undesirable hotspots in the scheduled plan.Additionally, the scheduled plan shows the 40% dose spilling further into the left parotid.By comparison, the adapted plan provides conformality to the changed targets as well as better sparing of OARs and reduced heterogeneity (D max < 105%).
superior OAR sparing compared to scheduled plans by an average improvement of 9% (range: 2−18%) lower dose across all specific OAR goals.In addition to inferior OAR sparing, scheduled SIB plans only achieved 14% of PTV goals across all plans.
Similar dosimetric benefits from online adaptation with Ethos in the SEQ plans are shown via dose difference from institutional guidelines for SEQ scheduled and adapted plans, which were evaluated separately for individual base and boost plans (Figure 6 and Figure 7, respectively), rather than for a composite.This represents how these plans would be evaluated and selected in a clinical online adaptive setting based on meeting goals independently.Adapted base and boost plans had higher PTV coverage while maintaining lower OAR doses and lower global hotspots, with the PTV High D 0.03cc goal more consistently achieved in adapted plans.Adapted plans better spared normal tissue with D 0.03cc outside of PTVs of 107.5% versus 104.9% and 109.4% versus 104.2%, for scheduled versus adapted in base and boost plans, respectively.For some plans slightly higher hotspot was outside but proximal to PTV High with improvement from scheduled to adapted plans of 40% to 80% in the base and 50% to 90% in the boost plans, respectively.The D 0.03cc outside of the PTV High was marginally higher in these cases and decreased from the scheduled to adapted plans (from an average of 54.7 to 48.3 cGy and 60.0 to 6.0 cGy in the base and boost plans for the full phase, respectively).The difference between scheduled and adapted plans was statistically significant for PTV High D 0.03cc (p base = 0.01, p boost = 0.01), PTV High D min,0.03cc(p base = 0.01), PTV Low D min,0.03cc(p base = 0.01), and SpinalCord_PRV05 D 0.03cc (p base = 0.02).These statistically significant differences are indicated by asterisks in Figure 6 and Figure 7. Adapted SEQ plans succeeded in achieving 100% of PTV goals across all plans, whereas the scheduled plans only achieved 33% of these goals.Additionally, SEQ plans improved OAR sparing as assessed across all specific goals by an average of 10% (range: 2%−21%) lower dose versus scheduled plans.
The average composite PQM for Ethos-and clinicaladapted SIB and SEQ (boost only) plans are shown in Table 3.All adapted plans had higher average composite PQM compared to the initial plans.The average composite PQM for both initial and adapted Ethos plans was comparable (no significant difference) but trended slightly lower compared to clinical plans.Only adapted boost plans for SEQ patients were compared here because most patients in this dataset only had boost plans adapted clinically.Timing data was collected for the five components of the adaptive workflow simulated within the emulator software, from influencer generation to plan generation (Figure 8).The average total duration for SEQ boost, SEQ base and SIB plans was 14:02, 22:56, and 30:14 (min:sec), respectively.Time to generate influencers and targets was the shortest and most consistent for all plans (on average 1:19 and 1:00, respectively).The duration of editing influencers was consistent (5:14) for SEQ base and SIB plans and was longer compared to SEQ boost plans (1:58).However, there was no difference in the influencers that were being edited for these three plan types.Therefore, this discrepancy was due to the physician learning curve and comfort level increasing between SIB/SEQ base plan adaptative sessions, which were performed first and SEQ boost plan adaptative sessions, which were performed last.The improvement in the duration to edit influencers by 3:36 was due to the user's experience.Duration to edit targets varied as a function of increasing target complexity for SEQ boost, SEQ base and SIB plans (on average 5:27, 10:10 and 16:49, respectively).The duration for plan generation for SIB and SEQ boost include both fixed field IMRT and full arc VMAT plans and was on average 5:43 and 4:32, respectively.However, full arc VMAT plan generation was longer than that of fixed field IMRT, with an average of 11:55 compared to 3:59 in the IMRT plans.
DISCUSSION
The Ethos, CBCT-guided ring-gantry oART system provides a platform with semi-automated planning, recontouring and re-planning for efficient adaptation of the treatment plan to the anatomy of the day.In this study, rather than investigating daily adaptation, or adaptation with a pre-defined temporal sampling (e.g., weekly, every n-fractions, beginning/middle/end of treatment, etc.), as has been done in previous studies (e.g., 20 ), we considered the extreme scenario with H&N patients known to clinically require offline adaptation due to appreciable changes in anatomy.This allowed us to evaluate the limits of Ethos for online adaptive H&N treatments with the expectation that actual average daily variations in patient anatomy will be less dramatic.Therefore, the promising results seen in this work would imply Ethos could successfully be used for oART in H&N for most clinical scenarios.In this work, an initial treatment planning approach for H&N cancer, using the new clinical goal planning paradigm that is robust to plan adapta-tion was developed.An Ethos emulator was utilized for initial planning and oART simulations in silico.Different planning approaches were developed and tested on more complex SIB plans.Our first planning strategy, simply using all institutional guidelines as direct input for the Ethos clinical goals and/or excluding a subset of goals for OARs intersecting PTVs, failed to achieve acceptable initial plans.Our subsequent strategy, incorporating cropping of OARs intersecting PTVs (rather than relying on IOE auto-cropping) and using minimal goals per OAR was found to be superior.All presented dosimetric results were based on using this approach with additional anatomically derived helper structures and goals for dose shaping (see Figure 1 and Table 2) and PTV air density override (if deemed reasonable and necessary).Similar density overrides are used in our clinic for standard H&N patient plans with the justification being: if gross disease moves within the uncertainty margin represented by the PTV to a region containing air in the simulation-CT, the air at this location will have been replaced by tissue.The generated plans were more robust to variations in electron density abutting gross disease and within PTV margins.Ethos v1.1 does not allow importing density overridden structures, requiring manual delineation in an Ethos module (technical structures) separate from the target/OAR structure contouring,where it was not possible to see targets while drawing the overridden density structure.Furthermore, overridden density structures were not visible/editable during the adaptive workflow,instead a deformable prop-agation of the structures is performed.Future Ethos versions more robustly integrating technical structures during initial planning and inclusion of tools for density map verification/correction during online adaptation would be beneficial.
In this study, clinical plans were calculated based on collapsed cone convolution superposition, whereas Ethos used the more precise Acuros XB.Collapsed cone versus Monte Carlo (comparable to the Acuros XB) calculations of identical plans showed increased hot spots for Monte Carlo. 25,27,28For this reason, D 0.03cc ≤ 107% (vs.105%) was used in this work.Ethosgenerated initial SIB plans struggled to meet the strict institutional guidelines for PTV homogeneity.Aside from this, most PTV coverage and OAR goals were clinically acceptable.For institutions using less stringent guidelines (e.g., those suggested in the cooperative group setting, such as NRG 29 ) would encounter fewer issues.However, while the Ethos initial SIB plans struggled to achieve clinical acceptability, Ethos adapted SIB plans were more successful in achieving clinical goals with conformal/homogeneous target coverage while sparing OARs.This dosimetric improvement was most likely due to target size and complexity changes at the adaptive time point, which simplified plan optimization.Initial SEQ plans were comparable to clinical plans and in general met institutional guidelines.While superior manual plans could be achieved based on these comparisons, this approach requires more planning time/resources and would not allow for rapid online adaptation and superior daily dosimetry to changing anatomy.It is also worth noting that the capability to adjust plans to the anatomy of the day with oART decreases the uncertainties in daily anatomical variations that are accounted for traditionally with larger PTV margins.The Ethos plans in this work, used the same PTV margins as were used clinically without oART.In practice, these PTV margins can be confidently reduced when utilizing oART, as has been implemented in oART clinical trials. 30ixed-field IMRT plans were selected over VMAT plans 90.0% of the time in this work.This may change if future versions of Ethos enhance the VMAT optimizer.Fixed-field IMRT plans were also favored in v1.1 due to faster optimization versus VMAT plans with an average plan generation of 3:59 versus 11:55.This acceleration in the adaptive workflow was also found in another study due to the faster fixed field IMRT optimization. 18thos generated highly modulated IMRT plans (average MUs per prescribed fractional dose of 6.8 ± 2.1 MU/cGy for all plans).However, this is not expected to be an issue since similarly modulated Ethos plans have been observed and validated for delivery accuracy in previous studies. 18,31he institutional guidelines used for Ethos plan generation and evaluation in this work facilitated comparison with clinical plans but may make the results slightly less generalizable for other institutions.However, the institutional guidelines were mostly consistent with the established cooperative group metric NRG HN-005 guidelines and, where they differed, were consistently more conservative. 29Use of the established PQM for plan evaluation helped to further ensure generalizability of the results.The average composite PQM represented overall achieved plan quality using objective scoring rather than subjective plan quality evaluation.For SEQ initial plans, individual Ethos plan metrics were superior compared to those for the clinical plans in many instances (Figure 3).However, the average composite PQM was higher in clinical plans.This was mostly due to the strict score function where, for example, a score of 0.0 would be assigned for PTV High V 100% = 94.9% (goal of 95.0%).It should also be noted that, due to the fact that patients selected for this work were known to have anatomical changes during treatment that were significant enough to require offline adaptation, it is an expected result that Ethos adapted plan quality will be superior to Ethos scheduled plan quality.This is still shown explicitly in Figures 5-7 not only to highlight that the Ethos adapted plans do well to achieve clin-ical goals in the face of such appreciable anatomic changes, but also to illustrate, via direct comparison to the scheduled plan results in these figures, the potential dosimetric manifestation of these anatomic changes that are avoided with online adaptation.
Influencers for all three plan types were the same, namely, mandible, parotids, spinal canal, and brainstem.The difference in time to edit influencers (Figure 8) was most likely a function of physician experience and the order that oART was simulated for these plans.The average of all influencer editing times, for all three plan types, may be more representative of an average experienced user.Auto-generated influencers and targets were manually edited by physicians in all cases with average durations of 4:30 and 10:49 (min: sec), respectively.The incidence of manual edits and the amount of time required may represent an upper limit as this work presented the extreme situation where anatomical variation was known to be sufficient to have clinically required offline adaptation.Note, the emulator computer hardware was not identical to that of the clinical system and therefore the time to generate structures and plans reported herein may be diminished.
Patient setup, CBCT image acquisition, plan QA and beam delivery timing were not included in this emulator work.Median online adaptive duration after CBCT acquisition to plan selection was 18:56 (min: sec) and was similar to what Yoon et al. found (19:34).Missing steps were quantified by Yoon et al. to be image acquisition (1.6 min), QA (2-3 min), and treatment delivery (1-2 min). 20Further improvements in the IOE/PO, auto-contouring and the contouring tools will further accelerate this online adaptive process from end-to-end.
A limitation of this work was the use of offline adaptive simulation CT scans as pseudo-CBCTs, which do not incorporate uncertainties associated with degraded imaging quality between fan-beam CT and CBCT.This could include first-order effects such as decreased contouring accuracy.This may also include second-order effects, as the use of the offline adaptive simulation CT with higher quality and more consistent image intensities with the initial simulation CT (compared to the CBCT, which will be used in a real clinical scenario) in the adaptive workflow affects the DIR results.This would in turn affect generation of the synthetic CT and auto-generated structures as well as dose calculation.This may diminish the accuracy of some results presented in this work compared to what will be observed in the true clinical scenario.However, advances in CBCT acquisition/reconstruction techniques now provide image quality approaching that of fan-beam CT. 32 Manual contour editing was found to almost always be necessary, as was found in other studies. 20Hence, autocontouring improvements could reduce oART treatment duration by up to ∼15 min (∼4.5 min for influencers and ∼11 min for targets).Another study stratified time to manually edit contours based on the quality of the auto-contoured starting point for H&N oART with Ethos and found a decrease of treatment duration by ∼10 min (∼5 and ∼4.5 min for influencers and targets, respectively) when initial auto-contours were high quality, which may represent a more realistic potential time savings. 20ART, only underived targets (e.g., GTV and/or CTV Med/Low used to derive PTV) were editable.Clinical CTVs derived from GTVs were routinely edited manually to avoid areas with no microscopic disease spread (e.g., air, bone, fascia or muscle).This was not possible in Ethos, where a consistent 5 mm margin around the GTV was maintained, which led to differences between Ethos and clinical-adapted CTVs/PTVs.Initial plan normalization was used for 75% of the plans to achieve the PTV High D min,0.03ccgoal.However, within Ethos version 1.1 this normalization must be set prior to plan generation and will be applied to all online adapted plans.This might result in unwanted adapted plan degradation, especially in the extremely homogeneous H&N plans where a very small normalization change could produce a dramatic difference in target coverage.Consequently, avoiding normalization is preferable.In future versions,tools during oART to adjust normalization as a final step will be useful.
CONCLUSION
This study developed an initial planning strategy for H&N cancer with the new Ethos online adaptive system which, when compared to clinical plans generated with traditional planning approaches, resulted in acceptable plans with slightly lower quality for Ethos-initial SIB plans and comparable quality for Ethos-initial SEQ plans.In the current version, fixed-field IMRT plans (versus VMAT plans) were found to be superior in quality and in optimization efficiency for the online adaptive workflow.Online adaptive sessions were successfully simulated in a time-efficient manner for all plans, with adapted plans approved over scheduled plans in 97% of cases for achieving all PTVs goals (while scheduled plans would only have met 33%) with improvement in OAR sparing.These successful dosimetric results when using standard PTV margins suggest a promising potential to improve personalized treatments for patients in the future by implementing reduced margins as enabled by the reduced uncertainty associated with online adaptation.Ethos oART allows for daily adaptation incorporating small changes in patient anatomy.This work tested the capabilities and robustness of this system by considering the extreme scenario where anatomical changes were significant enough to have clinically required offline adaptation.The success of Ethos adapted plans in adjusting the dose to these major changes in anatomy will ensure success of the daily online adaptation in most expected clinical scenarios.
This AI-based online adaptive platform is a promising and powerful tool that will allow the generation of semi-automated initial plans with the option of daily adaptation to ensure accurate and efficient treatment delivery by adjusting the treatment plan to changing patient anatomy.This is the first version of this novel system and continued development, particularly in VMAT optimization, automated contouring and CBCT image quality, will help facilitate more personalized daily treatments for H&N cancer patients.
AU T H O R C O N T R I B U T I O N S
All of the above listed authors contributed directly to the intellectual content of the paper including work design and acquisition of data, writing/editing the manuscript, and final approval of this version.
AC K N OW L E D G M E N T S
This work was supported and funded by Varian Medical Systems, Palo Alto, CA.The authors would like to thank the dosimetrists at the Moffitt Cancer Center: Mark Bowers and Kevin Greco, for sharing their planning expertise as well as Sellakumar Palanisamy for his help and guidance with the Ethos emulator system.
F I G U R E 2
SIB initial plan dosimetric evaluation.Distributions of (a) PTV (b) OAR dose differences between achieved values and institutional guidelines for Ethos (blue) and clinical (green) initial plans.The shaded areas indicate that a goal was not met.*Statistically significant difference (p < 0.05).
F I G U R E 3
SEQ initial plan dosimetric evaluation.Distributions of (a) PTV (b) OAR dose differences between achieved values and institutional guidelines for Ethos (blue) and clinical (green) SEQ composite (base + boost) initial plans.The shaded areas indicate that a goal was not met.*Statistically significant difference (p < 0.05).TA B L E 3 Composite PQM (%) statistics for the initial and adapted Ethos and clinical plans.
F I G U R E 5
SIB adapted plan dosimetric evaluation.Distributions of (a) PTV and (b) OAR dose differences between achieved values and institutional guidelines for Ethos Scheduled (blue) and Ethos Adapted (green) SIB plans.The shaded areas indicate that a goal was not met.*Statistically significant difference (p < 0.05).
F I G U R E 6
SEQ base phase adapted plan dosimetric evaluation.Distributions of (a) PTV (b) OAR dose differences between achieved values and institutional guidelines for Ethos Scheduled (blue) and Ethos Adapted (green) SEQ base plans.The shaded areas indicate that a goal was not met.*Statistically significant difference (p < 0.05).
F I G U R E 7
SEQ boost phase adapted plan dosimetric evaluation.Distributions of (a) PTV (b) OAR dose differences between achieved values and institutional guidelines for Ethos Scheduled (blue) and Ethos Adapted (green) SEQ boost plans.The shaded areas indicate that a goal was not met.*Statistically significant difference (p < 0.05).F I G U R E 8 Duration (min: sec) of each component of the oART process for SEQ base, boost, and SIB plans, which were included in the emulator-based workflow.
TA B L E 1
List of clinical goals and their corresponding priority for the initial planning approach in Ethos treatment management system based on institutional guidelines. | 10,235.4 | 2023-08-24T00:00:00.000 | [
"Medicine",
"Physics"
] |
Analysis and forecasting of crude oil price based on the variable selection-LSTM integrated model
In recent years, the crude oil market has entered a new period of development and the core influence factors of crude oil have also been a change. Thus, we develop a new research framework for core influence factors selection and forecasting. Firstly, this paper assesses and selects core influence factors with the elastic-net regularized generalized linear Model (GLMNET), spike-slab lasso method, and Bayesian model average (BMA). Secondly, the new machine learning method long short-term Memory Network (LSTM) is developed for crude oil price forecasting. Then six different forecasting techniques, random walk (RW), autoregressive integrated moving average models (ARMA), elman neural Networks (ENN), ELM Neural Networks (EL), walvet neural networks (WNN) and generalized regression neural network Models (GRNN) were used to forecast the price. Finally, we compare and analyze the different results with root mean squared error (RMSE), mean absolute percentage error (MAPE), directional symmetry (DS). Our empirical results show that the variable selection-LSTM method outperforms the benchmark methods in both level and directional forecasting accuracy.
Introduction
Since 2014, the international crude oil price has experienced the most significant volatility since the 2018 financial crisis. The oil market has taken on new features that affect the development of the global economy, national strategic security, and investor sentiment significantly. Especially as the primary alternative energy resources, the US tight oil production has been significant macroeconomic effects on the oil price (Kilian, 2017). In 2014, US shale oil producers plundered market share, leading to a change in global crude oil supply and demand balance. According to EIA, US shale oil production increased from 4.96 million barrels per day in 2017 to 5.59 million barrels per day in 2022. In addition, there are geopolitical events, trade frictions, and OPEC's agreement have occurred in recent years, causing the volatility of oil price. The internal and external environment of the oil market is changing, and the influencing factors have become diverse and complex. As the factors affecting international oil prices become more and more complex, it becomes difficult to capture practical factors and predict oil prices. Many past kinds of literature about crude oil price forecasting show that the forecasting results are sensitive to the modeling sample data frequency and data interval selection (Yu et al., 2019;Yu et al., 2008a;Zhang et al., 2015). As a strategic resource, crude oil plays a vital role in national energy security.
Meanwhile, with the financial properties of crude oil strengthened gradually, the volatility of crude oil prices is bound to affect oil companies' earnings and investor behavior. Therefore, systematic analysis of the characteristics of complex international oil markets and accurate capture of the new trend in international oil prices are critical. However, as the linkage between the markets, the uncertainty of the world economy and energy, the influence factors of oil price have become complex. It is difficult to point out which factors have the dominant effect on the oil price. If all possible oil price factors are added into the existing forecast model, it may lead to over-fitting problems, which will affect the forecast results. How to forecast crude oil prices in a new and effective method is one problem that academics and practitioners are very concerned about all the time. It can provide reference and theoretical support for the formulation of national energy security strategy and enterprise avoidance of market risks. To better analyze the changing trend of the crude oil market, it is necessary to determine the main factors affecting the price, determine the impact of each factor on price, and establish a forecasting model finally.
The research on the prediction of international oil price has always been a hot topic. A large number of papers with theoretical and practical application value have appeared. We make a simple review from two aspects of influencing factors and forecasting methods as follow:
Influencing factor
Most of the research has divided the influence factors of crude oil price into supply and demand, finance factor, technology (Hamilton, 2009a;Kilian & Murphy, 2014;Zhang et al., 2017;Wang et al., 2015;Tang et al., 2012).
Supply and demand
As the fundamental factor, supply and demand have been the main factors affecting oil prices. Supply and demand changes have always been the fundamental factors affecting the long-term trend of oil prices. (Hamilton, 2009b) analyzed the drivers of oil prices and argued that the main reason for the rise in oil prices in 2007-2008 was the global demand for production. (Kilian, 2009) developed a structural VAR model to explain the global crude oil price fluctuation and understand the reaction of the US economy associated with oil price volatility. The crude oil price was decomposed into three components: crude oil supply shock, the shocks to the global demand for all industrial commodities, and the demand shock to the global crude oil market. However, in recent years, with the development of alternative energy sources, the worldwide supply and demand structure of crude oil has changed. (Kilian, 2017) reported the increased U.S. tight oil production not only reduced demand for oil in the rest of the world and lowering the Brent oil price but also caused other countries to cut back on their oil imports, lowering global oil demand.
Global economic development
Global economic development is a manifestation of supply and demand (Doroodian & Boyd, 2003;Sadorsky, 1999;Barsky & Kilian, 2001). (Kilian & Hicks, 2013) measured the global demand shock directly by correcting the real gross domestic product (GDP) growth forecast. The results showed that the forecast was associated with unexpected growth in emerging economies during the 2003 to 2008 period. These surprises were associated with a hump-shaped response of the real price of oil that reaches its peak after 12-16 months. The global real economic activity has always been considered to impact the changes in oil price significantly. (Özbek & Özlale, 2010) researched the relationship between global economic and oil prices with trend and cycle decomposition. They found that economic shock has a lasting effect on oil prices, which were considered mainly to be supply-side driven.
Financial factor
In addition to commodity attributes, crude oil also has financial attributes. The longterm trend of crude oil price is determined by the commodity attributes, which are affected by the supply and demand factors generated by the real economy; the shortterm fluctuations of crude oil price are determined by the financial attributes, which are influenced by market expectations and speculative transactions. The financial factor mainly includes speculation factor, exchange rate and some other financial index, which connect the stock market and monetary market with the crude oil price (Narayan et al., 2010;Zhang, 2013;Reboredo, 2012;Cifarelli & Paladino, 2010). (Kilian & Murphy, 2014) developed a structural model to estimate the speculative component of oil price through the inventory data and found it played an important role during earlier oil price shock periods, including 1979including , 1986including and 1990including . (Sari et al., 2010 estimated the comovement and information transmission among oil price, exchange rate and the spot prices of four precious metals (gold, silver, platinum, and palladium). Investors could diversify their investment risk by investing in precious metals, oil, and euros.
Technology factor
The Crack spread is defined as the price difference between crude oil and its refined oil, reflecting the supply and demand relationship between the crude oil market and its refined product market . (Murat & Tokat, 2009) used the random walk model (RWM) as a benchmark to compare the crack spread futures and crude oil futures and found the crack future could forecast the movements of oil spot price as reasonable as the crude oil futures. (Baumeister et al., 2013) selected crack spread as one of the variables to forecast crude oil prices, and the studies suggested it was an influential prediction factor.
Forecast method
Except for the influence factors, researchers are also very concerned about the forecast methods for improving forecast accuracy. The four main forecast method categories: time series models, econometric models, qualitative methods and artificial intelligence techniques are used in oil price modeling and forecasting (Wang et al., 2016;Charles & Darné, 2017;Yu et al., 2015;Sun et al., 2019;Suganthi & Samuel, 2012;Zhang et al., 2008;Valgaev et al., 2020). The autoregressive integrated moving average (ARIMA) and exponential smoothing (ETS) are the most widely used time series forecasting model, and they are usually used as the benchmark models (Wang et al., 2018;Chai et al., 2018;Zhu et al., 2017). In addition, the econometric models and qualitative methods like the generalized autoregressive conditional heteroskedastic model (GARCH), the vector autoregression model (VAR), the state-space models and the threshold models are also widely used (Kilian, 2010;Wang & Sun, 2017;Zhang & Wei, 2011;Ji & Fan, 2016;Drachal, 2016).
However, with the increasing of the data volume and influence factors complex, traditional models failed in predicting accurately. The machine learning forecasting method presents its superiority and mostly outperform traditional approaches when tested with empirical results, especially in dealing with the nonlinear problem and short-term prediction. Such as support vector machines (SVM), artificial neural networks (ANNs), genetic algorithms (GA) and wavelet analysis are introduced into oil price forecasting in recent years. For example, (Zhao et al., 2017) proposed the stacked denoising autoencoders model (SDAE) for oil price forecasting. Empirical results of the proposed method proved the effectiveness and robustness statistically. (Xiong et al., 2013) developed an integrated model EMD-FNN-SBN, which is the empirical mode decomposition (EMD) based on the feed-forward neural network (FNN) modeling framework incorporating the slope-based method (SBM). The results indicate this model using the (multiple-input multiple-output) MIMO strategy is the best in prediction accuracy.
During the last decades, more and more factors and models have been introduced, estimated and validated. Several different factors can address the oil price forecasting problem from the empirical and theoretical vision. Many researchers always select general factors and models directly, regardless of which indicators are the actual core variables. Especially with the expansion of data and quantitative indicators, variable selection becomes more and more critical. In recent years, there are some papers begin to extract core factors before forecasting. Even though there are some variable selection processes in some machine learning methods, they are all nested in the forecasting and just for the robustness of the model (Drezga & Rahman, 1998;May et al., 2008;Korobilis, 2013;Huang et al., 2014). There are fewer papers devoted to variable selection before predicting. For example, (Chai et al., 2018) used the Bayesian model average method for influence variable selection before establishing the oil price forecasting model. (Zhang et al., 2019) accurately screen out the main factors affecting oil price by using an elastic network model and LASSO shrinkage in the case of many predictive variables but relatively few data. The main factors influencing oil price forecast are studied from the Angle of variable selection. Secondly, the accuracy and robustness of the elastic network model and the LASSO contraction method in predicting oil prices are comprehensively verified using a variety of robustness tests. The results show that the LASSO contraction and elastic network model outperforms other standard oil price forecasting models. An investor who allocates assets based on the predictions of these two methods can achieve a more substantial return than other oil price forecasting models.
In this paper, we develop an integrated model with a new machine learning method for crude oil price forecasting based on core factor selection. This paper contributes to the variable selection and machine learning method in oil price forecasting. In the process of variable selection, we introduce three approaches with different advantages for comparison analysis and forecasting, elastic-net regularized generalized linear model, Bayesian model average and spike-slab lasso method. In addition, we combine them with a new machine learning method long short-term memory (LSTM) model for oil price forecasting. Finally, random walk (RW), autoregressive integrated moving average models (ARMA), elman neural Networks (ENN), ELM Neural Networks (EL), Walvet Neural Networks (WNN) and generalized regression neural network (GRNN) Models were used to forecast the price. Finally, we compare and analyze the different results with root mean squared error (RMSE), mean absolute percentage error (MAPE), directional symmetry (DS). The research framework is shown in Fig.1.
The structure of the paper is as follows: Section 1 reviews the related literature including influencing factors and forecast methods. Section 2 introduces the data. Section 3 develops the technique. Section 4 presents the empirical analysis. Finally, Section 5 concludes the paper and outlines future work.
Dataset
According to the above literature, we selected 30 variables from supply and demand, inventory, financial market, technical aspects as the initially chosen variables (Hamilton, 2009a;Kilian & Murphy, 2014;Zhang et al., 2017;Wang et al., 2015;Tang et al., 2012), and we choose the real monthly West Texas Intermediate (WTI) crude oil price as the dependent variable. The interval of the data is from January 2000 to December 2017. For data stability, we used the return rate of data. The crude oil production, consumption structure and replacement cost as the related variables measure the supply index. The demand index includes crude oil consumption and global economic development, a measure of real global economic activity.
Meanwhile, we select the total OECD petroleum inventory, U.S. crude Oil inventory (total, SPR, and Non-SPR) as the inventory index. In addition, the related variables are selected from speculation, monetary, stock, commodity market as the financial factor index. Finally, we calculated the WTI-Brent spot price spread, actual value of the WTI crack spread and Brent crack spread: actual value as the technical indicators. In Table 1, we describe each variable and its corresponding data sources.
Theoretical background
As the crude oil market is very complex and has various uncertain determinants, we must select the core influence factors first before establishing forecasting models. The main four variable selection methods are significance test (forward and backward stepwise regression), information criteria (AIC BIC), principal component factor analysis model, lasso regression, ridge regression, and other punitive models (Castle et al., 2009). It is hard to tell which the best is because each has its own strong and weak points. Thus, we introduce three different methods to select core influence factors of crude oil price, which are elastic-net regularized generalized linear Models (GlMNET), spike-slab lasso method (SSL) and Bayesian model averaging (BMA). These three methods are effective variable selection methods and they are all improvements on the existing mature models (LASSO, Ridge regression). Thus, we use these new methods for variable selection (Zou & Hastie, 2005;Friedman et al., 2010).
Variable selection
The elastic-net regularized generalized linear Models (GLMNET) Zou and Hastie (2005) (Zou & Hastie, 2005) proposed the elastic-net method for variable selection, which is considered to be the best contraction method for handling multicollinearity and variable screening, and its loss precision is not too great. Their simulation results showed that the elastic net outperformed the Lasso in terms of prediction accuracy. Like the Lasso, the elastic net simultaneously does automatic variable selection and continuous shrinkage, and it can select groups of correlated variables. On the one hand, it achieves the purpose of ridge regression to select essential features; on the other hand, it removes features that have little influence on dependent variables, like Lasso regression, and achieves good results, especially when the sample size n is smaller than the number of predictors. (The specific formula refers to (Zou & Hastie, 2005)) In this paper, we choose the Elastic-Net Regularization Linear Model (GlMNET), which is a package that fits the Generalized Linear model by punishing maximum likelihood (Friedman et al., 2010). The regularization path of the Lasso or elastic net penalty is calculated on the value grid of the regularization parameter. The algorithm is high-speed, can make full use of the sparsity of input matrix X, and is suitable for linear, logic, polynomial, poisson and Cox regression models. It can also be applied to multi-response linear regression. These algorithms can process vast datasets and can make use of sparsity in the feature set.
Wherein the values of the grid λ cover the entire range, l(y i , η i ) is the negative logarithmic likelihood distribution of the contribution to the observed value i. For example, it is 1 2 ðy−ηÞ 2 for Gaussian. The elastic mesh penalty is controlled by α. (The specific formula refer to Friedman et al. (2010)).
Bayesian model averaging
The basic idea of the Bayesian model averaging approach (BMA) can comprehend. Each model is not fully accepted or not entirely negative (Leamer, 1978). The prior probability of each model should be assumed firstly. The posterior probability can be gain by extracting the dataset contains information as well as the reception of models to the dependent variables. The excellence of the BMA approach is not only that it can sort the influence factors according to their importance but also can calculate the posterior mean, standard deviation and other indicators of the corresponding coefficients. With the help of the Markov chain Monte Carlo method (MCMC), the weight distribution of the model according to the prior information could be estimated (Godsill et al., 2001;Green, 1995). The MCMC method can overcome the shortcoming of BIC, AIC and EM methods. (The specific formula refer to (Leamer, 1978;Merlise, 1999;Raftery et al., 1997)). It has the following three advantages: First, under different conditional probability distributions, there is no need to change the algorithm. Second, the posterior distribution of the weight and variance of BMA is considered comprehensively. Third, it can handle parameters with high BMA correlation.
Spike-slab lasso
Although model averaging can be considered a method of handling the variable selection and hypothesis testing task, only in a Bayesian context since model probabilities are required. Recently, Ročková, V. and George, E. I. introduce a new class of selfadaptive penalty functions that moving beyond the separable penalty framework based on a fully Bayes spike-and-slab formulation (Ročková & George, 2018). Spike-slab Lasso (SSL) can borrow strength across coordinates, adapt to ensemble sparsity information and exert multiplicity adjustment by non-separable Bayes penalties. Meanwhile, it is using a sequence of Laplace mixtures with an increasing spike penalty λ 0 , and keeping λ 1 fixed to a small constant. It is different from Lasso, which a sequence of single Laplace priors with an increasing penalty λ. Furthermore, it revisits deployed the EMVS procedure (an efficient EM algorithm for Bayesian model exploration with a Gaussian spike-and-slab mixture (Ročková & George, 2018)) for SSL priors, automatic variable selection through thresholding, diminished bias in the estimation, and provably faster convergence. (The specific formula refer to (Ročková & George, 2018)).
Long short-term memory network
Long short-term memory (LSTM) neural networks are a special kind of recurrent neural network (RNN). LSTM was initially introduced by Hochreiter and Schmidhuber (1997) (Hochreiter & Schmidhuber, 1997), and the primary objectives of LSTM were to model long-term dependencies and determine the optimal time lag for time series issues. In this subsection, the architecture of RNN and its LSTM for forecasting crude oil prices are introduced. We start with the primary recurrent neural network and then proceed to the LSTM neural network.
The RNN is a type of deep neural network architecture with a deep structure in the temporal dimension. It has been widely used in time series modeling. A traditional neural network assumes that all units of the input vectors are independent of each other. As a result, the conventional neural network cannot make use of sequential information. In contrast, the RNN model adds a hidden state generated by the sequential information of a time series, with the output dependent on the hidden state. Figure 2 shows an RNN model being unfolded into a full network. The mathematical symbols in Fig. 2 are as follows: 1. x t denotes the input vector at time t.
2. s t denotes the hidden state at time t, which is determined relayed on the input vector x t and the previous hidden state. Then the hidden state s t is determined as follows: is the activation function, it has many alternatives such as sigmoid function and ReLU. The initial hidden state s 0 is usually initialized to zero.
3. o t denotes the output vector at time t. It can be calculated by: 4. U and V denote the weights of the hidden layer and the output layer respectively. W denotes transition weights of the hidden state. Although RNNs simulate time series data well, these are difficult to learn long-term dependence because of the vanishing gradient issue. LSTM is an effective way to solve the vanishing gradient by using memory cells. A memory cell is consists of four units: input gate, forget gate, output gate and a self-recurrent neuron, it is shown in Fig. 3. The gate controls the interactions between the adjacent memory cells and the memory cell itself. Whether the input signal can change the state of the memory cell is controlled by the input gate. On the other hand, the output gate can control the state of the memory cell to decide whether it can change the state of other memory cells. Additionally, the forget gate can choose to remember or forget its previous state. Figure 4 illustrates the unrolled module in an LSTM network, which describes how the values of each gate are updated. The mathematical symbols in Fig. 4 are as follows: 1. x t is the input vector of the memory cell at time t.
6. f t and C t are values of the forget gate and the state of the memory cell at time t, respectively. f t and C t can be formulated as follows: . o t and h t are values of the output gate and the value of the memory cell at time t, respectively. o t and h t can be calculated as follows: The architecture of the LSTM neural network includes the number of hidden layers and the number of delays, which is the number of past data that account for training and testing. Currently, there is no rule of thumb to select the number of delays and hidden layers. In this work, the number of hidden layers and delays are set to 5 and 4 by trial and error. The back-propagation algorithm is employed to train the LSTM network and MLP neural network. The learning rate, batch size and number of epochs are 0.05, 60 and 1000, respectively. The speed of convergence is controlled by the learning rate, which is a decreasing function of time. Setting the number of epochs and the learning rate to 1000 and 0.05 can achieve the convergence of the training. The empirical result will become stable once convergence is achieved through the combinations of parameters are varied. Interesting readers may refer to (Hochreiter & Schmidhuber, 1997) for more information.
Empirical study
In this section, we compare the variable selection-LSTM integrated learning approach to the predictive performance of some benchmark models. First, section 4.1 analyzes the core influencing factors for the screening of the three feature extraction methods, and 4.2 describes the evaluation criteria and statistical tests to compare prediction accuracy. Second, section 4.3 provides an input selection and reference model parameter settings. Finally, Section 4.4 for the discussion.
Variable selection
We can see from Table 2, the elastic-net selects the most number (18 variables), followed by the SSL method (11 variables), and the BMA method includes the least number (8 variables). Meanwhile, the variable of the SSL method is a subset of the BMA method, and the BMA method is a subset of the elastic-net. Non-OPEC production, shale oil (tight oil) production, Fed Fund effective, Kilian Index, USA CPI, USA PPI: Energy, Euro PPI, USA: PMI, OECD inventory, USA SPR inventory, USA Non-SPR inventory, Crude oil non-commercial net long ratio, Real dollar index: generalized, COMEX: Gold: Future closing price, LME: Copper: Future closing price, WTI-Brent spot price spread, WTI crack spread: actual value and Brent crack spread: real value. Firstly, we can see Non-OPEC production and tight oil production were selected from the supply aspect. It suggested that with the reduction of OPEC production, the production capacity of Non-OPEC countries is increasing gradually. In particular, the US tight oil production has become an essential factor affecting the trend of oil prices in recent years. According to the IEA report, the global oil production is expected to increase by 6.4 million barrels to 107 million barrels by 2023. The US tight oil production growth will account for 60 of the global growth. Meanwhile, in 2017, the US was the world's largest producer.
From the demand aspect, global economic development is still the main driver of crude oil price. The USA PPI: energy, Euro PPI and USA: PMI factors are selected by all three methods. Crude oil is an important raw material for industry, agriculture, and transportation. It is also the main parent product of energy and chemical products in the middle and lower reaches. Therefore, crude oil plays a critical role in the price of domestic production and living materials. For example, when the oil price continued to fall sharply, which is bound to lower the overall price level of the country. Moreover, it will have a heavier negative effect on domestic currency fluctuations, which will shift the currency situation from expansion to contraction.
From the inventory aspect, the OECD inventory, the USA SPR inventory and the USA Non-SPR inventory are chosen into the elastic-net model. The inventor is an indicator of the balance of the supply and demand for crude oil. Furthermore, the impact of commercial inventory on oil prices is much more substantial. When the future price Table 2 Selected key features by GLMNET, SSL and BMA method
Methods
Selected feature ID GLMNET X 2 , X 5 , X 8 , X 9 , X 10 , X 13 , X 14 , X 15 , X 16 , X 18 , X 19 , X 20 , X 21 , X 26 , X 27 , X 28 , X 29 , X 30 Spike-slab Lasso X 13 , X 14 , X 15 , X 20 , X 21 , X 27 , X 29 , X 30 Bayesian model averaging X 13 , X 14 , X 15 , X 16 , X 18 , X 19 , X 20 , X 21 , X 27 , X 29 , X 30 is much higher than the spot price, the oil companies tend to increase the commercial inventory, which stimulates the spot price to rise and reduces the price difference. When the future price is lower than spot prices, oil companies tend to reduce their commercial inventories, and spot prices fall, which will form a reasonable spread with futures prices. From the speculate aspect, there is a positive correlation between the noncommercial net long ratio and oil price (Sanders et al., 2004). With the crude oil market and the stock market relation gradually strengthened, hedging plays a more important role in driving market trading (Coleman, 2012).
From the exchange rate market, the real dollar index: generalized was selected. The dollar index is used to measure the degree of change in the dollar exchange rate against a basket of currencies. If the dollar keeps falling, real revenues from oil products priced in dollars will fall, which will lead to the crude price high.
Form the commodity market, the LME: Copper: Future closing price factor is picked up. Copper has the function of resisting inflation, while the international crude oil price is closely related to the inflation level. There is an interaction between them from the long-term trend.
From the technical aspect, the WTI-Brent spread serves as an indicator of the tightening in the crude oil market. As the spread widens, which suggests the global supply and demand may be reached a tight equilibrium. The trend of price spread showed a significant difference around 2015 because of the US shale oil revolution. US shale production has surged substantially since 2014. However, due to the oil embargo, the excess US crude oil could not be exported, resulting in a significant increase in US Cushing crude oil inventories. The spread at this stage was mainly affected by the WTI price, and the correlation between the price difference and the oil price trend was not strong. After the United States lifted the ban on oil exports at the end of 2015, the correlation between the price spread and the oil price trend increased significantly. After the lifting of the oil export ban, the WTI oil price was no longer solely affected by the internal supply and demand of the United States. During this period, Brent and WTI oil prices were more consistent, resulting in a narrower spread. In addition, after 2015, the consistency of them increased significantly.
In summary, the factors selected by all three models are USA PPI: Energy, Euro PPI, USA: PMI (global economic development), Crude oil non-commercial net long ratio (speculate factor), Real dollar index: generalized (exchange rate market), LME: Copper: Future closing price (commodity market), WTI crack spread: actual value and Brent crack spread: actual value (technology factor).
Evaluation criteria and statistic test
To compare the forecasting performance of our proposed approach with some other benchmark models from level forecasting and directional forecasting, three main evaluation criteria, i.e., root mean squared error (RMSE), mean absolute percentage error (MAPE), directional symmetry (DS), which have been frequently used in recent years (Chiroma et al., 2015;Mostafa & El-Masry, 2016;Yu et al., 2008b;, are selected to evaluate the in-sample and out-of-sample forecasting performance. Three indicators are defined as follows: where x t andx t denote the actual value and forecasted value at time t, respectively, To provide statistical evidence of the forecasting performance of our proposed ensemble learning approach, three tests, i.e., the Diebold-Mariano (DM) (Diebold & Mariano, 1995) test, the Superior Predictive Ability (Hansen, 2005) test and Pesaran-Timmermann (PT) (Pesaran & Timmermann, 1992) test, are performed. The DM test checks the null hypothesis of equal predictive accuracy. In this study, the Mean Squared Error is applied as DM loss function and each model is compared against a random walk. As for performance measurement, we use the RMSE and MAPE. The PT test examines whether the directional movements of the real and forecast values are the same. In other words, it checks how well rises and falls in the forecasted value follow the actual rises and falls of the time series. The null hypothesis is that the model under study has no power in forecasting the oil prices.
Forecasting performance evaluation
To verify the predictive power of the variable Select-machine learning integrated approach, we selected 8 benchmark models, including 6 multivariate models (MLP, RBFNN, GRNN, ELMAN, WNN, ELM) and 2 univariate models (RW and ARMA). In this paper, the number of MLP output layer neurons is 1, the number of iterations in the training stage is 10,000, and the number of hidden layer neurons is determined by the trial and error method to be 11. Similarly, the number of LSTM hidden layers and the number of delays were set as 5 and 4, respectively, and the number of output layer neurons was set as 1. The structure of the LSTM neural network was trained by backpropagation algorithm (BP), and the learning rate, batch size and the number of the epoch were set as 0.05, 60 and 1000 respectively. The convergence rate is controlled by the learning rate, which is a function of decreasing time. When the number and learning rate of epochs is set at 1000 and 0.05, the convergence of the training set is realized, and the empirical results tend to be stable, which can recognize the convergence of the training set data. When the parameter combination changes, once it converges, the experimental results tend to be stable. All models are implemented using Matlab 2017B software.
According to the forecast results of Table 3, we can find some interesting conclusions: (1) no matter in the sample inside or outside the sample prediction, this chapter proposed variable selection -machine learning approach to integration in the training set and test set level precision (RMSE and MAPE) and direction (DS) were better than that of the single variable precision model and the core factors extracted model. (2) Among the variable selection-machine learning integration models, the BMA-LSTM integration model performs best, followed by Spike and Slab LASSO-LSTM and GLMN ET-LSTM. For example, the RMSE, MAPE and DS predicted in the SAMPLE of rhe BMA-LSTM integrated module were 1.12%, 0.74% and 84.88%, respectively, which were 3.77%, 3.28% and 35.46% higher than RW, and 1.09%, 0.61% and 12.21% higher than the LSTM model without variable selection. (3) When predicted one step in advance, RMSE, MAPE and DS of the BMA-LSTM integrated model were 2.04%, 0.83% and 81.40%, respectively, 3.7%, 4.36% and 32.56% higher than that of the RW model, and 1.13%, 0.73% and 11.63% higher than that of the variable free LSTM model. It shows that the prediction accuracy of the variable selection-machine learning integrated model is significantly improved compared with that of the univariate model and the univariate model. Secondly, the number of core variables selected by BMA is neither the most nor the least among the three variable selection models, indicating that the number of core variables will also affect the prediction results.
Statistic tests
According to Table 4 statistical test results can be seen that (1) the step ahead prediction samples, variable selection -machine learning integration model of DM test results are less than 7.195, this means that the performance of the proposed method under the confidence level of 99% is better than all the other benchmark model, the possible reason is the variable selection -machine learning integration model significantly improves the prediction performance of the model. (2) In the out-of-sample prediction 1 step in advance, when the LR model is used as the test model, the DM test results of other test models are far less than − 7.015, indicating that the predictive performance of the three integrated methods and the other three single models is better than that of the machine learning model without variable selection at the 99% confidence level.
(3) samples within 1 step ahead prediction and sample 1 step ahead prediction of the performance of the three variables extraction method was compared, the BMA-LSTM integration model to predict performance is the best, the next step is the Spike and slab LASSO-LSTM and GLMNET-LSTM, suggesting that this chapter puts forward the integrated research framework based on the variable selection-machine learning significantly improves the performance of the integrated machine learning approach. PT test results also give three interesting points: (1) In the one step forward insample forecasting, the PT test results of the proposed variable selection-machine learning integration approach are all rejected the movement direction of the actual independence assumption under 99% confidence level. This also means that the variable selection-machine learning method is the best direction prediction performance and also can be seen that the direction of the ARIMA predicts performance is the worst. (2) In the out-of-sample one-step forecasting, the PT test value of the predicted results of the integrated method is significantly greater than that of the single model, which means that the direction prediction ability of the integrated method is better than that of the single model, mainly because the variable selection-machine learning integration idea significantly improves the direction prediction performance of the single model.
(3) In the prediction in and out of sample 1 step in advance, it can be seen from the PT test results of variable Selection-machine learning integration method that the direction prediction accuracy of BMA-LSTM is the highest, followed by Spike-Slab Lasso-LSTM, which is mainly attributed to the direction prediction ability of variable Selection-machine learning method.
Conclusions and future work
In this paper, we proposed a variable selection and machine learning framework that combines the variable selection (BMA) and forecasting method (LSTM) to forecast the oil price and compared its forecasting performance with other primary and new variable selection methods (elastic-net and spike and slab Lasso). Moreover, compared to other popular benchmark forecast methods (RW, ARMA, MLP, RBFNN, GRNN, ElMAN, WNN, ELM). Specifically, our contributions are as follow: Introduce the variable selection before forecasting. In this process, we compare three different methods and analyze core influencing factors based on the literature review from supply and demand, global economic development, financial market, and technology aspects. The results showed that the variable of the SSL method is a subset of the BMA method, and the BMA method is a subset of the elastic-net.
Testing the performance of the proposed variable selection and machine learning framework based on 3 variable selections and 8 individual forecasts. Comparing with the 8 individual forecasts without variable selection, the combinations forecasting reduces the errors. The results showed that (1) the variable choice-machine learning integration method proposed in this chapter is superior to the univariate model and the model without core factor extraction in both training set and test set level accuracy (RMSE, MAPE) and direction symmetric (DS). (2) Among the variable selectionmachine learning integration models, The BMA-LSTM integration model performs best, followed by Spike and Slab LASSO-LSTM and GLMNET-LSTM. It shows that the prediction accuracy of the variable selection-machine learning integrated model is significantly improved compared with that of the univariate model and the univariate model. Secondly, the number of core variables selected by BMA is neither the most nor the least among the three variable selection models, indicating that the number of core variables will also affect the prediction results. (3) The statistical test results show that the prediction of 1 step in advance in-sample and 1 step in advance in out of sample. Compared with the prediction performance of the three variable extraction methods, the directional prediction accuracy and horizontal prediction accuracy of the BMA-LSTM integrated model are the best, followed by Spike and Slab-LASSO-LSTM and GLMNET-LSTM. This indicates that the variable selection-based machine learning integrated research framework proposed in this chapter significantly improves the forecasting performance of oil prices. In future research, we may introduce more independent variables with the help of internet search data, test our framework performance. Moreover, investor sentiment can be quantified in this process. In addition, different variable selection methods can be introduced more. | 8,880.2 | 2021-09-01T00:00:00.000 | [
"Computer Science"
] |
Effect of Metakaolin and Lime on Strength Development of Blended Cement Paste
: To develop a more reactive pozzolan for supplementary cementitious materials (SCMs), the co-calcination of kaolinite and limestone was investigated for its contribution to hydration of blended cement. Kaolinite (with ~50 wt% quartz impurity) was calcined at 700 ◦ C, and a mixture of kaolinite and limestone was calcined at 800 ◦ C. These activated SCMs were added to ordinary Portland cement (OPC), replacing ca. 30 wt% of the OPC. The compressive strength of these blended cement paste samples was measured after 28 and 90 days, while the hydration products and microstructural development in these blended cement pastes were analyzed by X-ray diffraction (XRD), thermogravimetric analysis (TGA), and scanning electron microscopy (SEM). The results revealed that adding free lime to OPC, together with metakaolin, led to enhanced compressive strength. The compressive strength of this new blended cement paste reached 113% and 112% of the compressed strength of pure OPC paste after 28 and 90 days of hydration, respectively. Furthermore, this study showed that the improvement was due to the increased consumption of Portlandite (CH), the formation of calcium aluminosilicate hydrate (CASH), and the reduction of porosity in the sample containing free lime and metakaolin.
Introduction
As far back as the Roman civilization, activated aluminosilicate, in the form of lava, was processed together with reactive lime, in the form of burnt limestone (CaO), to form extremely durable, pozzolanic cements [1,2]. More recently, metakaolin has been widely used as a source of activated aluminosilicate; and calcium hydroxide (CaO·H 2 O) has been used in place of burnt limestone [3]. When metakaolin in blended OPC reacted with free lime, which is one of the products of hydration reactions in OPC, additional calcium silicate hydrate (CSH) and CASH had formed [4][5][6][7][8][9][10][11][12]. These two phases offered the combined effect of increasing strength and durability, albeit over the long term [13][14][15][16][17]. Metakaolin is already being added to OPC in amounts ranging 5-30 wt% in many studies [12]. However, to use the full pozzolanic potential of this much metakaolin for forming CSH and CASH, one would need much higher amounts of free lime than what is produced by OPC hydration reactions.
To design an alternate solution, the use of schist-type materials was considered. Because pure clay minerals, such as kaolinite, are relatively scarce, using schist as an alternative clay source is more cost-effective [18]. Although the schist minerals do have a variation in their mineralogical composition, they generally contain quartz, different types of clay minerals, and feldspars in variable ratios [19,20]. Because they may additionally contain carbonates of calcium and magnesium, they already provide the two main ingredients for pozzolanic reactions in cement: aluminosilicates, i.e., clay minerals, and an abundant calcium source, limestone [19,20]. However, clays must first be activated-i.e., by calciningbefore they can be used as SCMs [7]. Calcined clays as SCMs have already been adopted by the cement industry due to their good pozzolanic characteristics. In pozzolanic reactions, calcined clays consume Portlandite, which is one of the main sources of cement durability problems [4][5][6][7]21]. On the other hand, calcined limestone, i.e., lime (CaO), was converted to hydrated lime (calcium hydroxide-CH) after being mixed with water and was reported to enhance the pozzolanic reactivity of calcined clays (metakaolin) [3]. Thus, co-calcining both clays and limestone may offer synergistic potential for enhancing pozzolanicity, producing blended cement paste with higher compressive strength and durability.
As the simplest-structured clay, kaolinite has been widely used in many studies as an SCM source [4][5][6][7][8][9][10][11][12]. Its advantage over other types of clays is the simplicity of dehydroxylation during heat-treatment. Kaolinite consists of a silicon oxide layer and an aluminum hydroxide layer [22], and this 1:1 structure facilitates the study of pozzolanic reaction mechanisms. Generally, the activation of clay minerals occurs through the destabilization of aluminum hydroxide layers. In the case of kaolinite, de-hydroxylation takes place in the temperature range of 400-700 • C [8]. This thermal activation leads to the transformation of kaolinite into the more disordered and unstable structure of metakaolin. Metakaolin is highly reactive in cement hydration reactions, i.e., under high pH values [7,12,23]. Ambroise et al. [24] evaluated the pozzolanic reactivity of heat-treated kaolinite in the calcination temperature range of 600-800 • C and reported that the optimum calcination temperature is 700 • C, to give maximum strength at 3, 7, and 28 days. Several groups [7][8][9][10][11]25] have used different ratios of calcined kaolinite, i.e., metakaolin, in blended cement and reported that it generally improved most of the properties of blended cement, such as strength, drying shrinkage, microstructure, and porosity. In addition, several studies [4,6,7,21,22] compared kaolinite to other clays and showed that it is the most reactive clay in pozzolanic reactions.
The addition of limestone to OPC was claimed to be another efficient way to improve compressive strength [26]. There were already several studies [27][28][29][30][31] that were focused on the use of pozzolanic materials, such as calcined clays, together with finely ground virgin limestone particles as the SCMs. This additive not only sped up the hydration at early age, but also served as a filler and influenced the hydrated structure of blended cement pastes. Pera et al. [32] observed physical and chemical changes during cement hydration, by adding different amounts of calcium carbonate to cement. They reported that calcium carbonate accelerated cement hydration and caused the formation of calcium carbosilicate hydrate phases. A mixture of pozzolanic material and limestone was also used as an efficient cement substitute, when used by the Romans as the first blended cement, based on natural volcanic ash together with limestone [1,2,33]. Recently, metakaolin was used together with calcite (limestone) in blended cement [5,12,[34][35][36]. Calcite reactions with cement during hydration were claimed to impart additional benefits to the hardened paste through the formation of hemi-carboaluminate, instead of mono-sulfoaluminate. Hemi-carboaluminate subsequently converted to mono-carboaluminate [12,37], which was another strength-providing phase. In limestone calcined clay (LC 3 ), which was introduced and developed by Scrivener et al. [18], the increased pozzolanic activity arose from adding both calcined clay and limestone to OPC. Antoni et al. [12] reported that in the LC 3 method, the excess alumina provided by metakaolin reacted with limestone, leading to stabilization of ettringite, and enabled elevated levels of substitution.
In previous studies [18,19], schist-type materials, containing several types of clays, were combined with calcium carbonate and evaluated as a partial replacement for cement. Calcite was decomposed over the temperature range of 700-900 • C [38]. The calcination of calcium carbonate produced free lime and carbon dioxide, suggesting that co-activation of clay minerals together with calcium carbonate could lead to using the full pozzolanic potential of calcined clays [19,20]. Furthermore, Weise et al. [3] reported an enhanced pozzolanic reactivity of metakaolin when adding calcium hydroxide (CH) to a metakaolin and Portland cement mixture. The use of CH with activated complex-structured clays was shown to be feasible. The next logical step would be co-activation of limestone with clays for pozzolanic reactions, which has not been reported previously in the literature.
Several factors affecting the compressive strength of cement are well known, such as the amount of Portlandite and CSH and CASH phases. Portlandite alone is a weak phase in terms of mechanical properties and does not contribute to compressive strength in a cement paste [39]. The CSH gel is the major strength provider in cement paste and concrete [13][14][15]. CASH is an additional hydrate phase that contributes additional compressive strength to cement paste [14][15][16][17]. Another factor strongly affecting the compressive strength of cement paste is porosity. It is known that the strength of any solid material, including cement paste, decreases with an increase of porosity [40][41][42][43]. The amount of macro-porosity in cement paste is a function of the water-to-cement (w/c) ratio [44].
In this study, the improvement in the compressive strength of blended cement pastes by using co-calcined kaolinite and limestone as an SCM is reported. The role of limestone in cement pozzolanic reactions was investigated by making a simple schist-like compound of kaolinite and limestone to be used as an SCM after calcination. To this end, blended cement paste samples were produced by including 30 wt% of SCMs in OPC. The pozzolanic reactivity of the SCMs were determined by quantifying the Portlandite consumption with XRD and TGA methods and measuring the compressive strength of cubic blended cement paste samples after 28 and 90 days of hydration.
Materials
Kaolinitic clay with~50 wt% purity was provided byŞişecam company in Istanbul, Turkey. Limestone (CC) powder and OPC (ENS 197-1 CEM 42.5 R type) were provided by AkçanSA Cement Manufacturing Company (Istanbul, Turkey). The phase composition of ground kaolinite (K), Limestone (L), and OPC samples were determined by using the characterization methods described in detail below. Table 1 summarizes the phase composition of these three samples. The chemical compositions of the three starting materials are given in Table S1 in the Supplementary Materials section. The limestoneadded kaolinite sample (KL) was prepared by mixing 15 wt% of limestone with 85 wt% of kaolinite. Table 1. Phase composition of the raw materials.
Calcination Process
Thermal treatment was performed on K and KL samples to activate their pozzolanic potential. Using a heating rate of 10 • C/min., the K samples were calcined at 700 • C, and the KL samples at 800 • C. These temperatures were determined according to the thermal decomposition behavior of the respective SCM, and were chosen based on the end temperature of each total weight loss step. For example, the decomposition reaction, de-hydroxylation, occurred at 700 • C in K samples, while de-hydroxylation and calcination took place at 800 • C in KL samples. The calcination steps were the following: 150 g of each powder was put into an alumina crucible, and then heated to 300°C and held for 1 h to eliminate absorbed water; subsequently, the powders were heated up to the final calcination temperature and held for 2 h, followed by air quenching to room temperature. Calcined K and KL samples were re-evaluated with TGA and XRD to monitor the progress of calcination. The calcined K and KL samples were further analyzed with a particle size analyzer to evaluate the effect of particle size on the compressive strength of blended cement paste samples.
Composite Cement Paste Samples
Calcined K or KL samples were added to OPC at ca. 30 wt% and mixed with water, such that the water-to-solid ratio (w/s) was 0.5. This replacement ratio and w/s were chosen to provide a basis for comparison with most of the other reports in the literature [12]. The blended cement paste samples were designated as K-OPC and KL-OPC. The composition of each cement paste sample was listed in Table 2. The samples were then prepared for compressive strength measurement in cubic molds with dimensions of 40 mm × 40 mm × 40 mm. As a benchmark for strength measurements, a pure OPC paste was prepared with the same procedure. After 24 h, the samples were separated from the molds and held in circulating water at 23 • C for 28 and 90 days. Compressive strength tests were performed on the cubic samples after the predetermined days of hydration. In addition, the amount of Portlandite was assessed after 28 and 90 days in each hydrated, blended cement sample with the aid of XRD (Bruker D2 phaser, Bruker AXS GmbH, Karlsruhe, Germany) and TGA (TGA/DTA; Netzsch STA 449 Jupiter, Netzsch GmbH, Selb, Germany) methods. The microstructure of the blended cement pastes was also analyzed in an SEM (LEO Supra 35VP, Zeiss GmbH, Oberkochen, Germany).
Characterization
To investigate the temperature-dependent chemical and structural evolution, simultaneous thermal analysis-thermogravimetric analysis and differential thermal analysis (TGA/DTA; Netzsch STA 449 Jupiter, Netzsch GmbH, Selb, Germany)-was performed on the samples, over a temperature range of 30 to 1000 • C, under nitrogen gas, and with a heating rate of 10 K/min. Based on the TGA of each kaolinite sample, the temperature ranges and weight-loss amounts were determined for clay de-hydroxylation and CC decomposition.
Changes in the crystalline phase content were analyzed in K or KL samples before and after the calcination process by XRD analysis (Bruker D2 phaser, Bruker AXS GmbH, Karlsruhe, Germany). Cu-Kα (λ = 1.54 Å) radiation was used. The divergence slit size was fixed at 0.5 • . Samples were scanned between 5 to 90 • 2 θ, using a step size of 0.02 • 2 θ and a dwell time per step of 1 s. Moreover, OPC, K-OPC, and KL-OPC pastes were also analyzed with XRD to determine the hydration products, and to quantify relative Portlandite amount of each sample after 28 and 90 days. In the case of quantifying the Portlandite amount, rutile was used as an external standard. Rietveld quantitative phase analysis was performed on the XRD patterns using the TOPAS-Academic V5 [45] in the launch mode jointly with the Jedit text editor.
To investigate the spatial distribution and morphology of the hydration products in OPC, K-OPC, and KL-OPC, microstructures of hydrated samples were imaged using a scanning electron microscope (SEM; LEO Supra 35VP field emission, ZEISS GmbH) equipped with an energy dispersive X-ray microanalyzer (EDS; Roenteg, Xflash; Bruker AXS GmbH, Karlsruhe, Germany). Imaging of samples was done by using an Everhart-Thornley secondary electron detector and an aperture size of 30 µm. Porosity measurements were performed by applying Fiji (ImageJ, National Institutes of Health, Bethesda, MD, USA) [46] software to binary images generated from the SEM micrographs. In addition to total porosity being determined, macroporosity was also obtained from the total areal fraction of pores with effective diameters larger than 1 micrometer.
The average particle size in OPC, calcined K, and calcined KL samples was measured to determine reactive surface area using a particle size analyzer (Mastersizer 3000, Malvern, UK) with a laser micro-analysis detector and a measurement range of 100 nm to 2 mm. Ethanol was used as the solvent.
The load-bearing performance of blended-cement paste samples was evaluated by performing compressive strength measurements. These measurements were carried out on 40 mm × 40 mm × 40 mm cubes of composite cement paste samples of OPC, K-OPC, and KL-OPC, using a compression testing machine (MATEST E161N Servo-plus Evolution, Treviolo (BG), Italy). Samples were subjected to uniaxial loading in compression until fracture. Average compressive strength values were determined from 3 measurements on cube samples.
Phase Analysis of Un-Treated and Calcined Kaolinite Samples with and without Limestone Additions
K and KL samples were analyzed by XRD before and after calcination to determine the phase composition. Figure 1 shows the XRD diffractograms of un-treated and calcined K and KL samples. The XRD patterns revealed that in both K and KL samples, only the crystalline quartz peaks (as an inert phase) remained unchanged during calcination. Therefore, the quartz peak at a d-spacing of 3.35 Å was used as the basis for an internal comparison to estimate the relative amounts of crystalline phases. Crystalline kaolinite peaks disappeared after the calcination process. In addition, there was a relative decrease in calcite peak intensity after calcination of the KL sample. Rietveld phase analysis of the XRD patterns revealed changes in phase composition with calcination in both K and KL samples. While the Kaolinite phase content decreased from 53 to 3 wt%, quartz increased from 47 to 97 wt% in the K sample ( Figure 1). A similar result for calcination of KL was obtained from Rietveld phase analysis of the XRD patterns: kaolinite content decreased from 46 to 2 wt%, and calcite content decreased from 14 to 7 wt%, while the quartz amount increased from 40 to 92 wt%. During calcination, the crystalline amounts in the K and KL powders decreased from 81.2 to 46% and 79.1 to 62%, respectively. These results indicate that crystalline kaolinite was successfully de-hydroxylated and amorphized during calcination. In addition, a portion of limestone was also decomposed after calcination of the KL sample.
Thermal Analysis of Un-Treated and Calcined Kaolinite Samples with and without Limestone Additions
TGA analyses was performed on un-treated K and KL samples to assess the the decomposition behavior. To check the effectiveness of calcination, calcined K and KL
Thermal Analysis of Un-Treated and Calcined Kaolinite Samples with and without Limestone Additions
TGA analyses was performed on un-treated K and KL samples to assess the thermal decomposition behavior. To check the effectiveness of calcination, calcined K and KL samples were also analyzed with TGA. Figure 2 summarizes the TGA thermograms of un-treated and calcined samples in the temperature range 25-1000 • C. The TGA curve of the K sample showed a single weight loss in the temperature range 400-700 • C. This temperature range corresponds to the de-hydroxylation of kaolinite [47]. However, there was an additional weight loss stage in the KL sample thermogram, in the temperature range 650-800 • C corresponding to decomposition of calcium carbonate (limestone) [48]. This diagram also revealed that there was a delay in the end temperature of kaolinite decomposition in the KL sample. It should be noted that decomposition of kaolinite in the KL sample did not reach completion at 700 • C, as it did in the K sample. Instead, in the KL sample, the decomposition of kaolinite slowed down, as reflected by a change in slope of the TG curve in this temperature range (600 • to 700 • C). When the decomposition of calcium carbonate started at higher temperatures, the weight loss curve became steeper again, indicating faster weight loss.
of calcite during calcination process.
Thermal Analysis of Un-Treated and Calcined Kaolinite Samples with and witho Limestone Additions
TGA analyses was performed on un-treated K and KL samples to assess t decomposition behavior. To check the effectiveness of calcination, calcined K an ples were also analyzed with TGA. Figure 2 summarizes the TGA thermogr treated and calcined samples in the temperature range 25-1000 °C. The TGA c K sample showed a single weight loss in the temperature range 400-700 °C. T ature range corresponds to the de-hydroxylation of kaolinite [47]. However, th additional weight loss stage in the KL sample thermogram, in the temperature 800 °C corresponding to decomposition of calcium carbonate (limestone) [48 gram also revealed that there was a delay in the end temperature of kaolinite d tion in the KL sample. It should be noted that decomposition of kaolinite in the did not reach completion at 700 °C, as it did in the K sample. Instead, in the K the decomposition of kaolinite slowed down, as reflected by a change in slop curve in this temperature range (600° to 700 °C). When the decomposition of c bonate started at higher temperatures, the weight loss curve became steeper a cating faster weight loss. The total weight losses, due to de-hydroxylation of clay and decomposition of calcium carbonate, were extracted from the TGA thermograms. Table 3 summarizes the weight loss in each sample in the two temperature ranges associated with the decomposition ranges of kaolinite (300-650 • C) and calcium carbonate (650-800 • C). The TGA results of calcined samples revealed that the amount of remnant (undecomposed) kaolinite in the K sample was 0.8 wt%. This value was 0.4 wt% in the calcined KL sample. The amount of left-over calcium carbonate in the KL sample associated with weight loss in the temperature range of 650-800 • C was 3.5 wt%. Calcium carbonate is known to lose 44% of its initial weight during calcination [38]. Therefore, the amount of undecomposed limestone in the calcined KL sample was ca. 8%. These results confirmed the complete decomposition of kaolinite and partial decomposition of limestone during calcination. Table 3. Weight loss amounts of un-treated and calcined kaolinite (K) and kaolinite/limestone (KL) samples at the temperature ranges of 300-650 • C (kaolinite decomposition range) and 650-800 • C (calcium carbonate decomposition range).
Amount of Portlandite in Blended Cement Paste Samples
The relative amount of Portlandite in blended cement pastes was measured by using XRD, with rutile added as an external standard reference [49]. Since the amount of rutile was fixed in the three samples, it was possible to compare the amount of Portlandite in the blended pastes by calculating the peak intensity ratio of Portlandite/rutile [49]. Figures 3 and 4 show the main peaks of rutile and Portlandite in the XRD diffractograms of OPC and blended cement pastes. The Portlandite/rutile peak intensity ratio in each sample was also calculated and shown. These calculations indicated a significant decrease in the Portlandite content in the blended pastes, when compared to that in the OPC paste. In addition, the calculations showed that the amount of Portlandite in KL-OPC was less than in K-OPC samples. Therefore, it was concluded that calcined KL showed a better pozzolanic reactivity than calcined K at both 28 and 90 days. amount of undecomposed limestone in the calcined KL sample was ca. 8%. These re confirmed the complete decomposition of kaolinite and partial decomposition of l stone during calcination.
Amount of Portlandite in Blended Cement Paste Samples
The relative amount of Portlandite in blended cement pastes was measured by u XRD, with rutile added as an external standard reference [49]. Since the amount of r was fixed in the three samples, it was possible to compare the amount of Portlandi the blended pastes by calculating the peak intensity ratio of Portlandite/rutile [49]. Fi 3 and Figure 4 show the main peaks of rutile and Portlandite in the XRD diffractogr of OPC and blended cement pastes. The Portlandite/rutile peak intensity ratio in sample was also calculated and shown. These calculations indicated a significant decr in the Portlandite content in the blended pastes, when compared to that in the OPC p In addition, the calculations showed that the amount of Portlandite in KL-OPC was than in K-OPC samples. Therefore, it was concluded that calcined KL showed a b pozzolanic reactivity than calcined K at both 28 and 90 days. The amount of Portlandite in hydrated blends and reference OPC paste was also measured by using thermogravimetric analysis and quantified from the dehydration weight loss between 400-500 • C, using the tangent method [49]. The quantification results of all samples are summarized in Table 4. Amount of Portlandite after 28 and 90 days in OPC, OPC blended with calcined kaolinite (K-OPC), and OPC blended with calcined kaolinite/limestone (KL-OPC) pastes, obtained from the TGA-determined weight loss amounts. There was a small decrease in Portlandite content in the blended pastes (compared to OPC paste) after 28 days. However, these two samples (K-OPC and KL-OPC) showed a remarkable pozzolanic reactivity at 90 days, such that there was a significant consumption of Portlandite in these samples compared to the pure OPC paste. These measurements confirmed that the KL-OPC blend showed better pozzolanic reactivity than K-OPC at both 28 and 90 days. Figures 5 and 6 present the TGA and derivative of TGA (DTG) diagrams of all three samples after 28 and 90 days.
Constr. Mater. 2022, 3, FOR PEER REVIEW Figure 3. XRD diffractograms of OPC, OPC blended with calcined kaolinite (K-OPC), and blended with calcined kaolinite/limestone (KL-OPC) pastes at 28 days showing Portlandite-to-r (P/R) peak intensity ratio, where rutile was added with a fixed amount to these three sampl external standard reference. The amount of Portlandite in hydrated blends and reference OPC paste was measured by using thermogravimetric analysis and quantified from the dehydra weight loss between 400-500 °C, using the tangent method [49]. The quantification res of all samples are summarized in Table 4. Amount of Portlandite after 28 and 90 day OPC, OPC blended with calcined kaolinite (K-OPC), and OPC blended with calc kaolinite/limestone (KL-OPC) pastes, obtained from the TGA-determined weight amounts. There was a small decrease in Portlandite content in the blended pastes (c pared to OPC paste) after 28 days. However, these two samples (K-OPC and KL-O showed a remarkable pozzolanic reactivity at 90 days, such that there was a signifi consumption of Portlandite in these samples compared to the pure OPC paste. T measurements confirmed that the KL-OPC blend showed better pozzolanic reacti than K-OPC at both 28 and 90 days. Figures 5 and 6 present the TGA and derivativ TGA (DTG) diagrams of all three samples after 28 and 90 days.
Hydration Products of Blended Cement Paste Samples
To determine the phase composition of the hydrated samples, the OPC cement pastes were analyzed using XRD. First, the amorphous vs. crystall each sample was determined. The amount of each crystalline phase was a with Rietveld analysis. It should be noted that no external standard referen used in these measurements. Table 5 summarizes the results of Rietveld quantitative analysis of t Quantitative phase analysis of the diffractograms (shown in Figure 7) revea in ettringite content by the addition of SCMs to the OPC paste after both 2 The rate of this decrease in the K-OPC paste was slightly higher than in the 28 and 90 days. The amount of monosulfate (aFm) phase increased with t ettringite, as expected. The results showed that the maximum amount o phase was found in K-OPC after 28 days, and in KL-OPC after 90 day
Hydration Products of Blended Cement Paste Samples
To determine the phase composition of the hydrated samples, the OPC and blended cement pastes were analyzed using XRD. First, the amorphous vs. crystalline content of each sample was determined. The amount of each crystalline phase was also quantified with Rietveld analysis. It should be noted that no external standard reference (rutile) was used in these measurements. Table 5 summarizes the results of Rietveld quantitative analysis of the XRD data. Quantitative phase analysis of the diffractograms (shown in Figure 7) revealed a decrease in ettringite content by the addition of SCMs to the OPC paste after both 28 and 90 days. The rate of this decrease in the K-OPC paste was slightly higher than in the KL-OPC after 28 and 90 days. The amount of monosulfate (aFm) phase increased with the decrease in ettringite, as expected. The results showed that the maximum amount of monosulfate phase was found in K-OPC after 28 days, and in KL-OPC after 90 days. Stratlingite (CASH) appeared only in the blended cement pastes. The amount of this phase in the K-OPC and KL-OPC pastes increased as the hydration time increased from 28 to 90 days.
Evaluating Microstructural Changes Due to Co-Calcined Kaolinite and Limestone in Blended
Cement Paste Samples Figure 8 shows the SEM secondary electron (SE) images of OPC (a,b), K-OPC (c,d), and KL-OPC (e,f) pastes after 90 days of hydration at two different magnifications. The OPC paste images showed the typical CSH formation, which had already developed crystalline regions (late-age CSH) containing significant fine porosity. The microstructure of K-OPC and KL-OPC pastes showed the presence of an additional platy habit phase in addition to CSH, which was identified as CASH through EDS analysis. Moreover, both SCM-blended samples-i.e., K-OPC and KL-OPC-had a much smoother surface-i.e., less porosity-than the OPC one. In particular, in the KL-OPC sample, the macropores (i.e., pores with diameter >1 µm) occupied half of the area fraction observed in pure OPC or K-OPC microstructures. To confirm the assumption that the platy grains in K-OPC and KL-OPC microstructures were CASH, EDS elemental analysis was performed on the KL-OPC sample containing both CSH and CASH. Two regions with different morphologies, as indicated with circles labeled with point 1 and point 2 in Figure 9, were analyzed to elucidate their chemistry. Figure 10 shows the EDS spectra and the relative amounts of Ca, Si and Al at (a) point 1 (CSH) and (b) point 2 (CASH). The Al/Si ratio was 0.24 at point 1 and 2.29 at point 2. The significant amount of Al at point 2 with platy morphology confirmed the existance of CASH in the microstructure. The Ca/Si and Ca/Al ratios at point 1 were calculated as 1.87 and 7.63, respectively. These ratios were also calculated for platy region at point 2 as 1.2 and 0.53, respectively. To confirm the assumption that the platy grains in K-OPC and KL-OPC microstructures were CASH, EDS elemental analysis was performed on the KL-OPC sample containing both CSH and CASH. Two regions with different morphologies, as indicated with circles labeled with point 1 and point 2 in Figure 9, were analyzed to elucidate their chemistry. Figure 10 shows the EDS spectra and the relative amounts of Constr. Mater. 2022, 3, FOR PEER REVIEW 12 confirmed the existance of CASH in the microstructure. The Ca/Si and Ca/Al ratios at point 1 were calculated as 1.87 and 7.63, respectively. These ratios were also calculated for platy region at point 2 as 1.2 and 0.53, respectively. Figure 9, showing CSH and CASH, re tively, and the relative amounts of Ca, Si, and Al at Point 1 and Point 2.
Particle Size Analysis of Calcined SCM Samples
Particle size analyses of OPC, calcined K, and calcined KL powders indicated tha ingredients of the blended cement paste samples had similar particle size distribution shown in Figure 11. The particle size distribution of an SCM impacts the amount o sorbed water and, consequently, the compressive strength of blended cement paste s ples. Figure 9, showing CSH and CASH, respectively, and the relative amounts of Ca, Si, and Al at Point 1 and Point 2.
Particle Size Analysis of Calcined SCM Samples
Particle size analyses of OPC, calcined K, and calcined KL powders indicated that all ingredients of the blended cement paste samples had similar particle size distributions, as shown in Figure 11. The particle size distribution of an SCM impacts the amount of absorbed water and, consequently, the compressive strength of blended cement paste samples.
Compressive Strength of Blended Cement Paste Samples
The mechanical performance of blended cement paste samples was investigated by compressive strength measurements. Figure 12 shows the compressive strengths of SCM
Compressive Strength of Blended Cement Paste Samples
The mechanical performance of blended cement paste samples was investigated by compressive strength measurements. Figure 12 shows the compressive strengths of SCMblended samples compared to the pure OPC paste reference. The results showed that K-OPC reached 99% and 101% of the compressive strength of pure OPC after 28 and 90 days of hydration, respectively. By contrast, the compressive strength values of KL-OPC samples surpassed the one of pure OPC reaching 113% and 112% of the pure OPC strength, respectively. The best compressive strength value was exhibited by the KL-OPC sample after both 28 and 90 days. The compressive strength per weight of OPC (f c /W cement ) were calculated for all cement pastes. For pure OPC, f c /W cement was taken as 100%. After 28-days of hydration (f c /W cement ) K-OPC was 141% and (f c /W cement ) KL-OPC was 162%. After 90 Days of hydration the values were 144% and 159% for K-OPC and KL-OPC, respectively.
Compressive Strength of Blended Cement Paste Samples
The mechanical performance of blended cement paste samples was investigate compressive strength measurements. Figure 12 shows the compressive strengths of S blended samples compared to the pure OPC paste reference. The results showed tha OPC reached 99% and 101% of the compressive strength of pure OPC after 28 and 90 d of hydration, respectively. By contrast, the compressive strength values of KL-OPC s ples surpassed the one of pure OPC reaching 113% and 112% of the pure OPC stren respectively. The best compressive strength value was exhibited by the KL-OPC sam after both 28 and 90 days. The compressive strength per weight of OPC (fc/Wcement) w calculated for all cement pastes. For pure OPC, fc/Wcement was taken as 100%. After 28of hydration (fc/Wcement)K-OPC was 141% and (fc/Wcement)KL-OPC was 162%. After 90 Day hydration the values were 144% and 159% for K-OPC and KL-OPC, respectively.
Using the Full Pozzolanic Potential of Metakaolin
The goal in this study was to develop a more reactive SCM for exploiting the full pozzolanic potential of metakaolin by co-calcining kaolinite together with limestone. As revealed by SEM imaging (Figure 8) and XRD analysis (Figure 7), the additional free lime reacted with more metakaolin in the presence of water, yielding a less porous microstructure. Even though an increased amount of calcium hydroxide in a cement paste is known to weaken the strength and durability of OPC [39], in metakaolin-blended cements, it reacted with metakaolin and yielded higher strength values (Figure 12).
The simultaneous decomposition of kaolinite and limestone through co-calcination at 800 • C produced a reactive amorphous precursor, i.e., a mixture of activated kaolinite (metakaolin) and reactive lime, as confirmed by using XRD (Figure 1) and TGA (Figure 2). Even though activating limestone through calcium carbonate decomposition required a higher temperature than that of kaolinite, it should be noted that exposing metakaolin to this higher calcination temperature that was required for the decomposition of CaCO 3 did not lead to sintering or a loss of pozzolanicity of metakaolin. The consequence was that kaolinite was allowed to survive, even at temperatures under which it would have sintered when alone. This delay is significant because sintering of metakaolin annihilates its reactivity.
Forming cement paste samples by blending the new pozzolan with OPC provided the basis for demonstrating the full pozzolanic potential of metakaolin. As verified by XRD (Figures 3 and 4) and TGA (Figures 5 and 6) results of the blended cement paste samples, less Portlandite remained in the KL-OPC paste compared to that in the K-OPC and OPC ones after both 28 and 90 days of hydration. Furthermore, Rietveld analysis (Table 5) revealed that the amount of CASH in the KL-OPC paste was slightly higher than in the K-OPC after 90 days. There was no CASH present in the OPC paste. Because the amount of poorly crystalline CSH cannot be quantified through Rietveld analysis of XRD data, the increase in the amorphous-to-crystalline ratio served as the indicator of CSH increase in K-OPC. The amount of CSH also increased in the KL-OPC sample. These results confirmed the increase in pozzolanic reaction in the KL-OPC paste compared to the one in the K-OPC paste, especially after 90 days of hydration. The intentional addition of CaO (lime) to the system impacted the progress of pozzolanic reactions through increasing Portlandite consumption by metakaolin. The excess amount of CaO reacted with water, increasing [OH − ] and consequently, the pH of the hydration solution. It has been reported that the addition of CH to blended cement enhanced the reactivity of metakaolin [3]. It appeared that the extra Ca 2+ ions and increased basicity provided by the calcined limestone (CaO) led to a more reactive metakaolin, forming more CSH and CASH. The full potential of metakaolin was used by the consumption of Portlandite produced by OPC hydration reactions and added Ca 2+ ions from co-activated limestone.
Increased Compressive Strength in Blended Cement Pastes
The compressive strength of KL-OPC cement paste after 90 days of hydration was 12% higher than that of pure OPC and 11% higher than that of K-OPC ( Figure 12). In contrast, K-OPC paste only improved compressive strength by 1% over that of pure OPC. The superior mechanical performance of KL-OPC was attributed to several factors. One reason was the lower amount of Portlandite in KL-OPC, as shown in TGA measurements (Figures 5 and 6). Portlandite, which is known to be detrimental to compressive strength [39], was consumed significantly by co-calcined kaolinite and limestone to form extra (secondary) CSH and CASH. The amount of this consumption after 90 days of hydration was 36% higher than in the K-OPC cement paste-i.e., the one in which only calcined kaolinite was added-according to the TGA results in Figure 6.
The second reason for the compressive strength improvement in the KL-OPC cement paste was associated with the increased amount of CSH and CASH in the new SCM-i.e., the calcined KL. CSH and CASH determine the mechanical performance of cement pastes [13][14][15][16][17]. As revealed by the Rietveld analysis results of XRD spectra of hydrated samples (shown in Table 5), the addition of metakaolin or metakaolin/CaO mixture to OPC not only altered the amounts of the hydration products, but also led to the formation of new phases. The XRD ( Figure 7) and SEM (Figure 8) results showed the formation of a new phase, CASH, in SCM-added pastes. The amount of this new phase in the KL-OPC sample was slightly more than in K-OPC. The amount of CSH gel also increased in the KL-OPC sample due to the conversion of Portlandite into additional, "secondary" CSH in this sample. As a confirmation of the XRD results, the presence of a platy phase in SEM images (Figure 8) also indicated the formation of CASH in SCM-blended pastes.
Porosity is a major factor affecting the mechanical performance of cement pastes. The pure OPC paste microstructure (Figure 8a,b) had a mesoporous matrix, with the typical, late-age CSH phase. Further development of the microstructure in the OPC paste is rather slow and is not expected to significantly contribute to durability. On the contrary, the microstructure of the K-OPC (Figure 8b,c) and KL-OPC (Figure 8e,f) pastes revealed an amorphous appearance with less porosity. The implication is that the CSH phase in these two samples was expected to continue its crystallization, giving higher strength to the pastes over the long term and making them more durable. Comparison between the microstructures of the two blended cement pastes suggests that the KL-OPC sample with a higher amount of amorphous content and less porosity is expected to be more durable than K-OPC paste. Porosity gives rise to strength and durability issues in cement and concrete [40][41][42][43]. Because the metakaolin-added K-OPC and metakaolin/CaO-added KL-OPC pastes contained less porosity than the OPC paste, the microstructure images could serve as a confirmation of the higher compressive strength of these two samples compared to that of OPC. The denser microstructure in K-OPC and KL-OPC could be explained by the presence of SCMs facilitating water ingression and the lower w/s ratio that led to the reduction in porosity [50]. In addition, the KL-OPC sample, which showed almost no porosity, yielded the highest compressive strength value.
The new, co-activated kaolinite/limestone pozzolan demonstrated increased compressive strength in a blended cement paste, due to: (1) the formation of CASH and secondary CSH, (2) the additional consumption Portlandite, (3) the increased ettringite conversion to monosulfate and (4) the reduction in porosity.
In previous work, limestone (CaCO 3 ) was used without calcination, because the additional CaO was expected to increase the amount of Portlandite. In the current study, it was demonstrated that co-calcination of limestone and kaolinite produced lime, which enhanced metakaolin reactivity. Although the production of lime added excess Ca 2+ ions to the blended cement paste, the enhanced metakaolin reactivity, combined with using the full pozzolanic potential of metakaolin, consumed these extra Ca 2+ ions, in addition to the existing Portlandite (from OPC hydration). Hence, it was demonstrated that co-calcination of kaolinite and limestone produces a more reactive pozzolan and a blended cement of 12% higher compressive strength compared to pure OPC.
Conclusions
This study demonstrated enhancement and acceleration of pozzolanic reactions for metakaolin/OPC mixtures when active calcium ions were added to the system. In previous studies, calcined kaolinite-i.e., metakaolin-was shown to react with Portlandite to form more CSH and CASH during OPC hydration. Co-calcined kaolinite and limestone were incorporated as an SCM to OPC, and it was shown that the addition of active calcium ions, along with metakaolin, to OPC improved the compressive strength of the resulting blended cement paste. The positive impact of the new SCM was attributed to (i) the increased consumption of Portlandite by metakaolin, (ii) the formation of more CSH and CASH, (iii) the reduction of macro and meso-porosity in the microstructure, and (iv) the enhancement of the conversion of ettringite to mono-sulfate.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author. | 9,251.2 | 2022-11-14T00:00:00.000 | [
"Materials Science",
"Engineering"
] |
Quantum fluctuations drive nonmonotonic correlations in a qubit lattice
Fluctuations may induce the degradation of order by overcoming ordering interactions, consequently leading to an increase of entropy. This is particularly evident in magnetic systems characterized by nontrivial, constrained disorder, where thermal or quantum fluctuations can yield counterintuitive forms of ordering. Using the proven efficiency of quantum annealers as programmable spin system simulators, we present a study based on entropy postulates and experiments on a platform of programmable superconducting qubits to show that a low level of uncertainty can promote ordering in a system impacted by both thermal and quantum fluctuations. A set of experiments is proposed on a lattice of interacting qubits arranged in a triangular geometry with precisely controlled disorder, effective temperature, and quantum fluctuations. Our results demonstrate the creation of ordered ferrimagnetic and layered anisotropic disordered phases, displaying characteristics akin to the elegant order-by-disorder phenomenon. Extensive experimental evidence is provided for the role of quantum fluctuations in lowering the total energy of the system by increasing entropy and defect clustering. Our thorough and comprehensive application of an intentionally introduced noise on a quantum platform provides insight into the dynamics of defects and fluctuations in quantum devices, which may help to reduce the cost associated with quantum processing.
Entropy-driven ordering mechanisms have been successfully employed to explain the formation of colloid materials 1,2 , although only a few observations in macroscopic systems 3 have been reported thus far.Experimental investigations into magnetic ordering driven by thermal and quantum fluctuations have been limited, despite the potential impact a deep understanding of the fundamental mechanisms governing the formation of ordered structures could have on the assembly and organization of magnetic nanostructures.
The state of a disordered system is defined by the even or uniform distribution of its constituent elements throughout space.Conversely, the concept of order pertains to the arrangement of these elements, where they tend to segregate or group together based on their similar properties.The second law of thermodynamics established entropy as a measure of disorder, with Boltzmann's expression for the entropy of a closed system, S ∼ ln Ω, stating that the entropy S increases with a higher number of accessible states Ω for the system 4 .Temperature, which relates to thermal fluctuations, plays a crucial role in altering the entropy or degree of ordering in a system.Higher temperatures provide more ways to distribute energy among the available states, leading to higher entropy values.While it may seem counterintuitive, an increase in thermal fluctuations can actually promote ordering within a system through the phenomenon known as order by disorder [5][6][7][8] .This phenomenon occurs in frustrated magnetic systems, where the classical ground state manifold possesses higher symmetry than the underlying Hamiltonian.Both thermal and quantum fluctuations can break the degeneracy and induce ordering in the system while increasing its entropy.
In the present investigation, we design magnetic systems with a highly frustrated classical ground state manifold in which to observe an increase of correlations between magnetic moments when thermal and quantum fluctuations are finely tuned.The idea that adding quantum fluctuations could promote order appears counterintuitive.For example, in magnetic kagome lattices a monotonic increase of magnetic moment correlations is found with increasing temperature.
Here we demonstrate that the inclusion of quantum fluctuations in asymmetric (Lieb) kagome lattices can produce nonmonotonic correlations among magnetic moments.
Results and discussion
This challenging problem for established macroscopic magnetic realizations [9][10][11][12][13] can be implemented in a straightforward manner in a quantum annealer hardware.Free of spurious interactions and defects that plague spin ice systems, the platform of interconnected qubits provided by D-Wave Quantum (Fig. 1a) enables the observation of many-body interactions driven uniquely by entropic effects.By using entanglement and superposition of tailor-designed quantum states, we unveil with a high degree of accuracy the interplay between temperature and domain-wall correlation in a continuously distorted while frustrated qubit lattice.In particular, the maximally frustrated kagome lattice is taken as a reference, and an effective deformation is introduced by modifying coupling parameters that mimics an applied strain to the real-space lattice.The kagome lattice is thus placed in a broader antiferromagnet geometry, showing that its constrained and correlated disorder is a critical point between fully ordered and uncorrelated disordered states.This flexibility of the degree of disorder in one single topology allows us to study the nonmonotonic evolution of qubit-qubit ordering within an effective range of temperature.
Our model of lattice transformation is described as the continuous deformation of a kagome (depleted triangular lattice) to a Lieb lattice (depleted square lattice), and to a deformed hexagonal lattice, as shown in Fig. 1c-e.Each site has a classical representation given by a magnetic moment in either " or # state orientation.In the following, we will refer to two types of sites, σ and Σ, denoting the Ising qubits circled in black and yellow, respectively.The unit cell in Fig. 1b features three nodes and two antiferromagnetic coupling constants J and J L .Different classical ground states can be obtained depending on the ratio J L /J, which controls the system frustration.J L /J < 1 yields an ordered ferrimagnetic configuration (Fig. 1c), where all σ qubits have the same alignment, and opposite to that of Σ qubits.The net magnetic moment per unit cell is |M| = 1/3.If J L /J > 1, the weaker coupler J is frustrated in a layered ground state.σ-site qubits align antiferromagnetically (Fig. 1e) along the stronger (shorter in the lattice representation) bond J L , forming layered, parallel antiferromagnetic lines, as shown by the solid gray lines in Fig. 1e.The Σsite qubits can be viewed as buffers that do not transfer information between antiferromagnetic lines, namely the energy of the ground state is unaltered upon a spin-flip operation on any of those sites.They are shown in white since they exhibit no specific order.The ordered lines are thus mutually uncorrelated and disordered in the vertical direction.This is only true in the ground state, as we shall see below, much as in the prototypical domino model of order by disorder 5 .Fluctuations develop correlations among antiferromagnetic lines, making the layered phase a quasi-one-dimensional (Q1D) phase, and the average magnetization zero.
At the critical point, J L = J, the lattice becomes a frustrated Ising antiferromagnet on a kagome lattice 14,15 (Fig. 1d).The low-energy state manifold of the two models consists of disordered qubits that obey the ice rule, namely the three spins in a triangle must add up to ± 1.In the ground state at zero temperature qubits are disordered and strongly correlated, with a finite correlation length 16 .The average magnetization is zero.
Note that both the ferrimagnetic (J L /J < 1) and the layered (J L /J > 1) phases are deviation from the kagome ice configuration (J L /J = 1), as both obey the ice rule as defined above in terms of minimal ± 1 charge 16 .Therefore, at T = 0 the hexagonal qubit ice is the structural critical point between an ordered ferrimagnetic phase and an anisotropically disordered layered phase.Analogously, square ice had been also shown to be a structural critical point between antiferromagnetic order and a state of subextensive disorder 15,17,18 .After discussing the (c-e) show the Lieb lattice with anisotropic couplings J (solid gray lines), and J L (dotted gray lines).Solid gray lines represent stronger couplings.Topological Lieb-to-kagome transformation is equivalent to reducing J L with respect to J that is kept constant.In (c), J > J L yields a ferrimagnetic ground state configuration.The kagome ice configuration in (d) is reached when J L = J.The layered configuration in (e) is equivalent to considering J < J L .Up and down magnetic moments are depicted in blue and red respectively.
anticipated ground state, we move on to investigating a finite-sized realization of J L < J kagome-derived lattices, where the system dynamics is dominated by quantum rather than thermal fluctuations.
Our finite-size qubit model consists of a lattice of 21 × 21 antiferromagnetic (AFM) chains, along with the mediating qubits, resulting in a total of 641 logical qubits.To represent a logical spin, three qubits are connected by strong ferromagnetic couplings with a magnitude of J = − 2, resulting in a total of 1923 qubits embedded in a D-Wave Pegasus topology.The collective dynamics of these qubits is described by a quantum transverse-field Ising model, governed by the Hamiltonian: where σ, Σ are Pauli matrices of the corresponding qubits on σ, Σ sites, respectively.In the absence of the transverse field, Γ = 0, Fock products of eigenstates of σ z are eigenvalues of the Hamiltonian, and the problem maps exactly onto the classical one described above.If Γ ≠ 0, the terms within the first parenthesis do not commute with the terms within the second parenthesis, and the transverse field subjects the spin degrees of freedom to quantum fluctuations.The unitless parameter s = t/t f controls the annealing progress from the quantummechanical superpositions of states at s = 0 to the classical state at s = 1.
The Ising energy scale is thus controlled by J ðsÞ which increases as the quantum fluctuations Γ(s) away out according to well-established annealing protocols.
Starting from a transverse-field induced quantum-superposition state, the forward annealing protocol is used to obtain thousands of classical configurations at different J L /J ratios.Figure 2a shows the magnetization per qubit vs. J L /J for a series of J values, averaged over 25600 annealed samples.In the ground state of an infinite lattice in the ferrimagnetic phase (J L /J < 1), where the magnetization is the order parameter, the two twofold coordinated spins antialign with the spin in the fourfold vertex.The stepwise dashed-dotted line points out the absolute value of its magnetization at T = 0, with a constant value of 1/3 for J L /J < 1 and of zero for J L ≥ J.The open boundaries in the chains induce fluctuations that cause the initially flat line to transform into a smoothly increasing curve for J L /J > 1.The finite value of J precludes the magnetization 〈m〉 from abruptly transitioning to 1/3, allowing for the possibility of deviations from the 2-up-1-down rule in the ferrimagnetic phase.At the exact value of J = J L , the ensemble meets the degenerate ice-rule ground states of the kagome lattice, with a ± 1 net magnetization on each triangle realizing an overall 〈m〉 = 0.The magnetization of the slab acquires finite values due to its finite size.The quality of the annealing process is reflected in the two sets of lines plotted in Fig. 2e, f.A swift annealing of t f = 1μs demonstrates to be less efficient than a slower t f = 100 μs annealing yielding lower 〈m〉 in the ferrimagnetic phase.
Structure factors indicating the underlying spin alignment for each J L /J ratio are displayed in Fig. 2b-d.
is the Fourier transform of the correlation between the ith-and the jthspin, and pinpoints a number of features: the ferrimagnetic order is clearly visible in Fig. 2b with peaks at q = ( ± π, ± π) of the Brillouin zone.The S(q) ≠ 0 at q = (0, 0) shows evidence of long-range correlations.Figure 2d shows a ridged line structure expected in the layered phase as corresponds to the one-dimensional antiferromagnetic order.Figure 2c shows the smeared hexagonal pattern typical of the kagome antiferromagnet.
It is also possible to explore higher energy states for fixed frustration J L /J, by varying both J L and J at a constant ratio.The coupling strength is subjected to at-will variations to mimic the effect of temperature, so that increasing J is equivalent to reducing the effect of thermal fluctuations.With discrete variations of the coupling strength (J L = 0.1. .1.0 in steps of 0.1) the system's effective temperature is mimicked, and it can be regarded as inversely proportional to the coupling strength between logical qubits.In this scenario, T eff = 1/J L has been used 15 to represent a useful notion of effective temperature, T eff .Figure 2d, e display the fraction of triangles obeying the ice rule plotted as a function of T eff = 1/J L , for fixed values of J L /J during fast (d) and slow (e) annealings.The system obeys the ice rule at low T eff .
The dimensionally reduced phase affords interesting similarities with the domino model 5 , where correlations develop as the system is subject to thermal fluctuations.The elegant order-by-disorder mechanism is due to the presence of excitations, absent in the ground state.Other layered nanomagnets recently realized 19,20 behave similarly by promoting linear arrangements that are mutually uncorrelated in the ground state, but can be correlated by entropy under stress.Interestingly, an analogous mechanism of entropy-induced correlations in the Q1D phase can also be expected, which relies on the sea of free Σ spins, that are uncorrelated in the ground state but correlate under fluctuating excitations.
Figure 3 illustrates that this mechanism proceeds through an entropic gain stemming from the correlation of domain walls.It is essential to note that in the ground state of the quasi-one-dimensional (Q1D) system, all σ-qubits lines (indicated by black circles) exhibit antiferromagnetic ordering, while the Σ qubits (indicated by yellow circles) have random orientations because their net coupling with the antiferromagnetic σ spins is zero.Consequently, in the ground state, there is no correlation among the antiferromagnetic lines.Excited states, on the other hand, involve the formation of domain walls on these lines, which separate domains with different antiferromagnetic orientations.
It is important to observe that a domain wall in an antiferromagnetic line possesses lower energy when located at the position of a Σ qubit, effectively locking it in an orientation opposite to that of the domain wall.In this scenario, the energy cost of creating a domain wall between two contiguous σ sites at both sides of a Σ spin is ΔE = 2(J L − J).In contrast, placing the same domain wall away from a Σ spin (as depicted in Fig. 3b) would result in a higher energy cost, ΔE = 2J L .It is therefore energetically advantageous for a domain wall to be adjacent to a Σ spin.
Next, consider two domain walls on two adjacent antiferromagnetic lines, with Σ spins in between.If these two domain walls lock onto the same Σ spin, they, and the domain they separate, adopt the same orientation, thereby contributing to a transverse correlation.However, there is no energy preference for having the two domain walls aligned by the same Σ spin.The energy remains the same if they lock onto two different Σ qubits.Nevertheless, there is an entropic advantage in having the two domain walls aligned because it allows one Σ spin to remain free to fluctuate, resulting in an energy gain of ΔS = ln2.Consequently, we can deduce that entropy promotes the alignment of domain walls in parallel chains and favors the existence of floppy spins, increasing the transverse correlation among these lines.
To investigate this fluctuation-induced transverse correlation, both perpendicular across lines, hσσ 0 i ?, and parallel along a line, hσσ 0 i k , nearest-neighbor correlations are considered and defined as where the star * denotes the normalized sum over σ spins at the ith position along the lth line.〈〉 denotes the average over 25600 classical states.At T = 0, hσσ 0 i ?= 0, since the prevalent energy is the coupling term that antiferromagnetically aligns the mutually uncorrelated chains preventing the formation of domain walls.At large temperature, J is overwhelmed by thermal fluctuations and the whole system becomes uncorrelated.As a result, it is anticipated that hσσ 0 i ?will exhibit a non-monotonic behavior with respect to the (effective) temperature.In the following, the effective temperature of the parallel chain system will be taken as inversely proportional to the coupling strength between logical qubits, T eff ~1/J L 15 .Figure 4b, d shows hσσ 0 i k plotted as a function of T eff for a series of values of J L /J.Annealing procedures are set to last t f = 1μs and t f = 100 μs, respectively.As it is generally the case for correlations, the curves are monotonic in T eff for J/J L < 1 (i.e. when the system is in the Q1D phase).At low-T eff , assuming that the frustration quantifier J/J L remains close to 1, an antiferromagnetic state (hσσ 0 i k = 1) is reached at the lowest T eff .If J/J L → 1 − , the curves approach uniformly the kagome ice behavior (black line).The collinear correlation for kagome ice at low-T eff is only slightly higher than the value hσσ 0 i k = À 4=6 + 2=6 = À 1=3, which can be readily obtained via a counting argument on an ice-rule obeying triangle.Curves for J/J L > 1 correspond to a ferrimagnetic regime whose correlations become positive at low-T eff .Linear correlations are slightly stronger for slower annealings as generally expected in optimal an annealing process.
We analyze the transverse correlations hσσ 0 i ?, plotted in Figure 4a, c, which exhibit a nonmonotonic behavior for J/J L < 1.As the value of J/J L increases, the correlations become stronger at the same effective temperature due to the influence of the Σ qubits, whose coupling with σ qubits is J.For each curve, the temperature associated with the maximum transverse correlation decreases as J/J L → 1, indicating the transition to an isotropically correlated kagome ice state.In this state, the correlations are monotonic in temperature, with the maximum occurring at the ground state.
The obtained correlation at low T eff , hσσ 0 i ?≈0:14, agrees well with experimental findings for kagome systems discussed in refs.21,22 for a nanomagnetic, artificial kagome annealed via AC demagnetization.The observed correlations, consistently weaker than those of kagome ice, suggest the absence of symmetry breaking in a large system.The extraction of an order parameter reveals at most very small values of around 18% which are explainable with counting arguments for a finite system made of an odd number of lines.
Notably, the transverse correlations shown in Fig. 4a, c exhibit stronger values during faster annealings.While faster annealings demonstrate only slightly smaller collinear correlations, they display significantly larger transverse correlations.For example, in the case of J/JL = 0.9 at T eff = 1, the faster annealing exhibits an approximately 50% increase in transverse correlation compared to the slower annealing, while experiencing only a 7% decrease in collinear correlation.This suggests that the enhancement of transverse correlation during faster annealings arises primarily from quantum fluctuations rather than from a large number of domain walls.
In the presence of a small transverse field Γ, the system experiences quantum fluctuations, leading to increased transverse correlations.The second term of the Hamiltonian in Equation (1) can be considered as a perturbative term that breaks the degeneracy of the classical ground state manifold.Configurations with more unlocked qubits, which correspond to more paired domain walls, have lower total energy 23,24 .Therefore, the full quantum Hamiltonian including the transverse field Γ generates configurations that, with more paired domain walls and fewer locked Σ qubits, are energetically favored.Although Γ = 0 always at the end of an annealing, decreasing it faster can promote states that are energetically advantageous when Γ ≠ 0. This explains the larger transverse correlation observed during faster annealings.
A comparison between the hσσ 0 i ?and hσσ 0 i k panels of Figure 4 reveals their opposite evolution at low T eff .This can be attributed to the increased formation of domain walls, leading to a decrease in longitudinal correlation and an increase in transverse correlation through the mechanism of domain wall pairing described earlier.In the ferrimagnetic phase, hσσ 0 i k is equal to 1, while in the layered phase it is equal to − 1, due to the finite size of the sample.
Subtle variations in the annealing protocols can have significant effects on quantifying the degree of order, as quantum fluctuations play a crucial role in enhancing the spin-spin correlation represented by hσσ 0 i ? .The slow forward annealing protocol used so far anneals an open quantum system that drops out of thermal equilibrium as quantum fluctuations (Γ(s)) decrease, causing the system response time to exceed the annealing time.By introducing a pause-and-quench protocol, we can probe midanneal properties: pausing at an s * < 1 allows the system to approach thermal equilibrium of the open quantum system.After the pause, rapidly quenching s to 1 collapses the wavefunction, approximately reading out the system at s * .Note that a change of s * changes both Γ(s * ) and J ðs * Þ, when the goal is to change only Γ(s * ).To achieve that, the couplings J ij are multiplied by a prefactor α(s * ) such that the product α(s * )J(s * ) remains constant across the range of s * values to be probed.
Considering the terms proportional to Γ in the Hamiltonian of Eq. ( 1) as the interaction Hamiltonian, a perturbative treatment at first order shows that configurations with the maximum number of floppy qubits are energetically favored.In other words, a small Γ field causes a fine splitting of the degeneracy of the classical ground state into a number of energy levels, favoring configurations with a larger number of floppy spins.When Γ is annealed very slowly, this mechanism is less relevant although significant than in a swift annealing dynamics where it resembles more a quench on the Γ field.
Figure 5 displays the dependence of hσσ 0 i ?with the degree by which the system is affected by quantum fluctuations induced in the lattice according to a reverse annealing protocol.The figure plots hσσ 0 i k vs. Γ/J L , which represents a good quantifier for the amplitude of quantum fluctuations induced in the system.The plot clearly illustrates that the transverse correlation increases with the ratio of the transverse field strength Γ to the exchange coupling J L : Larger quantum fluctuations lead to stronger correlations.The curve reaches a maximum value, and for larger Γ, the transverse Ising models are known to undergo a transition to a quantum paramagnetic phase, typically occurring at Γ ≈ J, as shown in the figure.This behavior serves as a significant signature of observable quantum effects arising from quantum fluctuations, as it provides experimental evidence of the ability of quantum fluctuations to induce correlations among the chains that are absent otherwise.When considering the same number of defects, those located at both sides of a locked qubit exhibit a lower configuration energy compared to the defects situated in different regions of the parallel chains that involve two intermediate qubits (Fig. 3c).
In summary, we have conducted a series of experiments in a frustrated magnetic lattice to provide understanding of the ability of quantum fluctuations to enhance ordering.Based on fundamental entropic principles, we have demonstrated that groundstate modification by quantum-mechanical fluctuations may induce a domain-wall entropic-interaction mechanism that reduces the disorder in the perpendicular direction leading to nonmonotonic transverse correlations of antiferromangnetically aligned magnetic moments.We have demonstrated the continuous deformation of a kagome lattice to ferrimagnetic and layered structures, by engineering the ratio between coupling constants.This flexibility in controlling the degree of disorder within a single topology has enabled us to explore the nonmonotonic evolution of qubit-qubit ordering within an effective range of temperature, to find that a given amount of uncertainty in the form of fluctuations can promote ordering within the quantum system.Our findings shed light on the mechanisms and physical conditions leading to defect clustering under the action of quantum fluctuations, which can pave the way for improved quantum annealing systems.Furthermore, our experiments on entropic crystallization have revealed the remarkable influence of random fluctuations in inducing the emergence of ordered patterns in qubit systems, providing a solid foundation for future investigations into the manipulation and control of quantum states for improved computational performance.
Our embedding of nearly 2000 qubits in a superconducting quantum annealer to study large magnetic lattices demonstrates the effectiveness of quantum annealing in modeling the mechanisms that enhance correlations between quantum states.A detailed understanding of the physics of defect clustering can be obtained to guide the development of targeted strategies for minimizing errors and devising innovative approaches for interconnected qubit arrays.These findings may have implications for the design and organization of magnetic nanostructures as well as the engineering of quantum materials with desired properties.Correlations are probed at individual s values using a pause/quench protocol.To eliminate dependence of Γ(s)/J(s) on s over the region of measurement, a prefactor α(s) is introduced.Inset shows the flat energy range obtained by multiplying J ðsÞ by the set of α values shown in the black curve.As a function of Γ(s)/J, we see a peak in transverse correlations hσσ 0 i ? .This is consistent with a region of order between the perturbative regime (Γ ≪ J) and the paramagnetic regime (Γ ≫ J).
Fig. 1 |
Fig. 1 | The system under study in its three ground states.a The D-Wave quantum processing unit in use.(b) Unit cell of studied model composed of three nodes coupled in a triangle with two different couplers, J and J L .Panels (c-e) show the Lieb lattice with anisotropic couplings J (solid gray lines), and J L (dotted gray lines).Solid gray lines represent stronger couplings.Topological Lieb-to-kagome
Fig. 2 |
Fig. 2 | Structure factors of three ground states obtained from quantum annealing experiments.a Average magnetization per qubit 〈m〉 vs. J L /J obtained from quantum annealing for different values of J and therefore different effective temperatures, T eff .In the ground states |M| > 0 only for J L /J < 1.The various ground state symmetries are shown via the Fourier transform of the qubit-qubit correlation after annealing for J/J L = 1.5, 1.0, 0.75.b corresponds to the ferrimagnetic case, (c) to the kagome ice, and (d) the layered case.All phases obey the ice rule, as shown in plots of fraction of ice-rule obeying vertices vs. T eff for fast (e) and slow (f) annealings.
Fig. 3 |
Fig. 3 | Entropy-induced correlations of the domain walls.a A domain wall sitting on top of a Σ site fixes the qubit with an entropic cost that is lower in energy than in the case shown in (b).In (c), two σ-site domain walls are "paired" if both sit around the same Σ site.Pairing creates entropic advantage with respect to two domain walls away from each other by releasing a Σ floppy qubit.Dotted red lines point out the transverse correlation between the domain walls.Up and down magnetic moments are depicted in blue and red, respectively.
Fig. 4 |
Fig. 4 | Entropic transverse correlations.a The transverse correlation hσσ 0 i ? vs. T eff = 1/J L showing its nonmonotonic behavior.Curves obtained after annealing time of t f = 1μs.b Longitudinal correlation along the antiferromagnetic lines.c, d Same as in (a) and (b), respectively, for an annealing time of t f = 100 μs.
Fig. 5 |
Fig.5| s-Dependent transverse correlations.Correlations are probed at individual s values using a pause/quench protocol.To eliminate dependence of Γ(s)/J(s) on s over the region of measurement, a prefactor α(s) is introduced.Inset shows the flat energy range obtained by multiplying J ðsÞ by the set of α values shown in the black curve.As a function of Γ(s)/J, we see a peak in transverse correlations hσσ 0 i ? .This is consistent with a region of order between the perturbative regime (Γ ≪ J) and the paramagnetic regime (Γ ≫ J). | 6,378.4 | 2024-01-18T00:00:00.000 | [
"Physics"
] |
Influence of the incorporation of titanium dioxide nanofibers net on bond strength and morphology of a total etching adhesive system
Background The aim of this study was to evaluate the nanoleakage and microtensile bond strength (μTBS) of an ethanol based-adhesive containing Titanium dioxide (TiO2) nanofibers to dentin. Material and Methods TiO2 nanofiber was produced by electrospinning and it was inserted in an ethanol-based adhesive in 0.5, 1.5 and 2.5% by weight. The original adhesive did not receive nanofiber. The middle dentin was exposed by diamond saw under water-cooling and dentin was polished with wet 600-grit SiC abrasive paper. Resin composite build-ups were applied incrementally to the dentin after adhesive application. After storage in distilled water (24 hours/37°C) the teeth were sectioned perpendicularly to the bonded interface and sticks were obtained. Twenty-five sticks per group were tested by μTBS with a crosshead speed of 0.5mm/minute. The average values (MPa) obtained in each substrate were subjected to one-way ANOVA (α=0,05) with the tooth being considered the experimental unit. The nanoleakage pattern was observed in ten sticks per group and analyzed by Chi-square test (α=0,05). Results There was no difference in μTBS among the experimental groups. However, there was a statistically significant difference among 2.5 % nanofiber adhesive, 0.5 % nanofibers and control groups, (p=0,028) in relation to nanoleakage. Conclusions TiO2 nanofibers in 2.5% of weight inserted in dental adhesive reduced the nanoleakage, but did not improve the μTBS to dentin. Key words:Dentin-bonding agents, nanoleakage, tensile bond strength.
Introduction
The advances in dental materials and adhesive technology have enabled the dentists to make esthetic anterior restorations in a simple and economical way (1).Nowadays, adhesive systems are widely used in direct procedures as restoration of anterior and posterior cavities, fissure sealing, reattachment of fractured fragments, corrections in tooth morphology and in indirect procedures involving cementation of root-canal posts and indirect ceramic and composite crowns (2).Simplified etch-and-rinse adhesives have reduced clinical steps, but they have showed permeability of water from the oral environment and from the underlying bonded dentin (3)(4)(5)(6), leading to incompatibility issues (7)(8)(9), faster degradation of resin-dentin bonds and may not be as durable as was previously assumed (10,11).The loss of bond strength and adhesive quality has mainly been attributed to degradation of the hybrid layer at the dentin-adhesive interface and deterioration of the dentin collagen fibrils.Numerous publications have demonstrated the lack of bond stability (12)(13)(14)(15).Different laboratorial approaches have been proposed to improve monomer infiltration, reduce the rate of water sorption, reduce collagen degradation and qualify the adhesion.Effects of primer/adhesive placement agitation and drying time for five seconds on dentin have showed improvement of the shear bond strength to dentin (16).Another method to improve gear to qualify the adhesion is the use of a warm air-dry stream after primer application, because this technique reduces the nanoleakage (17).The technique can also be used to improve the mechanical and biological properties of universal adhesive systems (18,19).Thinking about biomaterials, pure TiO 2 nanofiber is being used in tissue engineering applications as polymeric scaffolds, to drive cell differentiation and create an osteogenic environment without the use of exogenous factors (20).The nanofibers that are fabricated by an electrospinning method show excellent antimicrobial activity against gram-negative Escherichia coli and gram-positive Staphylococcus aureus (21).Its antibacterial potential makes titanium dioxide an interesting choice to be incorporated into adhesive systems, specially that in the total etch approach, because they remove the smear layer completely and expose the collagen fibers, but it is not known if the titanium dioxide incorporation would influence the adhesive's properties.TiO2 nanofibers structure are successfully prepared by electrospinning technique followed by calcination process and should act as cross-linking agents in adhesive systems, also, this nanoparticles are increasingly being used in pharmaceutical, medical and cosmetic products (22).The TEM and XRD analyses show that TiO 2 has uniform diameter of around 200 nm, and their length to width aspect ratio ranged between 5 and 15 (23).
The aim of this study was to evaluate the incorporation of TiO2 nanofiber in an ethanol-based adhesive by means of microtensile bond strength (μTBS) and nanoleakage.The hypothesis is that the performance of a total etch adhesive system with and without titanium dioxide nanofibers will be similar.
-Experimental design
It was an in vitro study involving the incorporation of titanium dioxide nanofibers in an ethanol-based total etch adhesive systems (in four levels).Bond strength and nanoleakage were the main response variables for comparison purposes.
-Specimen preparation Twenty extracted, caries-free human third molars were used in this study.The teeth were disinfected in 0.5% chloramine, stored in distilled water and used within 6 months after extraction.A flat dentin surface was exposed after wet grinding the occlusal enamel on a # 180 grit SiC paper.The exposed dentin surfaces were further polished on wet #600-grit silicon-carbide paper for 60 seconds to standardize the smear layer.One solvent-based, etch-and-rinse adhesive systems without inorganic particles were tested: Ambar ® (FGM, Joinvile, SC, Brazil).All the inorganic nanofibers were prepared by sol-gel processing and electrospinning technique using a viscous solution (chemistry department, Federal University of Rio Grande do Sul), and it has an antimicrobial function as well.Each nanofiber has about 200 to 300 nanometers of diameter.The adhesive received a nanofiber insert by weight, at 0.5, 1.5 and 2.5%, it is forming three groups respectively (G1, G2 and G3).The control group was the original bonding agent -G4.After acid etching, the surfaces were rinsed with distilled water for 15 s and air-dried for 15 s.The surfaces were then rewetted with water (24).Two coats of adhesive were gently applied for 10 s.After each coat, the solvent was evaporated (distance: 20cm) to performed it function.The adhesive were light-cured for the respective recommended time using a LED 1200 mw/cm2 (Radii cal, SDI, Australia).Resin composite build-ups (Z350 XT, Shade A2 Body, 3M ESPE, St. Paul, MN, USA.) were constructed on the bonded surfaces in 3 increments of 1 mm each that were individually light-cured for 20 seconds with the same light intensity.All the bonding procedures were carried out by a single operator at a room temperature of 20°C and constant relative humidity.All the teeth were stored in distilled water at 37°C for 24 h.
-Bond strength test The restored teeth were longitudinally sectioned perpendicularly to the bonded interfaces with a diamond saw (Isomet 1000, Buehler, Ilinois, USA) to obtain the sticks.Specimens areas were measured with a digital cali-per and recorded (Absolute Digimatic, Mitutoyo, Tokyo, Japan).Each stick had a cross-sectional area of approximately 0.96 mm 2 .Half of the specimens, from each tooth, were tested in microtensile testing, and they were randomly assigned.The other half was analyzed by SEM, in relation to nanoleakage.Each bonded stick was attached to a device for microtensile bond strength test (μTBS), with cyanoacrylate resin (Zapit, Dental Ventures of North America, Corona, CA, USA) and subjected to a tensile force in a Microtensile tester (Bisco, USA) at a crosshead speed of 0.5 mm/min (n=25).The mean BS for every testing group was expressed as the average of the five teeth used per group.The micro tensile bond strength data was subjected to one-way analysis of variance, with a significant level set at 0.05.
-Nanoleakage evaluation This study utilized ten sticks from each group for nanoleakage evaluation (n=40).None of the sticks evaluated by nanoleakage were tested by μTBS.The sticks were coated with two layers of fast-setting nail varnish applied up to within 1 mm of the bonded interfaces.The specimens were re-hydrated in distilled water for 10 min prior to immersion in the tracer solution.Conventional silver nitrate (Sigma Chemical Co.St. Louis.MO; USA) was prepared in a 50 wt% silver nitrate solution (pH=4.2) The sticks were placed in the conventional silver nitrate in darkness for 24 h, rinsed thoroughly in distilled water, and immersed in photo developing solution for 8 h under a fluorescent light to reduce silver ions into metallic silver grains within voids along the bonded interface.All these sticks were wet-polished with 600-grit SiC paper to remove the nail varnish.Then, the specimens were placed inside an acrylic ring, which was attached to a double-sided adhesive tape, and embedded in epoxy resin.After the epoxy resin set, the thickness of the embedded specimens was reduced to approximately half by grinding with silicon carbide papers under running water.Specimens were polished with a 600, 1000, 2000 and 2400 grit SiC paper and 6, 3, 1 and 0.25 mm diamond paste (Buehler Ltd., Lake Bluff, IL, USA) using a polish cloth.They were ultrasonically cleaned, silica dried, mounted on stubs, and coated with carbon-gold (MED 010, Balzers Union, Balzers, Liechtenstein).Resin-dentin interfaces were analyzed in a field-emission scanning electron microscope operated in the backscattered electron mode (JSM 5800, JEOL, Tokyo, Japan).Presence or absence of nanoleakage was observed, and this data were analyzed by Chi-square test, with a significant level set at 0.05.
-Bond Strength Test (µTBS)
The range of cross-sectional area was 0.96mm2 for all sticks.The different values of original adhesive (G4) and modified adhesives (Titanium dioxide nanofiber in-sert by weight) G1, G2, G3 respectively are summarized in Table 1.One-way ANOVA revealed that there were no statistically significant differences among the groups (p=0,607).
Group
MPa (SD) 0.5% Ambar (G1) Representative SEM backscattered images of the resin tags (Fig. 1) and resin-dentin interfaces (Fig. 2) of all groups were obtained.In general, the amount of silver nitrate decreased with the increased incorporation of titanium dioxide nanofibers (Fig. 2, white areas).The higher silver nitrate depositon was observed at control and 0.5% nanofibers groups while less nanoleakage was observed at 1.5 and 2.5% of fibers.
Regarding the resin-tags, Figure 1 shows higher amount of crosslink of nanofiber with tags in dentin as the amount of titanium dioxide increased.As observed for nanoleakage, groups 1.5 and 2.5% of titanium dioxide showed higher cross-linking nanofibers than that of 0.5% group.The control group (without nanofiber incorporation) did not show nanofibers within the resin tags.e718
Discussion
In the current study, the microtensile bond strength of adhesive systems was compared using different amount of nanofibers in adhesive without filled, manufactured by FGM (FGM, Joinvile, SC, Brazil).We believe that the microtensile testing and nanoleakage analysis are convenient methods for screening and testing the strength and quality of adhesive interfaces in vitro (2,9).The higher performance of this kind of actual adhesives has been previously reported and they are used to make esthetic anterior restorations in all patients (1).The aim of this study was trying to improve the adhesion in dentin, based on quality in adhesion and if its possible, more strength to hybrid layer and more stability (4,5).
The adhesive used in the current study was intentionally chosen because it is not filled with inorganical particles.
Although the incorporation of titanium dioxide nanofibers did not improve the bond strength of Ambar, it did not decrease its performance.The hypothesis must be accepted in terms of bond strength measurement.However, the application of the nanofibers in adhesive system significantly affected the quality of hybrid layer in relation to nanoleakage.The incorporation of titanium dioxide nanofibers at 1.5 and 2.5% levels resulted in the lowest levels of nanoleakage of the adhesive system when compared to the original bonding agent without nanofibers.For this reason, the hypothesis must be accepted, since differences were observed among the groups.It was also noted the presence of nanofibers with crosslink with tags at the same groups, and the collagen probably.If this really occurs, it is important to analyses the durability of bonding in these adhesive compositions, because the crosslink is a form to stabilize the composite adhesive and the TiO 2 nanofibers are insoluble in water, and this fact can also contribute to reduce the nanoleakage (9,10).The percentage of silver nitrate penetration in interfacial space was significantly higher in the specimens without nanofibers.It is possible to believe that the nanofibers were inserted in chemical crosslink polymer in an ideal amount, a reduction of the composite resin shrinkage would occur (25).This current study verified that the resin-dentin bond strength values were similar for those different amounts of nanofibers.This finding could be attributed to the fact that the same material was applied to build the nanofiber.
The high performance of the hybrid layer formed without nanoleakage in almost all specimens was probably because the nanofiber net reduced the polymerization stress, polymerization shrinkage and elastic modulus (26).According to Ozel and Soyman (25), composites that received fiber nets showed significantly lower microleakage and stress decrease, and this study showed this result using a qualitative analysis by nanoleakage.The tags penetration was not influenced by nanofibers in every test groups, view by scanning electronic micros-copy.Unfortunately, this issue has not yet been evaluated in other adhesives ethanol and alcohol based and should be further investigated.We believed that this paper is the first step to open the doors for titanium dioxide nanofibers in adhesive systems; however, clinical trials must be conducted to verify these results in a clinical situation.
Conclusions
Within the limitation of this study, it was possible to conclude that titanium dioxide nanofibers in 2,5% of weight inserted in a dental bonding agent reduced the nanoleakage within the hybrid layer, and did not improve the μTBS of composite restorations. e717
Fig. 2 :
Fig. 2:Representative backscattered SEM images of the resin-dentin interfaces of a total etching adhesive system with different percentages of titanium dioxide nanofibers (A-0%; B-0.5%; C-1.5%; D-2.5%).The silver nitrate deposites inside the adhesive layer (in white) decreases as the amount of titanium dioxide increases from A to D.
Table 1 :
Bond strength, in MPa (standard deviations) of experimental groups.
Table 2 :
Number (percentages %) of samples with (YES) and without (NO) nanoleakage after Chi-square test. | 3,123.4 | 2023-09-01T00:00:00.000 | [
"Materials Science"
] |
A Novel Adaptive Sensor Fault Estimation Algorithm in Robust Fault Diagnosis
The paper deals with a robust sensor fault estimation by proposing a novel algorithm capable of reconstructing faults occurring in the system. The provided approach relies on calculating the fault estimation adaptively in every discrete time instance. The approach is developed for the systems influenced by unknown measurement and process disturbance. Such an issue has been handled with applying the commonly known H∞ approach. The novelty of the proposed algorithm consists of eliminating a difference between consecutive samples of the fault in an estimation error. This results in a easier way of designing the robust estimator by simplification of the linear matrix inequalities. The final part of the paper is devoted to an illustrative example with implementation to a laboratory two-rotor aerodynamical system.
Introduction
In the developing world, the expansion toward Industry 4.0 results in an increase in the components of the system. Any industrial system cannot be imagined without measuring devices. Especially these days, measured information is analyzed in many ways to achieve an optimal performance of the entire technological process. Thereby, industries increase the number of sensors to have the information of the process as accurate as possible. However, by increasing the pool of sensors, it increases a chance of fault occurrence in some of them.
The developments concerning fault diagnosis (FD) have received a significant scientific attention over the last decades, and a large pool of reliable FD strategies is available [1][2][3][4]. However, while screening these fundamental works on FD, one can observe that initially, the research was focused on fault detection and isolation (FDI). This situation was completely changed along with the number of works devoted toward fault-tolerant control (FTC) [5][6][7]. Indeed, fault estimation or identification constitutes the crucial element of all active FTC schemes FTC [8,9]. This simply means that the FTC performance depends on the knowledge about the faults, which is provided along with the fault estimation. There are several approaches devoted to either sensor or actuator fault estimation [10][11][12]. The main development trends of fault estimation are oriented toward dedicated observer-based approaches [13][14][15]. Their appealing property is that they can realize both FDI and fault estimation simultaneously. Note also that the problem of simultaneous sensor and actuator estimation has received considerable attention as well [16][17][18][19][20]. However, the design strategies are usually realized by a simple extension of the actuator/sensor fault estimation schemes. Moreover, a recent literature review clearly indicated the trends for settling fault estimation for nonlinear systems. Indeed, in [21][22][23], the authors transformed a nonlinear Lipschitz system into a linear parameter-varying (LPV) one with the so-called reoriented Lipschitz strategy. Such an approach enables optimal H ∞ fault estimates in the limited frequency range. An alternative approach to Lipschitz systems was described in [24]. In the proposed strategy, the system is split into two subsystems. Each subsystem is affected by either an actuator or a sensor fault. This results in a tandem of separated sliding mode observers (SMOs). The main issue of the approach is that the resulting fault estimation error converges asymptotically to zero without taking inevitable disturbances into account. This unappealing property is eliminated in [25,26] by taking into account bounded disturbances. As an alternative to Lipschitz strategies, one can use LPV or Takagi-Sugeno fuzzy ones. A representative example of such strategies is provided in [27,28]. This paper proposes the so-called adaptive fuzzy estimator while considering predefined fault scenarios: drift, bias, and loss of accuracy along with loss of effectiveness. The main drawback of this approach is that it is not robust to disturbances, which may significantly impair its performance. Another interesting Takagi-Sugeno approach is proposed in [29] for switching nonlinearities. However, it inherits the same drawback with respect to the lack of robustness. Subsequently, the work [30] proposes a new Takagi-Sugeno multiple integral unknown input observer, which decouples disturbances. The final group of strategies deals with polynomial LPV and Takagi-Sugeno systems [31,32]. The approach reduces to converting the system into the so-called augmented-state form. To summarize, the dominating number of fault estimators is based on the following general frameworks: • Adaptive estimators [33,34]. • High-gain and sliding-mode observers [26,35,36]. • Kalman filter-based [37,38]. • Virtual diagnostic estimators [39]. • Proportional-integral (PI) observers [16].
Nevertheless, a typical way of obtaining the sensor faults estimates is that it is developed with a set of observers [40,41]. Each observer uses all but one sensor, and estimates the missing sensor readings. Those estimates are then compared with real sensor measurements and, as a result, the sensor fault is obtained. An obvious drawback of this scheme is that it is assumed that, within a given time interval, only one sensor reading is impaired by a fault. The other approaches of course allow for estimating all sensor faults at the same time, although, the unwelcome rate of change of the sensor fault factor is to be additionally minimized, which also influences the design involving such an approach, due to the fact that this additional factor needs to be taken into account during the optimization process while designing the gain matrices. To sort out such an issue, a novel structure of the observer is proposed. It merges two previous approaches proposed by the authors, namely the direct estimation strategy, where the sensor fault is estimated based on the output equation with the adaptive approach. However, the proposed algorithm minimizes the rate of change of the fault factor, which is its main advantage. Additionally, the investigated approach is capable of handling process and measurement uncertainties by an application of H ∞ theory.
The paper is organized as follows: Section 2 introduces the problem and provides a set of necessary preliminaries. In Section 3, a novel sensor fault estimation scheme is proposed along with the stability analysis. Section 4 provides an illustrative example dealing with a laboratory multi-tank system. Finally, Section 5 concludes the paper.
Preliminaries
Let us start by defining the possibly faulty system given by where k stands for a discrete time instance and x k = [x 1 , x 2 , . . . , x n ] ∈ R n , u k = [u 1 , u 2 , . . . , u r ] ∈ R r , y k = [y 1 , y 2 , . . . , x m ] ∈ R m , stand for the state, control input and measured output vectors, respectively. Moreover, f k = f 1 , f 2 , . . . , f n y ∈ R n y is the fault vector which affects the measured output, and for such a reason it will be referred to as a sensor fault. Moreover, w 1,k = w 1,1 , w 1,2 , . . . , w 1,q 1 ∈ R q 1 and w 2,k = w 2,1 , w 2,2 , . . . , w 2,q 2 ∈ R q 2 are unknown exogenous process measurement uncertainty vectors, respectively. Note that both w 1,k as well as w 2,k belong to l 2 class, and hence, they obey meaning that they have a finite energy. Moreover, a matrix f denotes a sensor fault distribution one. In the other words, it describes the way the sensor fault vector influences the system measurements. Furthermore, h(x k ) : X → X represents a non-linear function with respect to the state. It is assumed that h(x k ) is of the Lipschitz form, i.e., Evidently, a way of dealing with such Lipschitz nonlinearities is described, e.g., in [42,43].
The problem stated in this paper is related to reconstructing sensor faults. The sensor faults are as important as the other ones, such as, for example, actuator faults, due to the fact that it is hard for any system without measurements. Of course, there are plenty of effective ways and approaches capable of estimating it; however, adaptive observers presented in the literature (see [42,44,45]) are burdened with an unwelcome rate of change of the sensor fault factor. Approaching the problem in such a way determines the minimization of that additional factor in the design process, which additionally might worsen the final quality of estimation and as a consequence, a quality of control and final product. The issue stated in this paper concerns the problem of estimating the state and sensor fault for which the rate of change factor is eliminated by developing a suitable structure of the observer. In the subsequent section, an observer design procedure for handling such a problem is provided.
Fault Estimation
To settle an issue formulated in the former section, let us propose an observer of the following structure: where K x and K s are designated state and sensor fault estimation matrices. Moreover, Z stands for a pseudo-inversion of the sensor fault distribution matrix f such that Z = ( f ) † , Z f = I. The structure of the observer stands for a combination of an estimator enabling the direct fault estimation [46] with the one able to estimate the fault adaptively [42]. Such an algorithm inherits the advantages of both these both approaches.
To realize the design procedure, let us start with calculating the state estimation error as with e s,k indicating a sensor fault estimation error. Before proceeding to the sensor fault estimation error, let us transform the output Equation (2) in such a way as to obtain the sensor fault formula: and even more specifically Thereby, taking into account the sensor fault (11) and its estimate (7), the dynamics of the sensor fault estimation error can be presented as Then, for further analysis, let the sensor fault estimation error be shifted in time by one sample, resulting in which is actually equal to (8), an evolution of the sensor fault with the signal at time k + 1.
Having such estimation errors for both state and sensor faults, let us compose a vector incorporating both of them. However, before doing this, let us introduce the following lemma: For h(·), the following statements are equivalent: For all i, j = 1, . . . , n, there exist functions g i,j : X × X − → R and constants γ g i,j andγ g i,j such that, for each X, and as well as a scalar function g i,j given by where c i stands for the i-th column of the n-th order identity matrix, while X Y i is defined by Thus, the compact error vector takes the form In this point, it can be easily noticed the difference of the proposed approach compared to the classical adaptive observer. The vectord k in (22) does not contain the rate of change of the fault, unlike the strategy proposed in ref. [42]. This simply shows the advantages of such an approach due to the fact that the unwelcome rate factor does not need to be taken into account in the optimization process.
Having in mind the above deliberations, an H ∞ observer design procedure can be handled by shaping the following theorem: Theorem 1. Assume that the nonlinear system satisfies condition 1 of Lemma 1. Thus, it can be approximated by (20). Then the design problem of the observer (6)-(8) for the system given by (1)-(2) is solvable for a prescribed attenuation level µ > 0 of the state and estimation error (20), if there exist matrices P 0 and N such that the following condition is met: Proof. The problem of designing the H ∞ estimator reduces to find the N, U and P matrices by solving (24) such that It is known that for the sake of the H ∞ design, a Lyapunov function is equivalent to where: ∆V k = V k+1 − V k , V k =ẽ T k Pẽ k and P 0, which is sufficient to solve that problem. It is evident that ford k = 0, inequality (27) boils down to with ∆V k = V k+1 − V k , and then leads to (25). Thus, by employing (20), it is easy to show that Then, by establishing a new temporary variable inequality (29) may be reconstructed into the following shape: or alternatively into another form: Afterwards, it leads to and employing this set up into (33) leads to (24), which results in the proof being completed.
Finally, the design procedure boils down to solving the set of LMIs (24), and then calculating the gain matrices for the estimator from Thus, the arrangement of the entire methodology can be split up into off-line as well as on-line parts and summarized as the following algorithm: I. Off-line stage: Step 1: Find a feasible solution to the problem (24) for obtaining P and N. If there is at least one feasible solution then go to Step 2 else STOP; Step 2: Calculate the gain matrices K x and K s of the observer with (36); Step 3: Set time k = 1.
The main limitations of the proposed approach are caused by the assumptions, which means that firstly, the uncertainty vectors should belong to the l 2 class; hence, it means they have a finite energy. The other limitation is that the nonlinear function is to be Lipschitz. It causes that the nonlinear system should satisfy the condition 1 of Lemma 1. The fundamental constraint is that the pair(A, C) and especially pair(Ã,C) should be observable.
The objective of the subsequent section is to provide an exemplary results containing the state and sensor fault estimation.
Case Study
For the sake of validation purposes, the proposed algorithm was applied to the MT (multi-tank system) system provided by Inteco Ltd., Kraków, Poland [48] (see Figure 1). It is worth noticing that the system being under the study is fully computer-based controlled, which simplifies the control, identification and estimation strategies being investigated. It is configured in such a way that it consists of three different tanks, which are placed vertically one above the other. It is supplied with a fully controlled flow water pump, which supplies water to the upper tank. The water then flows out to the middle one and finally to the lower one. After that, it flows down to the reservoir. Those tanks are interconnected to each other with fully controlled solenoid valves, and their adjustment is done with a PWM (Pulse Width Modulation) signal. The PWM signal is also used to control the pump. The variables measured in that system are the water levels of the respective tanks. These measurements are based on the water pressure sensors, and the water level in [m] is provided to the user. During the experiments, the system was operated in a pump-valve mode which relies on controlling the system by both pump and valves, which is the most interesting and the most difficult mode. It allows stabilizing the water level in each tank, and every desired level in each tank is allowed to be achieved. Thus, the control input is given by where u p,k stands for the control signal for the pump, whilst u v 1 ,k , u v 2 ,k and u v 3 ,k denote the suitable signals for the drain solenoid valve of the top, middle and lower tanks, respectively, at time k. Moreover, the system's behavior is described by the following state vector T , where x 1,k , x 2,k and x 3,k signify a suitable liquid water of the first, second and third tank, respectively. The distribution matrices of the process and measurement uncertainties were achieved with series of tests. This estimated inaccuracies, and specifically their matrices were established as follows: The sensor fault distribution matrix f is set with components equal to either ones and zeros in which one denotes that the fault acts onto an appropriate sensor reading whilst zero signifies an opposite position. As a consequence, it was set as follows: The evaluation was accomplished in such a way that the sensor in the third tank is not taken into account as a possibly faulty one, why is why the level sensor in the bottom tank is considered as never existing, which additionally makes the entire estimation process harder to realize. It entails the output matrix given by It can be summarized that two out of three liquid levels, namely the level in the top tank as well as the level in the middle tank, are measured, and they all are impaired by the faults.
Such a configuration of the system allows defining a scenario for investigating the fault estimation process, which is given as follows: An examination of the fault estimation approach in four manners is allowed in such a scenario. Firstly, there is a temporary, abrupt fault being biased to the real state. The readings show the value as 5 cm less than it really is in the tank. The second one is the fault in which the measurements in the second tank are impaired by an abrupt fault, in which the values read from the sensor are by 2 cm higher than they really are. Then, the sensor readings suddenly run into stuck in place fault. This means that the readings from this sensor are still very the same irrespective of the actual water level. Specifically, the value read from this sensor is always 0 in the specified period of time. The other aspect is that those two above mentioned faults are partly in the same time span, which additionally might include difficulties during the estimation process. Finally, the fourth aspect is that the sensor in the bottom tank is not taken into account, as it was already mentioned, due to a missing level sensor in the third tank which renders the estimation more difficult in detection.
It is worth noticing that the achieved results by solving the LMI (24) gave the following gain matrices: For further analysis, it should be emphasized that the experiment was carried out in an open loop with control signals set as the ones provided in the Figure 2. It means that the water pump performed with 50% efficiency throughout the whole time span of the experiment, while the solenoid valves were changing in some sinusoidal ways. It can be easily noticed that the state estimates follow the real states with very high accuracy, despite the quite big noise present in the measurements. The state estimates converged to the real state very quickly in all three cases. Only the third state was biased a bit in the initial phase, but within acceptable limits, although that situation is caused by the fact that this specific state was immeasurable. Taking such a fact into account, the state estimation in that case when it was immeasurable can be perceived as being very proper. In these figures, docked windows show zooms of a specific part of time in which the obtained results can be seen more precisely.
Discussion
Another important and interesting thing is the sensor fault and especially its estimate. These signals are presented in Figures 6 and 7, where blue dashed lines represent the real faults and red solid lines stand for the estimates of the faults. It should be emphasized that the real faults are plotted only for demonstration purposes, and their estimates are obtained without any knowledge of their shape and magnitude. They were achieved just on the basis of the model of the system and the structure of the estimator. It can be easily noticed that the sensor faults were estimated with quite good precision. It is obvious that the fault estimates are impaired with some noise, which means that the estimates follow the real ones by oscillating around the specific value with some relatively small amplitude. It might seem that the inaccuracy of the fault estimate of the sensor placed in the top tank is bigger then the one placed in the middle; however, although there is a slight difference, it is not big and could be perceived as a good quality. Although they have been reconstructed precisely regardless of if there was an abrupt fault or stack in place one when the sensor reading was at a constant value, the estimator is capable of reconstructing them appropriately. The docked windows show zooms of a specified time span in which the precision as well as fast convergence can be observed. It can be noticed that the fault estimate very precisely and quickly reacts to changes related to the real faults.
The obtained results for the state as well as sensor faults indicate a very good estimation quality. The achieved results confirm the performance of the proposed approach.
Conclusions
The paper dealt with the problem of simultaneous state and sensor fault estimation. The investigated problem is rather very common; however, in this paper, a proposed solution in the form of the adaptive observer is slightly different from those presented in literature. It actually combines two approaches, the direct fault estimation which relies on achieving sensor fault estimation directly from the output equation, and the classical adaptive observer. It particularly means that the features of both was received. Firstly, the sensor fault equation was achieved from the output, and based on this, the observer was constructed. The proposed observer also contains the correction part, which additionally stabilizes the estimator contrary to the direct estimator. However, an unwelcome factor of the rate of change of the sensor fault was vanished, comparing to the classical adaptive way of estimating this kind of fault. Moreover, the proposed approach was provided for the class of non-linear systems, and it also can handle the exogenous uncertainties influencing the system. To solve such a problem, a H ∞ approach was utilized. The verification of the algorithm was made by implementation to the laboratory multi-tank system. The obtained results clearly confirm the efficiency of the proposed approach. The future works will focus on integrating the proposed approach with the one capable of actuator fault estimation, which will result in the simultaneous estimation of the actuator and sensor faults. Moreover, employing the proposed approach to the FTC scheme is going to be developed. Furthermore, the authors will focus on combining the proposed approach with a suitable ILC scheme.
Conflicts of Interest:
The authors declare no conflict of interest.
Abbreviations
The following abbreviations are used in this manuscript: Symbols k discrete time n number of states m number of outputs r number of inputs n y number of sensor faults q 1 number of disturbance inputs q 2 number of noise inputs x k state of the system x k state estimate u k control input y k output of the system w 1,k process uncertainty vector w 2,k measurement uncertainty vector h(x k ) nonlinear function with respect to x k f k sensor fault f k sensor fault estimatê y k output estimate e k state estimation error e s,k sensor fault estimation error e k extended estimation error vector composed of e k and e s,k d k extended uncertainty vector composed of w 1,k , w 2,k and w 2,k+1 A, B, C system matrices f fault distribution matrix W 1 , W 2 process and measurement uncertainties matrices K x , K s state and sensor fault gain matrices P ( )0 (semi-) positive definite matrix FTC fault-tolerant control LMI linear matrix inequality LPV linear parameter varying MT multi-tank PI proportional-integral PWM pulse width modulation | 5,453 | 2022-12-01T00:00:00.000 | [
"Engineering",
"Mathematics"
] |
Carcinogenesis and teratogenesis may have common mechanisms.
VAINIO H. Carcinogenesisand teratogenesis may have common mechanisms. Scand J Work Environ Health 1989;15:13-17. The specificmechanisms of carcinogenesis and teratogenesis are poorly under stood. There are, however,some known or potential common mechanisms,such as geneor chromosome mutations, interference withgeneexpression, alteredmembrane properties,or alteredintracellular homeosta sis. Carcinogenesis isgenerallyregarded as a multistageprocess,and a carcinogencan act at one or several stages. Agentsactingin the earlystagesof the neoplasticprocessare DNA-reactive, mutageniccompounds whichenable cellsto be transformed to malignancy. These agents can also, if acting during critical peri ods of ontogenesis,induce abnormal developmentof the embryo. Agents which block gap junctional in tercellular communication may act both as tumor-promoting agents and as teratogens in the developing embryo. Hormones are essential in the control of development and differentiation. Modulation of the intracellular hormone receptors may lead to changesin homeostasiswith abnormal cellularproliferation and development as a consequence.
Most forms of cancer are brought about by a multistage process involving initiation, promotion, and progression. Most so-called "complete" carcinogens act as initiators bringing about the first irreversible step in carcinogenesis. This complex process involves several mechanisms, some of which are still not well understood (I, 2).
Without it being suggested that teratogenesis is the result of a single or simple errant mechanism, there is little evidence that teratogenesis involves the same multistage mechanistic processes that occur in carcinogenesis. Nevertheless, the initiation and promotion stages of carcinogenesis may have the same molecular bases as teratogenesis (3). The fact that gene regulation is crucial for the control of cell differentiation and development is generally accepted. It is known that interference in the process of normal differentiation and development can lead to abnormal growth, malformation, and neoplasm. Thus, at the level of cellular differentiation, carcinogenesis and teratogenesis may be biologically linked.
Carcinogenesis as a complex biological phenomenon
A carcinogen may act during one or several of the stages in the neoplastic process. Especially pertinent to cancer is the activation, by carcinogens, of a certain class of genes, called oncogenes, that appear to I control cellular growth. Mutation or gross deoxyribonucleic acid (DNA) rearrangement of these genes, such as chromosomal translocation or gene amplification, may result in the loss of their capacity to control cell replication. The conversion of oncogenes, which is triggered by a variety of factors ("carcinogens"), could be the final common step in the biological mechanism of carcinogenesis. Activation of an oncogene (such as a ras protooncogene) is probably an early event in tumorigenesis and may even be the "initiation" event in some cases. Oncogenes can be roughly classified into two groups -those functioning in the nuclei and those functioning in the cytoplasm, including the cell membrane (4). And it may be that one of each group must be activated for carcinogenesis to occur. Some of the proteins that are encoded by oncogenes are located in nuclei (eg, those encoded by myc oncogenes), and others are located outside the nuclei (eg, those encoded by ras oncogenes) (5). P21 protein, which is encoded by cellular ras genes, is the best known oncogenic gene product. Normal P21 has both guanosine 5' -triphosphate-binding (GTP-binding) and GTPase activity, but carcinogen-activated P21 has greatly reduced GTPase activity. Amino acid substitutions at positions 12 and 61 reduce GTPase activity but have no effect on GTPbinding. The significance of GTPase activity lies in the analogy with the "G proteins" which, like P21, have GTP-binding capacity and can hydrolyse GTP to guanosine 5'-diphosphate (GOP) but, in addition, are known to function as intracellular transducers of growth regulatory signals from cell surface receptors.
Once cells in certain parts of the body are released from normal growth control, they begin to proliferate, overwhelm normal cells, and invade and distort 13 normal tissue. Evidence obtained from several studies on experimental animals suggests that activation of ras protooncogenes is an early event in this process, although it is not sufficient to give full tumorigenic properties to the cells which harbor them. Activated ras genes have been detected in many benign (in situ) tumors, including mouse skin papillomas and liver adenomas, and in a large variety of human tumors (6). On the average, however, activated oncogenes have been detected in only 15-20 % of human malignancies.
Yamasaki et al (7) have shown that the c-Ha-ras oncogene can be activated transplacentally in mice through a specific point mutation caused by a chemical carcinogen [7, I2-dimethylbenz[a]anthracene (DMBA)], but the cell containing this mutation may remain dormant until it encounters (postnatally) a tumor-promoting stimulus. Thus, a single transplacental dose of DMBA can induce a mutation (A to T transversion) at the 6Ist codon of the cellular Ha-ras gene, which remains dormant until the cells are promoted (eg, by skin painting with a phorbol ester). Cells "initiated" in this way can remain silent for a long time, and therefore a long-term memory of initiation on mouse skin must exist.
The identification and molecular characterization of human oncogenes have validated the old hypothesis that cancer is a genetic disease that is initiated by the occurrence of somatic mutations. However, it has also become evident that cancer development is a multistep process requiring the activation of many growth-controlling genes. Somatic hybridization of normal cells with malignant ones shows that the normal genome may enable tumor cells to respond to appropriate growth-controlling stimuli in vivo. Genes that can inhibit expression of the tumorigenic phenotype have been called "tumor suppressor genes" or "antioncogenes" (8,9).
Biological basis for abnormal embryogenesis
The term "reproductive toxicant" covers a mixed group of agents which interfere with the ability of males and females to reproduce. A "developmental toxicant" can produce adverse effects on the developing conceptus after exposure during pregnancy. These effects may be manifested in the embryonic or fetal periods or postnatally.
One manifestation of developmental toxicity is structural malformations in the conceptus as a consequence of exposure of the mother during pregnancy. Teratogenesis is thus a process of abnormal embryonic development. The definition of teratogenesis is often restricted to structural abnormalities; thus nonstructural deficits such as behavioral deficits, mental and physical growth retardation, immunological compromise, and increased susceptibility to carcinogenesis are, inappropriately, excluded. Spontaneous abortions, premature delivery, and stillbirth are additional 14 manifestations of embryotoxicity which may share common mechanisms with teratogenesis.
The processes of teratogenesis have been discussed by Wilson (10), who outlined the early events that could contribute to abnormal embryonic development and teratogenesis. Mutation is probably the most firmly established mechanism of teratogenesis. It has been estimated that 20-30 % of human developmental errors can be attributed to mutation in a prior germ line. When somatic mutations occur in an early embryo, a demonstrable structural or functional defect may be produced. Other pivotal effects include cell death and altered membrane characteristics and the processes that depend on them, such as tissue induction, cell recognition, responsiveness to hormonal stimuli, and formation of intercellular junctions.
Intercellular communication has been an integral consideration in most theories of embryogenesis. It has been reasoned that the cells of an embryo must be able to communicate with each other in order to define tissue specificity and pattern formation and to coordinate morphogenetic events. In addition neighboring tissues have long been known to exert inductive influences on each other and therefore require some sort of transmission of signals.
Carcinogenesis and teratogenesis may have common mechanisms
Associations between teratogenesis and carcinogenesis have been observed at several levels. Because of the complexity of each stage, many mechanisms are probably involved. Some of the biological steps of potential importance in carcinogenesis, as well as in teratogenesis, are shown in figure 1.
Tumor initiation, mutagenesis, and teratogenesis DNA-reactive alkylating agents are usually also carcinogens. As chemical mutagens, they can induce permanent alterations of genetic information and can act both as initiators and as "complete" carcinogens. However, genotoxic agents are not necessarily mutagenic in normal in vitro assays. Reynolds et al (11) found that furan and furfural were not mutagenic in the Ames Salmonella assay (and would be classified as "nongenotoxic" on this basis), but that they caused liver tumors activating mutations in several codons of the ras gene of mice. Thus, although furan and furfural were not active in tests designed to test for mutagenicity, the authors considered that the mutations in tumor oncogenes were evidence that they are mutagenic. Agents that do not damage DNA can also cause karyotypic changes, which can result in chromosome mutations (polyploidy, aneuploidy, nondisjunction) that may lead to a deregulation of cellular growth.
Cancer initiation is an irreversible event involving mutagenesis, and mutagenesis in the germ line can contribute to genetic diseases. Furthermore, mutagenesis membrane, these substances may also activate cAMP reactions by binding to their receptors. Since increased intracellular levels of cAMP and cAMP-dependent protein kinase activity have been associated with recovery of junctional competence, it is possible that substances that activate the cAMP cascade may also influence junctional communication. Gap
Modulation of hormone receptors
Hormones are essential for controlling normal development and differentiation; for instance, estrogen is necessary for normal reproduction and the maintenance of pregnancy and, in both males and females, affects other hormones that regulate homeostasis and reproductive function. In cells, estrogen binds to and activates a cytoplasmic receptor, and this estrogen-estrogen receptor complex is translocated into the nucleus, where it acts like a transcriptional regulator (16).
2,3,7,8-Tetrachlorodibenzo-para-dioxin (TCD D) is a highly toxic environmental pollutant that can cause cancer in rodents; it is also teratogenic in rodents. It is not genotoxic, ie, not mutagenic, and it does not react with DNA. A striking feature of the toxicity of TCDD is its ability to deplete (atrophy) lymphoid tissues in general and thymus in particular, this deple-in somatic cells plays a role not only in carcinogenesis but, when occurring in the early embryo, also in teratogenesis (10).
Tumor promotion and teratogenesis
Some chemicals (known as tumor promotors) which have little or no genotoxic activity can drastically enhance the incidence of tumors, eg, on the backs of mice that have been treated previously (initiated) with an alkylating agent. (Such as DMBA, see reference 7.) Tumor promotion has been observed in many tissues in numerous animal model systems with high tissue and species specificity. However, very little is known about the mechanisms of tumor promotion.
One popular theory is based on the effects of promoters on intercellular communication. When mammalian cells come into contact with one another, small membrane channels, known as gap junctions, may form between them. These gap junctions permit the transfer of small molecules that may regulate cell proliferation and differentiation (12). Tumor promoters can interrupt intercellular communication between a variety of cell types, and this effect may have a mechanistic relationship with promotion.
Cells within a tissue communicate with each other not only by secreting molecules into the extracellular space, but also by passing signals directly through intercellular channels. Mechanistic studies suggest that cells exchange low-molecular-weight components (< 2000 D) via gap junctions and that this activity is regulated by phosphorylation dependent on adenosine 3,'5'-cyclic monophosphate (cAMP) (13). These intercellular signals presumably participate in the regulation of growth and differentiation so that their disturbance could represent a step in the process of carcinogenesis or teratogenesis.
Initiation of the carcinogenic process may begin with the production of a single tumor cell. This cell is probably in contact with a number of normal cells and would thus receive growth regulatory substances from them via gap junctions. As a result, the phenotypic expression of malignancy in the tumor cell would be suppressed. Tumor promoters might act by a disrupting gap junctional function, preventing phenotypic suppression and allowing the tumor cell to proliferate. Numerous types of evidence support this hypothesis, such as the observation that many tumor promoters inhibit intercellular communication (14).
Gap junctional communication also plays a very important role in the adaptation of multicellular organisms, as well as in various developmental processes, morphogenetic field determination, and tissue induction (15). Several teratogens that are not known to be alkylating or DNA-reactive agents, including mirex, various alkyl glycol ethers, chlorpromazine, and ethanol, can block intercellular communication. Direct interruption of junctional communication may thus be a mechanism of action for teratogens. At the cellular tion being manifested clinically as immune deficiency. (For a review, see reference 17.) TCDD has been shown to interact with specific, soluble binding sites in target tissues. The large interand intraspecies differences in both acute toxicity and teratogenesis can be attributed to events that occur after receptor binding (18). A major determining factor in TCDO toxicity seems to be the species-and tissuespecific control over a battery of enzymes, which are expressed or repressed after TCDD binds to its receptor. Recently, it has been suggested that the biological effects of TCDD are based on a modulation of the estrogen receptor (19). It has also been suggested that the resistance of some animal species (eg, hamster) to the toxicity of TCOO is due to species' ability to synthesize estrogen and thus increase its tissue concentrations during exposure.
Exogenous estrogens are also immunosuppressive and cause thymic involution. Other hormones and growth factors can modulate the effects of estrogen. For example, some breast cancers appear to be estrogen-dependent and can be arrested by antiestrogens (20,21).
Concluding remarks
Many carcinogens are also teratogcns, but many teratogcns are not carcinogens. Carcinogenesis is a multistage process, and a carcinogen can be active at one or more of these stages. The simplest operational model distinguishes initiation from promotion. Initiators induce genotypic changes, and promotion results in clonal expansion of the initiated cells. Initiating activity reflects the capacity of an agent to produce irreversible cellular changes and gives cells the potential to progress to tumors. This phenomenon involves changes in the growth-related genes (oncogenes). While a genetic change in the target cell is likely to represent an early required event in malignant transformation, other differentiation-related events may be important for tumor promotion.
In a given nssue, both in the developing embryo and in stationary tissue, cells are maintained in homeostasis by various forms of intercellular communication, gap junctions being the factor believed to play the most important role. Disturbance of this orderly, structured cell society may modulate cell proliferation and differentiation and lead either to the process of teratogenesis or to the process of malignant growth. Many of the tumor promotors that have been shown to be teratogenic inhibit gap junctional communication. Modulation of intracellular receptor proteins can lead to drastic changes in cellular homeostasis, which, under certain circumstances, can also lead to reproductive failure, abnormal growth, and cancer. These modulators need not be DNA-reactive substances, but they must be able to interact with receptors. It has been suggested that the effects of some nongenotoxic agents, such as TCDD, are mediated by the modulation of es-16 trogen-binding receptors and lead to both teratogenesis and carcinogenesis.
Thus agents that act at the genetic level as initiating agents, as well as agents with tumor-promoting activity, may also have teratogenic effects if the developing embryo is exposed at a critical moment. The mechanisms are still largely unknown, but recent advances in molecular biology have provided new tools for this research. | 3,495.8 | 1989-02-01T00:00:00.000 | [
"Biology",
"Medicine"
] |
Discussion and Think Pair Share Strategies on Enhancement of EFL Students ’ Speaking Skill : Does Critical Thinking Matter ?
Previous research has underscored the importance of learning strategies and critical thinking. However, the relationship between both constructs toward the enhancement of the students’ speaking skills in EFL context still receives scant attention. Therefore, this study aims to investigate the influence of learning strategies (i.e. Discussion Strategy and Think-Pair-Share Strategy/TPS) mediated by critical thinking on the speaking skill of the Department of English Education students at a private university in Cirebon. The subject of this research consisted of 60 students divided into two classes (N=30 for experimental class and N=30 for control class). This study employed an experimental research design with a 2X2 factorial design. The findings demonstrated that English speaking skill of the students with Discussion Strategy was higher than those with the TPS strategy. Second, the English speaking skill of the students with high critical thinking level was higher than those with low critical thinking level. Third, there was a relationship between learning strategy and critical thinking toward English speaking skill. Fourth, there was no significant difference between the students’ speaking skills with Discussion strategy than those with Think-Pair-Share strategies in the group of students with low critical thinking level. This study concludes with the recommendations for future research to examine the effectiveness of discussion and think-pair-share strategies on enhancing the students’ critical thinking skills to promote their speaking skills.
of the four core skills of English, speaking is considered the most difficult among EFL students.Mastering English language skills can be very challenging for Indonesian EFL learners since several problems can be faced during the process of mastering the language (Mahmud, 2018).Speaking is a means of communicating with other people to exchange some information, but the students must master the speaking aspects.The main aspects of speaking skill include fluency, comprehension, grammar, vocabulary, and pronunciation.
As with the innovation to emphasize speaking ability, the learning process should involve an effective learning strategy.Learning strategy which appropriates in this century which can be implemented encourages group discussion to create a positive environment of study, build the responsibility, and develop the critical thinking ability.Two of which are discussion strategy and Think-Pair-Share strategy.
Both learning strategies are believed to have an influence on the student's critical thinking ability.The reason is that the students can develop the responsibility by themselves to build a cooperative team in solve the encountered problems.
Second, the students can increase not only a responsibility but also speaking ability by giving argumentation and an overview of the case.Third, the students can investigate a case by their critical thinking ability from the scientific resources with depth and carefully while solving the case.
English Speaking Ability
Students often think that the ability to speak is the product of language learning, but speaking is also a crucial part of the language learning process Syafrizal & Rohmawati, (2017:71).Speaking is the verbal use of language to communicate with others (Syafrizal, Sutrisno, et al, (2018:67).Speaking becomes the tool of communication in daily activity.Speaking skill refers to verbal communication ability in a practical, functional, and precise way using the target language.It is simply concerned with putting ideas into words to make other people grasp the message that is conveyed (Al-Tamimi & Attamimi, (2014 : 31).
Therefore, speaking is the ability which students must learn to build better communication.When speaking students should have a good pronunciation as the one of speaking aspect.According to Brown (1994) in Ardhy (2018 : 19), speaking is the interactive process of constructing meaning that involves producing, receiving and processing information.Meanwhile, speaking is one of the main purposes of language learning in that is an ability to transfer some ideas to other people clearly and correctly Argawati, (2014:74).Moreover, speaking is the tool of communication to get some of information and message by understanding the meaning when expressing the message properly and fluently to the interlocutors.Thus, the students are encouraged to know the aspects of speaking itself, so that when having a conversation; it can be understood by the other students, especially when having a discussion in the learning process.
Critical Thinking in the 21st Century Speaking Activities
In the development of 21 st -century foreign language education, the students must have critical thinking in responding to a problem, for example in everyday life.They have to be able to explore relevant sources through authentic research and fact.In addition, the language learners who have developed critical thinking skills are capable of doing activities of which other students may not be capable Shirkhani & Fahim, (2011:112).They also can assert something to act rationally, empathically and reasonably Fell & Lukianova, (2015:2).According to Istiara and Lustyantie critical thinking is an activity that relates to the ability and action (Istiara & Lustyantie, 2017:23).Moreover, the students can take action to solve the problem during the learning process by learning group or by their own thinking.
Students with higher critical thinking can think rationally.Thus, the students as young generations need to be able to discriminate facts from opinions, evaluate and judge the credibility of evidence El Soufi & See, (2019 : 140).In the learning process, when students are given some questions by the lecturer, they should raise a problem until given the right solution by the argument.A critical thinker raises vital questions and problems, gathers and assesses relevant information, formulates well-reasoned conclusions and solutions, thinks open-mindedly within the alternative system of thought and communicates effectively with others in figuring out solutions to a complex problem Bhushan, (2014:11).
Critical thinking in the field of education is the learning process not only focused on the students' answers but also students who thinking critically to give perspective about what they think.In addition, Ennis stated that critical thinking is the reasonable reflective thinking that is focused on deciding what to believe or do.It also entails formulating a hypothesis, a different perspective of viewing a problem, questions, possible solutions, and plans for examining something, (Ennis 1991 p. 6;Ennis 2011as cited in Devi, Musthafa, & Gustine(2017:3).
Therefore, the influence of critical thinking on speaking skill has been investigated by previous scholars, such as Fahim andKoleini (2014), Shirkhani andFahim (2011), Afshar and Rahimi (2014).Fahim and Koleini (2014) investigated contributing factors of the speaking skill and critical thinking levels of EFL/ESL learners.The research was conducted involving 40 students.The research design was experimental.The findings showed that there were several relationships between critical thinking skill and speaking skill at the academic level with the significance level r=0.423, <0.01.However, the research only explores the relationship of the learner's factors and critical thinking skill, leaving a gap in the relationship between learning strategies and critical thinking upon speaking skill.Afshar and Rahimi (2014) also investigated the relationship between learners' attributes and critical thinking on speaking skill by using the California Critical Thinking Skills test and interview.The findings suggested that critical thinking ability had a correlation with speaking skill.All components of the correlation of emotional intelligence and speaking skill had correlated significantly.Additionally, there was a positive effect between emotional intelligence and critical thinking ability.Based on the research, it can be concluded that critical thinking has a relationship with speaking skill.However, the researcher did not attempt to explore the mediation of critical thinking within the use of learning strategies upon the speaking skill of the students.
Learning Strategies
The students must be trained to activate their learning strategies to reach an effective learning process.In addition, their critical thinking ability can mediate the use of learning strategies.Learning strategies are helpful to assist the speaking learning and understanding the target language.It is important then to be successful in accomplishing academic tasks.For example, they can make inferences based on the information in integrated language instruction including speaking and writing tasks Oxford et al., (2014:36).A technique or strategy can also be in the forms of specific classroom activities Richards & Renandya, (2002:121).Brown (2000) added that to encourage learning strategy especially in speaking strategies some ways are worth to take into account such as asking for clarification, asking someone to repeat something, using conversation, and getting someone's attention (Brown, 2000).Meanwhile, students will have a chance to build conversation to clarify the problem by discussion and make a pair with their group discussion.Thus, of the available learning strategies, discussion and thinkpair-share strategies are the focus of the present study.
Think-Pair-Share Strategy
According to Arends Think-Pair-Share is a challenge for the assumption that all recitation or discussion needs to be held in whole-group settings, and it has builtin procedures for giving students more time to think to respond and to help each other Arends, (2012:454).Then, think-pair-share is the strategy that requires a pair in solving the problems.This strategy also needs responses between students' pairs because the students' responses are very important during the learning process.
Similarly, San Tint and Nyunt (2015) assumed that think-pair-share strategy is the activity that prompts students to reflect on an issue or problem and then to share that thinking with others.The students are encouraged to justify their stance using clear examples and clarity of thought and expression.They extend their conceptual understanding of a topic and gain practice in using other people's opinions to develop their own San Tint & Nyunt, (2015:1).Moreover, Raba added that the think-pair-share strategy reinforces students' communication skills.Each student takes the chance to speak, discuss and participate which has many positive effects on the whole group where students feel more self-confident and more active in the class Raba, (2017:13).
Yuliasri describes the three-step of Think-Pair-Share strategy.The first step allows individuals to think silently about a question/task posed by the instructor.The second step suggests individuals pair up and exchange thoughts.In the third step, the pairs share their responses/ideas with other pairs, other teams, or the entire group Yuliasri, (2013:15).Another relevant article about the Think-pair-share and speaking ability was conducted by some researches, such as Ardhy (2018), Hajhosseini (2016).Ardhy ( 2018) conducted research about the application of think-pair-share strategy in improving students' speaking ability.The research was conducted at the English Language Education Study Program at Palopo University.
The results showed that the students' performance level was influenced by the think-pair-share strategy.The mean score of the pretest of the students who applied the think-pair-share strategy was 2.16 while the post-test score was 4.02.
Research about critical thinking and social interaction in active learning focusing on the Iranian students' perspectives demonstrated that the students who implemented discussion strategy got more benefits for their social interaction during the learning process.It also gives an effect on the dynamic cultures.
Meanwhile, Ardhy (2018) reported that the think-pair-share strategy influenced students' speaking skill as viewed from the pre and posttests results.
Another study conducted by Afshar and Rahimi (2014) investigating the influence of critical thinking on speaking skills of EFL students suggested that critical thinking has a relationship with speaking skill.However, the exploration of whether the students applying both strategies mediated by critical thinking obtain higher scores of their speaking performances still receives little attention.
Therefore, this study aims to investigate the influence of discussion and thinkpair-share strategies mediated by critical thinking on EFL students' speaking skill.To students have some of the difficulties to arrange the sentences based on the true grammatical 3) students repeated the words when speaking, so make the sentences not effective 4) students have difficulties to think critically to solve some of problem 5) the lecturer did not implement a learning strategy to teach speaking itself.
The researcher used cluster random sampling to take the sample.The sample is the third-semester students (N=60) divided into two groups.The first group (N=30 students) was taught by discussion strategy.Regarding the division of students with high and low critical thinking, this study used Guilford's theory in which 27% of them (N=16) were tested and selected to reach a conclusion that eight (8) students belong to the high-critical-thinking group, while the other eight students belong to the low-critical-thinking group.The second group (N=30 students) was taught by the think-pair-share strategy.The division of students with high and low critical thinking conforms to the first group.
To collect the data, the researcher used two instruments: a critical thinking To what extent the English speaking performances of the students with discussion strategy and think-pair-share strategy mediated by high critical thinking differ?How is the relationship between learning strategies and critical thinking on the students' English speaking skill?
The results of validity test
The result of the validity calculated by the Product Moment formula, which has a correlation of reliability.The result of reliability is reliable, based on the r-table with the alpha 0.05 and the r table 0.722 > 1.02 so, the result reliability is very high.
The conclusion of the instrument of critical thinking test is reliable for the research.
The instrument of the speaking test was calculated by the inter-rater reliability which scored by two experts.Based on the result of the reliability between two experts, the result is 0.722>0.997,so the instrument of speaking is reliable as the instrument of the test.The research also has the result of Liliefors (Normality Test), Barlett Test (Homogeneity Test).Based on the normality test can be a result that the data is normally and homogeny.The data continued by the calculated hypothesis based on the Two-Way ANOVA.
Before the researcher conducting the hypothesis testing, the research would be tested by normality testing and homogeneity testing.The normality test aims whether the data is normally distributed or not.First, the calculation of the normality of the data.Data on the ability to speak English students are tested by normality test, to find out the data is normal with a significance level of α = 0.05.
The hypothesis proposed in this normality test is as follows: H0 = accepted if the L count ≤ L table; means that data is normally distributed.H0 = rejected if L count> L table; means that data is not normally distributed.To answer the first hypothesis, the data was calculated based on two way ANOVA analysis between rows and line, shows that the price of F count (A) = 12.5 > F table (4,20) on the significance α = 5%.So, this is means H0 is rejected and H1 is accepted.And based on the average of the score, students who had implemented by the discussion strategy (A1) is 22.5 higher than students who implemented by the Think-pair-share (A2) an average value of 19.5.Based on the result, can be concluded that Discussion Strategy is better than Think-pair-share Strategy.
To answer the second hypothesis, the data was calculated based on twoway ANOVA analysis between rows and line, shows that F Count (B) = 26.4> F table (4.20) on the significant α = 5%.So, this is means H0 is rejected and H1 is accepted.The second hypothesis is about measuring students who have higher critical thinking ability and lower critical thinking ability.Students who had higher critical thinking the score of average is 22.0 and students who had lower critical thinking ability the score of average is 19.8.So, as a result, students who had higher critical thinking ability is better than students who had lower critical thinking ability.
To answer the third hypothesis, the data was calculated based on two-way ANOVA F Count (AB) = 13,6 > F table (4,20) on the significance of α = 5%.So, this is means H0 is rejected and H1 is accepted.The third hypothesis stated that there is an interaction between the learning strategy and the critical thinking ability.As a result, there is the interaction between the implementing of learning strategy of discussion and think-pair-share of the critical thinking ability through the speaking ability of students of Swadaya Gunung Jati University.
To answer the fourth hypothesis, the data was calculated based on the Tukey Test for groups A1B1 and A2B1, Q Count = 2,08 and Q table = 3,26 on the significance of 0,05.This means that H0 is rejected and H1 is accepted.The fourth hypothesis stated that there are have differences in the speaking ability for students who learn by the discussion strategy and higher critical thinking ability with the think-pair-share strategy and higher critical thinking ability.Based on the result, it was calculated by the average of speaking ability that learning by discussion strategy (A1B1) is 23.8 higher than the average score of speaking ability by the think-pair-share strategy (A2B1) is 17.6.Based on the result, that the ability to speak for students who have higher critical thinking ability who studied by the discussion strategy is better than students who learn by the think-pair-share strategy.
To answer the fifth hypothesis, further, the test used the Tukey test for A1B2 and A2B2 groups, Q Count = 2.08 and Q table= 3.26 on the significance 0.05.So, this is means Q count lower than the Q table.So, H1 is rejected and H0 is accepted.Based on the result, there is no significant difference between the English speaking skills of students who study with Discussion strategies higher than students who learn with think-pair-share strategies in groups of students who have low critical thinking.The average of the students who have lower critical thinking ability through think-pair-share strategy is 23.5 and the average of students who have lower critical thinking ability through think-pair-share strategy is 22.5.
Discussion
In this research, the researcher has implemented two learning strategies (i.e. discussion and think-pair-share strategies) mediated by the students' critical thinking to investigate the influence on their speaking skill.The research found that there is significance between learning strategy and critical thinking toward speaking skill.The results of the research found that the students who were taught by discussion strategy and think-pair-share strategy can improve their speaking skill including those who also have high critical thinking.On the other hand, the students who have low critical thinking do not project similar results because they The novelty of this research lies in the explanation of the relationship between learning strategy and critical thinking toward the students' speaking skill.
It demonstrates that the students taught with think-pair-share strategy obtain the average score of pretest 2.16 and the average score of posttest 4.02.Previous research only focuses on implementing discussion strategy for the improvement of the Iranian EFL students as well as their critical thinking (Hajhosseini, Zandi, Hosseini Shabanan, & Madani, 2016).The results of the present study extend that critical thinking might mediate a better process of activating discussion strategy in the EFL classrooms.Third, the research were conducted by Baroroh, Arif and Ashlihah (2017) the result of the research, students gain a positive response through the think-pair-share strategy on English speaking.The results of this research show that the students taught by think-pair-share strategy gain a positive effect on their speaking skill.Fourth, the research conducted by Soodmand and Rahimi (2014) were investigated the critical thinking, emotional intelligence, and speaking skill of Iranian EFL learners.The results show that critical thinking influences speaking skill.
CONCLUSION AND SUGGESTION
Based on the result of the research, it could be concluded that were a significant difference between the students with discussion strategy and think-pair-share strategy mediated by critical thinking toward speaking skill.On the other hand, students who have low critical thinking obtain lower score although they employ discussion and think-pair-share strategies as well.
Furthermore, this study suggests that students who learn language especially at the speaking subject should implement the learning strategy mediated by critical thinking to improve speaking skill.Students will habitually to think critically to investigate the information from an informant.Meanwhile, discussion strategy and think-pair-share strategy also build the responsibility of students on their group to answer the questions based on the authenticity of the information.Hence, future research may replicate the research with a different and larger sample and in other levels of education.
obtain adequate findings, the driving research questions are: 1) To what extent the English speaking performances of the students with discussion strategy and thinkpair-share strategy differ? 2. To what extent the English speaking performances of the students with discussion strategy and think-pair-share strategy mediated by high critical thinking differ?3. To what extent the English speaking performances of the students with discussion strategy and think-pair-share strategy mediated by low critical thinking differ? 4. How is the relationship between learning strategies and critical thinking on the students' English speaking skill?RESEARCH METHODOLOGY This research was conducted on the third semester in academic year 2018/2019 of The English Department of Swadaya Gunung Jati University which located in Cirebon, West Java.The research was conducted for four months.The method of the research used the experimental research with factorial design 2x2 and used ANOVA data analysis alpha 0.05 significance level.The reason for choosing the settings is based on the preliminary study by interviewing half of the subjects as the sample of the interview.The results demonstrated the causes of the difficulties to speak English from the students: 1) students lack vocabulary 2) test and speaking performance test.The normality and homogeneity tests were also conducted.The critical thinking test consists of 30 multiple-choice questions about a reading text.The sources of the instrument of critical thinking test were from the text of TOEFL exercise.The reading exercise as the instrument test of critical thinking, because contains of how students to think critically based on the critical thinking assessment.The speaking performance test was in the form of individual conversation with five (5) themes about the phenomena of life.It comprised five questions each of which is under one theme.The instrument of the speaking performance test was created by the researcher validated by the expert.DISCUSSION To what extent the English speaking performances of the students with discussion strategy and think-pair-share strategy differ? 1. Description score of English speaking performances of students with discussion strategy (A1) The students who learned with discussion strategy have a range of scores 0-25, with the lowest score 19.3 and the highest score 25.0.The average score is 22.5 with the standard deviation 1.85, the mode 23.5 and the median 22.80.The distribution of the frequency distribution of the scores of the students who learned with discussion strategy can be seen in the following histogram.
Figure 1 .
Figure 1.Histogram of speaking skill through discussion strategy (group A1)
1 .
Description of scores of English speaking performances of students with high critical thinking (B1)The data of the English speaking performances of students with high critical thinking skills exemplified a range of scores 0-25, with the lowest score of 18.0 and the highest score 25.The average score is 22.0 with a standard deviation 2.239, the mode 23.9 and the median 22.6.The distribution of the frequency distribution of the scores of the students with high critical thinking can be seen in the following histogram.
Figure 3 .
Figure 3. Histogram of English speaking skill of students with high critical thinking (group B1)
Figure 3 .
Figure 3. Histogram of English speaking skill of students with high critical thinking (group B1)
Figure 4 .
Figure 4. Histogram of English speaking skill of students with discussion strategy and high critical thinking (group A1B1)
Figure 5 .
Figure 5. Histogram of English speaking skill of students with think-pair-share strategy and high critical thinking (A2B1)
Figure 6 .
Figure 6.Histogram of English speaking skill of students with discussion strategy and low critical thinking (A1B2)
Figure 7 .
Figure 7. Histogram of English speaking skill of students with think-pair-share strategy and low critical thinking (A2B2) cannot have enough ability to solve the assigned problem or task from the teacher.The similarity of research was conducted by (Ardhy, 2018), the research was investigate student's speaking ability through Think Pair Share strategy.The results of the research that have an influence in mean score of student's speaking skills, before researcher implemented Think Pair Share, the mean of the student's speaking skill was 2.16 and after implemented the strategy, student's speaking skills score were increased 4.02.Not only based on the result of the speaking test, but also the result of the questionnaire were students have a positive response of the think pair share strategy.The results of the present study are relevant to (Ardhy, 2018).
Table 1 .
Normality Test Result of Influence Learning Strategy and Critical Thinking onThe information from the table above shows that all groups of data tested for normality with the Liliefors test have a calculated L value <L table.Thus, it can be concluded that all data on the English speaking abilities of students are normally distributed.Second, To find out the data on the ability to speak English is homogeneous with a population or not, a homogeneity test was carried out using
Table 2 .
The Homogeny Result of Influence Learning Strategy and Critical Thinking | 5,649 | 2019-09-01T00:00:00.000 | [
"Education",
"Linguistics"
] |
Simultaneous Electrochemical Sensing of Dopamine, Ascorbic Acid, and Uric Acid Using Nitrogen-Doped Graphene Sheet-Modified Glassy Carbon Electrode
: Nitrogen (N) doping is a well-known approach that can be effectively used to tune the properties of graphene-supported materials. The current attempt followed a simple hydrothermal protocol for the fabrication of N-doped graphene sheets (N-GSs). The N-GSs were subsequently applied to modify the surface of a glassy carbon electrode (GCE) for a dopamine (DA) electrochemical sensor (N-GSs/GCE), tested on the basis of differential pulse voltammetry (DPV). The findings highlighted a limit of detection (LOD) as narrow as 30 nM and a linear response in the concentration range between 0.1 and 700.0 µ M. The modified electrode could successfully determine DA in the co-existence of uric acid (UA) and ascorbic acid (AA), the results of which verified the potent electrocatalytic performance of the proposed sensor towards AA, DA, and UA oxidation, and three distinct voltammetric peaks at 110, 250, and 395 mV via DPV. The practical applicability of the as-developed N-GSs/GCE sensor was confirmed by sensing the study analytes in real specimens, with satisfactory recovery rates.
Introduction
3,4-dihydroxyphenethylamine, or dopamine (DA), as a key neurotransmitter, is positioned in the family of excitatory chemical neurotransmitters.It is capable of regulating human cognition and emotions, and exists as a main catecholamine in the central nervous system (CNS).The mean DA content in blood and serum samples of human beings can range from 10 −6 to 10 −9 mol/L [1,2].DA has an extensive presence in mammalian brain tissues and body fluids.It is significantly involved in some processes performed in the CNS, metabolic system, kidneys, cardiovascular system, and hormonal actions.Hence, any change in the level of DA in different parts of the body can be indicative of diverse neurological problems such as Parkinson's disease, Alzheimer's disease, Huntington's disease, and addiction.DA can be a biochemical precursor for catechol amine neurotransmitters, epinephrine, and norepinephrine [3][4][5].Accordingly, there is a need for a sensitive, simple, and selective approach to determine and quantify the DA for early diagnosis of relevant medical conditions.
There are some analytical methods for determination of DA, some of which include chemiluminescence [6], chromatography [7], colorimetry [8], fluorescence [9], and capillary electrophoresis [10].In spite of the valuable advantages of these techniques, they suffer from some shortcomings such as long testing time, high cost, and tedious pretreatment processes.Hence, considerable attention has been recently focused on electrochemical approaches in determination of electroactive analytes because of their commendable merits such as high sensitivity, low cost, and fast reaction [11][12][13][14][15][16][17][18][19][20][21].Nevertheless, the presence of some interferants such as uric acid (UA) and ascorbic acid (AA) can disrupt the detection of DA [22,23].This problem can be particularly aggravated with the use of bare electrodes due to some deficiencies such as overlapping oxidation potentials and marked electrode fouling, commonly leading to poor selectivity and reproducibility.This bottleneck can be circumvented by modifying the electrode surface using different modifiers [24,25].In general, chemically modified electrodes lead to an increase in sensitivity and selectivity for the determination of various analytes [26][27][28][29][30][31][32][33].
Graphene with a 2D layer of C atoms in a honeycomb configuration have shown unparalleled electrical, mechanical, chemical, and optical properties.They can act as potent electrode modifiers due to huge surface area, high electro-conductivity, great chemical and thermal stability, and mechanical strength [50][51][52].Reportedly, chemical doping of graphene can trigger semiconducting properties; thus, N-doped or P-doped graphene can experience high charge conductivity and chemical reactivity [53].N-doping can be highly effective in the regulation of electronic activity and mechanical properties and in the elevation of the electrocatalytic performance of carbon materials [54].N-doped graphene sheets (N-GSs) have been promising options in the fabrication of electrochemical (bio)sensors, supercapacitors, batteries, and electrocatalysis, because they possess huge surface area, great electric conductivity, subtle electronic traits, a large number of edge sites to adsorb electroactive molecules, catalytic capacity, and appreciable electron density [55][56][57].
The current attempt followed a simple hydrothermal protocol for the fabrication of N-GSs.The N-GSs was subsequently applied to modify the surface of a GCE to create a DA electrochemical sensor (NGSs/GCE), even in the co-existence of UA and AA, tested on the basis of DPV.The practical applicability of the as-developed NGSs/GCE sensor was examined by sensing the study analytes in real specimens.
Instruments and Chemicals
An Autolab PGSTAT 320N Potentiostat/Galvanostat Analyzer ((Eco-Chemie, Utrecht, The Netherlands)) with GPES (General Purpose Electrochemical System-version 4.9) software was applied for all electrochemical determinations at ambient temperature.A Metrohm 713 pH-meter with glass electrodes (Metrohm AG, Herisau, Switzerland) was used to determine and adjust the solution pH.
All solvents and chemicals applied in our protocol were of analytical grade from Merck and Sigma-Aldrich.Phosphate buffer solution (PBS) was prepared by phosphoric acid and adjusted by NaOH to the desired pH value.
Synthesis of N-GSs
The synthesis of N-GSs was based on the hydrothermal protocol using the base material graphene oxide (GO) and the reducing and doping agent urea.Thus, 100.0 mg of exfoliated GO was dissolved in 100 mL of deionized water under ultra-sonication, with the solution pH adjusted to 10.0 using NH 3 •H 2 O (30%), followed by adding 6.0 g of urea under 3 h ultra-sonication.Next, the resultant mixture was placed in a Telfon-lined autoclave at 180 • C for 12 h, followed by cooling to room temperature.After collecting N-GSs, washing was done thoroughly with water and the pH was adjusted to a neutral value.Eventually, the freeze-drying process was done.
Preparation of the N-GSs/GCE Sensor
A drop-casting technique was followed to fabricate the N-GSs/GCE.Thus, a certain amount of as-prepared N-GSs (1.0 mg) was subsequently dispersed in deionized water (1.0 mL) under 25-min ultra-sonication.Then, the prepared suspension (4.0 µL) was coated on the GCE surface dropwise and dried at the laboratory temperature.The surface areas of the N-GSs/GCE and the bare GCE were obtained by CV using 1.0 mM K 3 Fe(CN) 6 at different scan rates.Using the Randles-Sevcik equation [58] for N-GSs/GCE, the electrode surface was found to be 0.119 cm 2 , about 3.8 times greater than bare GCE.
Characterization of N-GSs
Figure 1 shows the characterization of GO and N-GSs using XRD.According to the XRD pattern for GO (Figure 1a), a peak can be seen at the 2θ value of 11.5 • (inter-planer space = 7.69 Å).A larger inter-planer space can be observed for GO in comparison to graphite, which can be attributed to the presence of H 2 O molecules and diverse oxygen groups on its surface.The XRD pattern captured for the N-GSs highlighted a high degree of reduction (Figure 1b).Following the hydrothermal reactions and together with the N-doping process, the disappearance of the peak at 2θ = 11.5 • confirmed the successful GO reduction, but the N-GSs pattern created a wide peak at 24.7 • [59], related to the inter-planar distance of 3.602 Å, which provides π-π stacking between graphene sheets [60].
Preparation of the N-GSs/GCE Sensor
A drop-casting technique was followed to fabricate the N-GSs/GCE.Thus, a certain amount of as-prepared N-GSs (1.0 mg) was subsequently dispersed in deionized water (1.0 mL) under 25-min ultra-sonication.Then, the prepared suspension (4.0 µ L) was coated on the GCE surface dropwise and dried at the laboratory temperature.
The surface areas of the N-GSs/GCE and the bare GCE were obtained by CV using 1.0 mM K3Fe(CN)6 at different scan rates.Using the Randles-Sevcik equation [58] for N-GSs/GCE, the electrode surface was found to be 0.119 cm 2 , about 3.8 times greater than bare GCE.
Characterization of N-GSs
Figure 1 shows the characterization of GO and N-GSs using XRD.According to the XRD pattern for GO (Figure 1a), a peak can be seen at the 2θ value of 11.5° (inter-planer space = 7.69 Å ).A larger inter-planer space can be observed for GO in comparison to graphite, which can be attributed to the presence of H2O molecules and diverse oxygen groups on its surface.The XRD pattern captured for the N-GSs highlighted a high degree of reduction (Figure 1b).Following the hydrothermal reactions and together with the Ndoping process, the disappearance of the peak at 2θ = 11.5°confirmed the successful GO reduction, but the N-GSs pattern created a wide peak at 24.7° [59], related to the interplanar distance of 3.602 Å, which provides π-π stacking between graphene sheets [60].Figure 2 compares the FT-IR spectra captured from GO and N-GSs.In the FT-IR spectrum of GO, there are different functional groups that are shown in Figure 2a.On account of these characteristic peaks, multiple oxygen (O)-containing functional groups, such as epoxy, hydroxyl, and carboxyl, were present on the GO surface.The lack of absorption peaks of oxygen groups in the N-GSs showed the effective reduction of GO during the hydrothermal process.Two peaks appeared at 1184 and 1562 cm −1 related to C-N and C=N (occasionally, C=C and C=N bonds are stretched at the identical wavelength), respectively.Therefore, the graphene was N-doped successfully.As shown in Figure 3 depicting the FE-SEM image captured for the N-GSs, the N-GSs maintained the ultrathin flexible 2D structure of pristine graphene; and the N-GS sheets possessed a wave structure similar to thin wrinkled paper, representing the successful N-doping process for the graphene.As shown in Figure 3 depicting the FE-SEM image captured for the N-GSs, the N-GSs maintained the ultrathin flexible 2D structure of pristine graphene; and the N-GS sheets possessed a wave structure similar to thin wrinkled paper, representing the successful N-doping process for the graphene.As shown in Figure 3 depicting the FE-SEM image captured for the N-GSs, the N-GSs maintained the ultrathin flexible 2D structure of pristine graphene; and the N-GS sheets possessed a wave structure similar to thin wrinkled paper, representing the successful N-doping process for the graphene.Figure 4 shows the energy dispersive X-ray spectroscopy (EDS) analysis in determining the chemical composition of the N-GSs, the findings of which showed C (80.4%), N (16.3%), and O (3.3%) present in the prepared sample.
Electrochemical Response of DA at the Various Electrodes surfaces
The electrochemical response of DA in the 0.1 M PBS adjusted to variable pH values (2.0 to 9.0) was explored to determine the influence of the electrolyte solution pH.The results showed that the redox peaks of DA depended on the pH value, with these reaching a maximum with increasing pH up to 7.0 and then decreasing with greater pH values.Hence, the pH value of 7.0 was considered to be the optimum for subsequent electrochemical determinations.The reaction mechanism of DA is shown in Scheme 1.The cyclic voltammograms (CVs) were captured for the DA (200.0 μM) on the bare GCE and NGSs/GCE to explore the electrocatalytic performance of the N-GSs (Figure 5). Figure 5 illustrates two weak oxidation and reduction peaks on the bare GCE with oxidation peak current (Ipa = 4.6 μA) and reduction peak current (Ipc = −0.8μA), whereas the NGSs/GCE had a significant improvement in the currents (Ipa = 13.9 μA and Ipc = −4.8µ A).This significant improvement in the redox peaks may have appeared because of the appreciable catalytic impact of NGSs for the DA oxidation and reduction.
Electrochemical Response of DA at the Various Electrodes Surfaces
The electrochemical response of DA in the 0.1 M PBS adjusted to variable pH values (2.0 to 9.0) was explored to determine the influence of the electrolyte solution pH.The results showed that the redox peaks of DA depended on the pH value, with these reaching a maximum with increasing pH up to 7.0 and then decreasing with greater pH values.Hence, the pH value of 7.0 was considered to be the optimum for subsequent electrochemical determinations.The reaction mechanism of DA is shown in Scheme 1. Figure 4 shows the energy dispersive X-ray spectroscopy (EDS) analysis in determining the chemical composition of the N-GSs, the findings of which showed C (80.4%), N (16.3%), and O (3.3%) present in the prepared sample.
Electrochemical Response of DA at the Various Electrodes surfaces
The electrochemical response of DA in the 0.1 M PBS adjusted to variable pH values (2.0 to 9.0) was explored to determine the influence of the electrolyte solution pH.The results showed that the redox peaks of DA depended on the pH value, with these reaching a maximum with increasing pH up to 7.0 and then decreasing with greater pH values.Hence, the pH value of 7.0 was considered to be the optimum for subsequent electrochemical determinations.The reaction mechanism of DA is shown in Scheme 1.The cyclic voltammograms (CVs) were captured for the DA (200.0 μM) on the bare GCE and NGSs/GCE to explore the electrocatalytic performance of the N-GSs (Figure 5). Figure 5 illustrates two weak oxidation and reduction peaks on the bare GCE with oxidation peak current (Ipa = 4.6 μA) and reduction peak current (Ipc = −0.8μA), whereas the NGSs/GCE had a significant improvement in the currents (Ipa = 13.9 μA and Ipc = −4.8μA).This significant improvement in the redox peaks may have appeared because of the appreciable catalytic impact of NGSs for the DA oxidation and reduction.The cyclic voltammograms (CVs) were captured for the DA (200.0 µM) on the bare GCE and NGSs/GCE to explore the electrocatalytic performance of the N-GSs (Figure 5). Figure 5 illustrates two weak oxidation and reduction peaks on the bare GCE with oxidation peak current (Ipa = 4.6 µA) and reduction peak current (Ipc = −0.8µA), whereas the NGSs/GCE had a significant improvement in the currents (Ipa = 13.9 µA and Ipc = −4.8µA).This significant improvement in the redox peaks may have appeared because of the appreciable catalytic impact of NGSs for the DA oxidation and reduction.
Effect of Scan Rate
The CVs were recorded for DA (150.0 µM) on the NGSs/GCE under variable scan rates (Figure 6).There was an apparent gradual elevation in the redox peaks achieved by raising the scan rate range from 10 to 500 mV/s.As seen in Figure 6 (inset), the anodic peak current (Ipa) and cathodic peak current (Ipc) had a linear association with the scan
Effect of Scan Rate
The CVs were recorded for DA (150.0 μM) on the NGSs/GCE under variable scan rates (Figure 6).There was an apparent gradual elevation in the redox peaks achieved by raising the scan rate range from 10 to 500 mV/s.As seen in Figure 6 (inset), the anodic peak current (Ipa) and cathodic peak current (Ipc) had a linear association with the scan rate square root (ʋ 1/2 ).This result indicates that a diffusion-controlled DA redox reaction occurs on the NGSs/GCE.plot presents the count of transferred electrons during the rate-determining step.Based on Figure 7 (inset), the Tafel slope was estimated to be 0.091 V for the linear domain of the plot.The Tafel slope value reveals that the rate-limiting step is a one-electron transfer process considering a transfer coefficient (α) of 0.35.
Calibration Curve (DPV Analysis of DA)
DPV analysis was performed for variable DA contents to explore the linear dynamic range, LOD, and sensitivity of the N-GSs/GCE under optimized experimental circumstances (Figure 8).As expected, the elevation in DA level enhanced the peak current.Figure 8 (inset) shows a linear proportional of the oxidation peak currents to variable DA contents (0.1 μM to 700.0 μM), with the linear regression equation of Ipa (μA) = 0.0625CDA + 1.1884 (R 2 = 0.9996) and a sensitivity of 0.0625 μA/μM.The LOD was estimated at 30.0 nM for DA determination on N-GSs/GCE.Table 1 compares the efficiency of the DA sensor prepared by the N-GSs-modified GCE and other reported works.
Calibration Curve (DPV Analysis of DA)
DPV analysis was performed for variable DA contents to explore the linear dynamic range, LOD, and sensitivity of the N-GSs/GCE under optimized experimental circumstances (Figure 8).As expected, the elevation in DA level enhanced the peak current.Figure 8 (inset) shows a linear proportional of the oxidation peak currents to variable DA contents (0.1 µM to 700.0 µM), with the linear regression equation of Ipa (µA) = 0.0625C DA + 1.1884 (R 2 = 0.9996) and a sensitivity of 0.0625 µA/µM.The LOD was estimated at 30.0 nM for DA determination on N-GSs/GCE.Table 1 compares the efficiency of the DA sensor prepared by the N-GSs-modified GCE and other reported works.Electrochemical detection of DA concurrent with AA and UA on the surface of bare electrode entails interference due to the comparatively close oxidation potentials of the three substances.Hence, the analyte concentration needed to be changed simultaneously to obtain the DPVs (Figure 9).There were three well-separated signals with the potential differences of 140 mV for AA and DA as well as 145 mV for DA and UA.Therefore, oxidation of these compounds on NGSs/GCE occurred independently, i.e., it is possible to simultaneously determine these analytes with no significant interference.
DPV Analysis of DA in the Presence of AA and UA
Electrochemical detection of DA concurrent with AA and UA on the surface of bare electrode entails interference due to the comparatively close oxidation potentials of the three substances.Hence, the analyte concentration needed to be changed simultaneously to obtain the DPVs (Figure 9).There were three well-separated signals with the potential differences of 140 mV for AA and DA as well as 145 mV for DA and UA.Therefore, oxidation of these compounds on NGSs/GCE occurred independently, i.e., it is possible to simultaneously determine these analytes with no significant interference.
Real Samples Analysis
The practical applicability of the N-GSs/GCE was tested by sensing AA, DA, and UA in a DA ampoule, a AA ampoule, and urine samples using the DPV procedure and standard addition method, the results of which can be seen in Table 2.The recovery rate was between 96.0% and 103.5% and all RSD values were ≤3.4%.According to the experimental
Real Samples Analysis
The practical applicability of the N-GSs/GCE was tested by sensing AA, DA, and UA in a DA ampoule, a AA ampoule, and urine samples using the DPV procedure and standard addition method, the results of which can be seen in Table 2.The recovery rate was between 96.0% and 103.5% and all RSD values were ≤3.4%.According to the experimental results, the N-GSs/GCE sensor possesses high potential for practical applicability.
Conclusions
The present research developed a new sensor (N-GSs/GCE) for determination of DA.The analytical findings highlighted commendable electrocatalytic performance of the proposed sensor (N-GSs/GCE) for DA detection.The peak current of DA oxidation had a linear relationship with variable concentrations (0.1-700.0 µM), with a thin LOD (30.0 nM).The modified electrode could successfully determine the DA in the presence of UA and AA.The practical applicability of the as-developed N-GSs/GCE sensor was verified by sensing the study analytes in real specimens, with satisfactory recovery rates.These results suggest the proposed electrode has excellent performance and may be used for future DA detection probes based on electrochemical techniques.
Figure 2
Figure2compares the FT-IR spectra captured from GO and N-GSs.In the FT-IR spectrum of GO, there are different functional groups that are shown in Figure2a.On account of these characteristic peaks, multiple oxygen (O)-containing functional groups, such as epoxy, hydroxyl, and carboxyl, were present on the GO surface.The lack of absorption peaks of oxygen groups in the N-GSs showed the effective reduction of GO during the hydrothermal process.Two peaks appeared at 1184 and 1562 cm −1 related to C-N and
12 C
2022, 8, x FOR PEER REVIEW 4 of 14 C=N (occasionally, C=C and C=N bonds are stretched at the identical wavelength), respectively.Therefore, the graphene was N-doped successfully.
C
2022, 8, x FOR PEER REVIEW 4 of 14 C=N (occasionally, C=C and C=N bonds are stretched at the identical wavelength), respectively.Therefore, the graphene was N-doped successfully.
Figure 3 .
Figure 3.The FE-SEM image of the NGSs.
Figure 4 14 Figure 3 .
Figure4shows the energy dispersive X-ray spectroscopy (EDS) analysis in determining the chemical composition of the N-GSs, the findings of which showed C (80.4%), N (16.3%), and O (3.3%) present in the prepared sample.
Scheme 1 .
Scheme 1. Electrochemical oxidation mechanism of DA at the surface of NGSs/GCE.
Figure 3 .
Figure 3.The FE-SEM image of the NGSs.
Scheme 1 .
Scheme 1. Electrochemical oxidation mechanism of DA at the surface of NGSs/GCE.
Scheme 1 .
Scheme 1. Electrochemical oxidation mechanism of DA at the surface of NGSs/GCE.
Figure 6 .
Figure 6.CVs captured for the DA (150.0 μM) on the NGSs/GCE under variable scan rates (1:10, 2:30, 3:50, 4:70, 5:100, 6:200, 7:300, 8:400, and 9:500 mV s −1 ).Inset: the correlation of Ipa and Ipc of DA with ʋ 1/2 .A Tafel plot (Figure 7 (inset)) was created on the basis of data related to the rising domain of the current-voltage curve from the linear sweep voltammogram at a low scan rate (10 mV/s) for DA (150.0 μM) to explore the rate-determining step.The linearity of the E vs. log I plot clarifies the involvement of electrode-process kinetics.The slope from this plot presents the count of transferred electrons during the rate-determining step.Based on Figure 7 (inset), the Tafel slope was estimated to be 0.091 V for the linear domain of the plot.The Tafel slope value reveals that the rate-limiting step is a one-electron transfer process considering a transfer coefficient (α) of 0.35.
Figure 6 .
Figure 6.CVs captured for the DA (150.0 µM) on the NGSs/GCE under variable scan rates (1:10, 2:30, 3:50, 4:70, 5:100, 6:200, 7:300, 8:400, and 9:500 mV s −1 ).Inset: the correlation of Ipa and Ipc of DA with υ 1/2 .A Tafel plot (Figure 7 (inset)) was created on the basis of data related to the rising domain of the current-voltage curve from the linear sweep voltammogram at a low scan rate (10 mV/s) for DA (150.0 µM) to explore the rate-determining step.The linearity of the E vs. log I plot clarifies the involvement of electrode-process kinetics.The slope from this
C 14 7.
2022, 8, x FOR PEER REVIEW 8 of LSV for DA (150.0 μM) at the scan rate of 10 mV/s.Inset: the Tafel plot from the rising domain or the respective voltammogram.
Figure 7 .
Figure 7. LSV for DA (150.0 µM) at the scan rate of 10 mV/s.Inset: the Tafel plot from the rising domain or the respective voltammogram.
Table 1 .
Comparison of the efficiency of the NGSs/GCE sensor with other reported modified electrodes for DA determination.
Table 1 .
Comparison of the efficiency of the NGSs/GCE sensor with other reported modified electrodes for DA determination.
Table 2 .
Voltammetric sensing of AA, DA, and UA in real specimens using N-GSs/GCE.All concentrations are in µA (n = 5). | 5,150.6 | 2022-10-03T00:00:00.000 | [
"Materials Science"
] |
High-frequency ESR measurements and ESR/NMR double resonance experiments of lightly phosphorous-doped silicon
We studied lightly doped Si:P with high-frequency (80-120 GHz) ESR and ESR/NMR double magnetic resonance techniques in the temperature range down to 1.4 K. We found dynamic nuclear polarization of 31P from steady-state ESR measurements with approximately 3.6 T. We derived the nuclear spin relaxation time, T1N, of 31P by analysing the time-evolution of ESR spectra utilizing the dynamic nuclear polarization effect. We derive temperature and magnetic field dependence of T1N and compare with experimental data. Furthermore, from our ESR measurements, we modulate the nuclear polarization of 31P by applying an RF field.
Introduction
A quantum computer design using phosphorous-doped silicon (Si:P) proposed by B. Kane [1] has been attracting much interest, since it is one of the best practical quantum computer designs reported thus far. The model uses phosphorus (P) nuclear spins embedded regularly in silicon crystal as qubits, where for quantum computer operations, electron spins should be completely polarized by high fields (a few Tesla) and low temperatures in the range of 100 mK. Experimental demonstrations, however, have not yet been successful. In this study, we study an ensemble of 31 P nuclear spins in phosphorousdoped silicon (Si:P), rather than detecting the state of a single 31 P nuclear spin, as required in Kane's model. In Si:P, the donor electrons are well localized around the donor ions if the donor concentration is below the critical doping concentration, n c = 3.7 x 10 18 cm -3 , while they are delocalized above n c . The P atoms in lightly doped Si:P are in a similar environment to those in Kane's model, and therefore, it is important to study 31 P nuclear spin state and spin dynamics by NMR especially to measure the coherence time of nuclear spins. Until now, however, it has been difficult to observe the 31 P NMR signal directly because the critical concentration of P atoms to keep donors in their proper separation for quantum computation is too low. Recently, we found that 31 P nuclear spin polarization is enhanced by two or three orders of magnitude over the thermal equilibrium value using dynamic nuclear polarization (DNP) caused by microwave irradiation (~80 GHz) [2]. Since the NMR signal would be enhanced following enhancement of the nuclear spin polarization, it may be possible to observe the 31 P NMR signal from low concentration Si:P sample with the assistance of DNP. With this aim, we constructed an ESR/NMR double-resonance system.
We report here on 31 P nuclear spin dynamics measured by cw-ESR with the assistance of DNP. Further, we show that the magnitude of the nuclear spin polarization was changed when applying NMR pulses using an ESR/NMR double resonance system. We used Si:P samples in the insulating regime with P-concentration n = 6.5 × 10 16 cm -3 where doped P atoms in Si crystal are isolated impurities.
Experimental -ESR/NMR double resonance system
The isolated donor electron spin system was studied using cw-ESR at the frequency of ω/2π = 80-120 GHz and at temperatures from 20 K to 1.4 K with field modulation. The experimental setup (see figure 1) was basically the same as that in our previous report [2], except for the position of the sample. To add an NMR system to the ESR system, we made a new probe for the NMR/ESR double resonance experiment. Inside the waveguide we placed a coil for NMR, with its axis perpendicular to the magnetic field. The resonance frequency for NMR was tuneable with a capacitor on the probe and was adjustable from outside the probe. It should be noted that the microwaves are perpendicular to the magnetic field at the sample position in this system, whereas they were parallel to the magnetic field in the previous system. This different microwave propagation direction may cause a different excitation efficiency of electron spin.
A Si:P crystal sample with the size 3 × 3 × 0.3 mm 3 was set inside the cylindrical waveguide. To observe the absorption signal, a simple transmission method was adopted without a cavity resonator. We optimised the microwave frequency and phase and the field modulation as described in references 2 and 3.
where γ e and γ N are gyromagnetic ratios for electron and nuclear spins, respectively . ESR spectra at low temperatures consist of two resonance lines separated by approximately 4 mT. We refer to an ESR absorption signal appearing at the lower (higher) field as the L-line (H-line). From equation (1), the intensity of the L-line (H-line) is almost proportional to the population of up (down) state of the 31 P nuclear spin. Therefore, if 31 P nuclear polarization is enhanced over thermal equilibrium by the DNP effect, we expect to observe an asymmetric ESR spectrum where the L-line is more intense than the Hline. We measured the spectra at 1.44 K that is lower than our previous work [2]. After 67 minutes irradiation on the H-line, the ESR spectrum exhibited the expected asymmetry (see figure 2(a)). Note that these ESR spectra were recorded with extremely small microwave power (approximately 14dB attenuation). Approximately 20 % nuclear polarization was obtained by DNP, where the intensity of each line was evaluated from the integrated intensity of the line. We repeated the measurements by sweeping the magnetic field, and obtained the time t evolution of the ESR spectra, showing the nuclear polarization P relaxing to its thermal equilibrium (see figure 2(b)). The nuclear relaxation time, T 1N , was found from the best fit of and was obtained as 24 ± 7 min at 3.65 T and 1.44 K, where P 0 and a are also fitting parameters. The values of T 1N were derived similarly for above 3 K and under 8.58 T [4], and below 2.2 K and under 4.6 T [5]. In ref. 5, the relaxation times were attributed to two components: the faster one originated in the dynamic polarization of 29 Si nuclear spins (I=1/2) located nearby the P atom, and the slower one was observed t ≳ 500 s. In this work, we observed the longer relaxation time. It is worth comparing T 1N data taken across a wide range of the magnetic field. Figure 3 shows 1/(T⋅T 1N ) vs. 1/T, including data from these references. To analyse these data, we start with a general formula for T 1N where ω N is the nuclear Larmor angular frequency, H loc ± is the local field caused at 31 P nucleus perpendicular to B 0 , and {a,b}= (ab+ba)/2. Assuming an exponential correlation function with a correlation time τ, H loc − (0), H loc + (t) ~ exp(-t/τ), we obtain, where M 2 is a relevant second moment for the hyperfine coupling between 31 P and donor electrons.
Because the H loc ± originates from S x I x and S y I y terms in the Hamiltonian (1), T 1N is associated with the cross relaxation which involves the electron-nuclear flip-flop transition. Let us put τ as the cross relaxation time T x ~ 1/(B 0 2 T) [6]. For the relevant situation, ω N = γ N (B 0 + B HF ) where B HF = A/γ N is the hyperfine field, and ω N τ >> 1. Taking into account the effect of saturation of electron magnetization [7], we finally derive the following equation: where P e = tanh γ e !B 0 2k B T ( ) is the polarization of the electron spin; and C is a constant, independent of the magnetic field and temperature. Here, the activation-type temperature dependence of T 1N comes from the saturation effect of the magnetization. In figure 3, the dashed lines are the best fits of equation (5) to the experimental data, where the only fitting parameter is the prefactor C. For our data at 3.6 T, we used the same value of C with that for 4.6 T. The fitted results agree reasonably well to the experimental data. The obtained values of C for different conditions of temperature ranges and magnetic fields are nearly equal with each other, suggesting that the above discussion correctly accounts for 31 P nuclear relaxation process in lightly doped Si:P. It should be noted here that if the hyperfine coupling has anisotropy, the correlation function <S z S z > may contribute to T 1N [8]. For this case, τ is put as T 1e and then T 1N is proportional to T 1e . However, because of B 0 -4 dependence for the direct process of T 1e relaxation [2,9], the field dependence of the experimental data of T 1N can not be | 2,021.6 | 2014-12-08T00:00:00.000 | [
"Physics"
] |
Minimizing Work Risks in Indonesia: A Case Study Analysis of Hazard Identification, Risk Assessment, and Risk Control Implementation
. Workplace safety must be a primary priority in order to prevent work-related accidents that might result in disability or death. Promoting workplace safety is a crucial component of labor protection since it can lead to improved employee relations and management. Construction and industrial industries, which rely on heavy machinery or outside labor, are especially vulnerable to safety and health concerns that can lead to accidents. This case study and literature review investigates the Hazard Identification, Risk Assessment, and Risk Control (HIRARC) theory as a systematic method for promoting workplace safety. HIRARC seeks to identify potential hazards within an organization and control associated risks in order to prevent accidents. The study reveals numerous factors that contribute to occupational accidents and the potential harm they can inflict. The study specifically addresses worker attitudes toward the usage of personal protective equipment (PPE) and high-temperature production environments as accident risk factors. The study recommends that organizations prioritize instilling work discipline in their employees, with an emphasis on the correct use of PPE to prevent accidents and promote workplace safety.
Introduction
Along with the rapid development of technology, the manufacturing process is required to meet the desired standards and quality both in terms of quality and safety.In addition, in facing the era of industrialization and the era of globalization as well as the free market, occupational health and safety is one of the prerequisites set out in economic relations between countries that must be met by all countries [1] The acceleration of infrastructure development carried out by the government is a development of the era of industrialization which is global in nature and has very rapid development, such as the construction industry which provides construction services and has a significant role in current development.Work in the construction industry sector is a dangerous job and has a fairly high risk of work accidents.If a work accident occurs, it will cause various losses, both material losses, loss of life, and disruption of the production process.
Based on graphic data on work accidents in Indonesia for the last five (5) years, quoted from the website for the General K3 Expert Training, the Social Security Administration for Manpower (BPJS) noted that in 2017 the number of reported work accidents reached 123,041 cases, while throughout 2018 reached 173,105 cases with Work Accident Insurance (JKK) claims amounting to Rp 1.2 trillion.For 2019, there were 114,000 cases, and an increase of 55.2% cases to 177,000 cases in 2020.Then, from January to September 2021, there were 82,000 cases of work accidents and 179 cases of work-related illnesses, 65 percent of which were caused by Covid-19 [2] According to data released by the Indonesian Ministry of Manpower in 2020 [3], 57.5% of the total 126.51 million working population in Indonesia, have a low level of education.This condition affects the workers' low awareness of the importance of OHS (Occupational Health and Safety) culture.At the same time, the employer is also at risk of having to bear large costs if a work accident occurs in the workplace.
The purpose of implementing the Occupational Health and Safety Management System (SMK3) is to reduce or prevent accidents that result in injury or material loss; therefore Occupational Safety and Health (K3) experts seek to study the phenomenon of accidents, their causal factors, and effective ways to prevent them.Efforts to prevent accidents in Indonesia are still facing various obstacles, one of which is the traditional mindset that considers accidents as disasters so that people are resigned.
To prevent accidents, it is important to identify potential risks.The HIRARC (Hazard Identification, Risk Assessment, and Risk Control) method is commonly used for this purpose.This approach involves effective planning, including experience and assessment steps, and the establishment of control measures based on collected data.By implementing the HIRARC method, companies can develop a comprehensive model for managing occupational safety and health (OSH).This method guides the implementation of safety measures within the company, enabling it to address management issues and solve problems independently.(Risk Control).This whole process is also known as risk management.HIRARC is an important element in the occupational safety and health management system that is directly related to hazard prevention and control efforts.
According to Ramli [4] HIRARC is a process that has occurred Controlling hazards that can occur in routine and non-routine activities in the company then conducts a risk assessment of these hazards and creates a hazard control program in order to minimize the level of risk to a higher level with the aim of preventing accidents.1. Hazard Identification According to [4] "Hazard identification is a systematic effort to find out the existence of hazards in organizational activities".Every workplace that tries to take risks from each event and then weighs the conditions in determining risk are as follows: a) Normal Operating Conditions (N): Daily work and according to procedures b) Abnormal Operating Conditions (A): Work outside of procedures c) Emergency Conditions (E): Situations that are difficult to control 2. Risk Assessment According to Ramli [4] Risk assessment is an attempt to calculate a risk and determine whether the risk is acceptable or not.Risk is used to determine the level of assessment of the likelihood of occurrence (probability) and the severity that can be caused (severity).The qualitative method according to the AS/NZS 4360 standard, the probability or probability is given a range between a risk that rarely occurs to a risk that can occur at any time.For severity or severity, it is categorized between events that do not cause injury or only minor losses which are the most severe if they can cause fatal events (death) or major damage to company assets.3. Risk Control Risk control is carried out on all hazards found in the hazard process and considers risks to find priorities and how to control them.Furthermore, in controlling the control must consider the control control starting from elimination, substitution, technical control, administrative and PPE.
Occupational Health and Safety Management System
According to Regulation of the Minister of Manpower No. 05 released in 1996, "The OHS Management System is part of the overall management system which includes organizational structure, planning, responsibilities, implementation, procedures, processes, and resources needed for development, application, study, assessment, and maintenance of occupational safety and health policies in controlling risks related to work activities in order to create a safe, efficient and productive workplace".
Occupational Safety and Health (K3)
According to Suryati Darmiatun and Tasrial [5] the definition of K3 procedures includes preventing deviations from previously set K3 activities and objectives.According to Syafrial and Ardiansyah [6] there are things that must be considered in making OSH procedures in organizations: 1. Organizational Commitment in implementing OHS management 2. Focus/type, complexity of organizational structure and size, 3. Nature and scale of organizational risks.4. Implementation of procedures (easy to operate by the user); and 5. Measurability and able to evaluate the results of the implementation of the procedure.
Methodology
This study utilized a systematic literature review to collect the necessary data.According to Xiao and Watson [7], a systematic literature review (SLR) is a technique that involves determining, identifying, and critically evaluating the selected sources in order to provide an explanation for the formulated question.Therefore, a well-planned search strategy is required for the specified question.This study's systematic literature review was conducted using the Google Scholar database (https://scholar.google.com/).Google Scholar is an excellent choice for finding scientific sources due to its comprehensive coverage of journals, conference papers, theses, and books.Its user-friendly interface simplifies the search process, allowing researchers to efficiently locate sources on a specific topic.Being a free online resource, it provides open access to scholarly literature, ensuring universal accessibility.Additionally, the integration of citation metrics and the ability to access various publication formats make it a convenient and time-efficient tool for researchers.
When the keyword "HIRARC" was entered, 2,670 results were displayed.The website displayed 1,820 results after the second keyword "Indonesia" was entered, narrowing the search results further.The search was then limited to the keyword "industry," yielding 510 results.From the 510 records, research conducted more than ten years ago and by the same company are excluded.As a result, the author selected only five sources to analyze five distinct companies that offer distinct products and services.
Analysis and Discussion
The method used in this paper is a literature review that discusses HIRARC related to case studies on five (5) companies.Here are the companies that have been analyzed regarding HIRARC: PT.Hutama Karya Persero is an Indonesian state-owned company engaged in the construction service industry.Based on the results of the analysis in research journals that discuss the implementation of HIRARC at PT. Hutama Karya (Persero) Tbk., it is concluded that PT.Hutama Karya needs to take action to evaluate the risk of work accidents in the work of foundation erection and formwork brackets from piles because there are 7 risks of work accidents that are classified as high and must be repaired in 1 months ahead, as well as 3 risks of work accidents that are classified as medium and must be repaired within the next 1 year.PT.Hutama Karya needs to take administrative actions such as workers who do not wear complete PPE according to K3 regulations, they will be given penalty which will make workers aware of the behavior of workers not to repeat the same mistakes or other violations.Meanwhile, for workers who obey K3 regulations in using PPE every day, complete in the field at work, they are given rewards which will increase awareness and concern for the use of PPE and workers can be motivated not to take unsafe actions and unsafe conditions [8].
Application of HIRARC on CV. Jati Jepara Furniture
CV. Jati Jepara Furniture is a business engaged in the production of furniture.Based on the results and discussions in research journals that discuss HIRARC on CV.Jati Jepara Furniture, it can be concluded that the dangers contained in CV.Jati Jepara Furniture are in the form of the highest scores in the assessment process, namely 4 & 5 such as dust and sawdust that interfere with breathing and finishing sprays that interfere with smell and fall from the second floor (2) as well as machine noise that disturbs hearing, has such a high value because the long-term effects will affect the health of workers and are sustainable over time.The medium level is with values ranging from 3 & 2. Included in the medium level are the hands hit by the planner machine, the fingers hit by the spindle, the sanding machine conveyor, and the hands hit by the cutter.Then the lowest level with a value of 1 is that the hand is exposed to the cutter and the hand is exposed to glue when packing [9].
Application of HIRARC at PT. Glory Industrial Semarang II
PT. Glory Industrial Semarang II is a manufacturing company for the American and European markets.Implementation of HIRARC (Hazard Identification Risk Assessment and Risk Control) in the Production process of PT.Glory Industrial Semarang II.Has implemented 88 applications of HIRARC (Hazard Identification Risk Assessment and Risk Control) of the 97 applications that have been written in the HIRARC (Hazard Identification Risk Assessment and Risk Control) document, the Warehouse process contained 26 of 29 applications, the cutting process contained 25 of 29 applications, the process of Sewing there are 18 out of 19 applications, Ironing or rubbing process there are 7 out of 8 applications, and Finishing process there are 12 out of 13 applications, and in practice workers do not comply with the use of PPE, the company does not provide PPE, such as footwear with insulating materials with standards or regulations that applicable, namely in OSHA standards explaining that footwear is made of aluminum, steel, fiber or plastic, and can protect feet against industrial heat, stab wounds, and electrical hazards in the workplace and there are still production equipment that does not comply with standards or regulations applicable, such as a damaged pallet is not immediately replaced.Based on the data that has been obtained in this study, damaged pallets should not be used, specifications for pallets used for racking must include pallet quality and inspection of all storage equipment must be carried out systematically on a regular basis and usually carried out from ground level, where most of the damage tends to occur.unless there is an indication of a problem that needs investigation Recommendations that can be given to further researchers are to conduct a preliminary study so as to find more accurate data for research, to be able to add secondary data that researchers cannot obtain at this time, and also to adapt the research to standards and regulations legislation so that research results are better [10].
Application of HIRARC at PT. Cahaya Murni Andalas Permai
PT. Cahaya Murni Andalas Permai is a company engaged in the furniture sector with the Bigland Springbed trademark.PT.CMAP is a subsidiary of PT.Cahaya Buana Group which has not yet received the ISO 14001 certificate as its parent company.As a large company, PT.CMAP must be able to implement an occupational health and safety management system (SMK3) to minimize the risk of work accidents and increase company productivity.To demonstrate commitment related to OHS, in 2012 the management of PT.CMAP began to document detailed data on every work accident that occurred, such as inhalation of toxic gas, slashing with a cutting knife, and falling on equipment while working.As many as 10 cases of work accidents occurred in 2014 in the company.(3) From the results of the analysis, it can be seen how the countermeasures that can be applied to PT. CMAP so that the number of accidents in the factory area is expected to be reduced and achieve zero accidents.The work accident control effort that has been carried out by this company is to provide first aid kits and safety-first signs.The results of the identification of hazards in the production area of PT.CMAP indicates that inhalation of hazardous materials (particulate foam) is frequent and moderate.In general, the results of the work accident risk analysis at PT. CMAP is in the low category.However, there are still 2 of the 9 sub-divisions of the following production processes that are important to note, namely Foam Cutting and Finishing.Several risk controls that can be applied to PT CMAP include engineering: administrative controls and personal protective equipment.Regarding further research, it is better to design an appropriate K3 culture to be applied to PT. CMAP.The results of the average risk assessment show that the production area at PT. CMAP is still at a low risk level with 2 moderate accidents, namely equipment scratches and inhalation of harmful gases.In addition, the production area at PT. CMAP is still at a low risk level in addition to 2 moderate sub-divisions.The level of risk in each type of work accident and production subdivision.Risk control aims to reduce and even prevent work accidents to zero accidents.Based on the results of the risk evaluation, several risk controls can be applied at PT. CMAP.For engineering controls, providing safety barriers to cutting machines to prevent accidents when workers are careless, and installing exhaust fan ventilation systems for work parts that produce hazardous gases and particulates from the remaining crushed foam.Administrative controls, such as preventing workers from getting bored, tired and losing concentration by controlling workers' working hours or changing work shifts; Provide training and counseling on the safe use of machines and equipment with standard work procedures regularly and continuously every time, for example once a month; Checking equipment such as sharpening a dull knife to become sharp; and Provide warning signs for wearing personal protective equipment (PPE) and the presence of hazardous materials (at certain locations).Personal protective equipment, for the type of risk of being hit, wear personal protective equipment from the danger of being hit, it can be the use of a helmet to protect the head and a safety boot to protect the feet from falling equipment/materials.This type of slip risk can be avoided by wearing anti-slip shoes when working in a work environment where there is a risk of slipping.The risk of cutting equipment can use protection on parts of the body that are vulnerable to the risk of cutting work equipment such as protective gloves that are resistant to knife cuts.The type of risk of inhalation of hazardous materials, apart from good air circulation, inhalation of hazardous materials can be avoided by using a mask or respirator appropriate to the type, concentration, and duration of exposure of the worker to the hazardous substance [11].
Application of HIRARC at PT. PAL Indonesia
PT. Pal Indonesia is a company engaged in shipping construction.Based on the results of research and discussion, it can be concluded as follows: The results of hazard identification using the Hazard Identification, Risk Assessment, and Risk Control (HIRARC) method on the work of the fuel pipe installation system have 7 aspects with 10 potential hazards, when the diesel generator system work there are 4 aspects with 7 potential hazards, and the work of the mooring system has 4 aspects with 7 potential hazards.The results of the risk assessment using the Hazard Identification, Risk Assessment and Risk Control (HIRARC) method on the activity of the fuel pipe installation system against the danger of leaking fuel storage tanks obtained a value of 16 and oil storage tank leaks, gas leaks in the network, compressed air leaks in pipelines obtained a value of 12.When the activity of the diesel generator system against the danger of spilled goods/oil/fuel obtained a value of 16 and the danger of electric current, scuffed cables obtained a value of 12. Also, the activities of the mooring system against the dangers of heavy objects, rigging, operator error obtained a value 12 and the danger when the ship docks or exits the dock is obtained a value of 9 [12].
Conclusion
The HIRARC method is a systematic approach to occupational safety that can be tailored to the specific needs of each company.It is designed to identify potential hazards and risks, assess the likelihood and severity of those risks, and then take steps to control or mitigate them.The HIRARC method is particularly useful in identifying problems related to work safety.In the HIRARC method, the level of risk is categorized based on a traffic light system, with the red zone indicating high risk, the yellow zone indicating medium risk, and the green zone indicating low risk.The approach to risk control and mitigation is then adjusted according to the risk level.For example, in the red zone, the focus is on hazard elimination, while in the yellow zone, the focus is on implementing safeguards and in the green zone, the focus is on using personal protective equipment (PPE).A study conducted by [13] in the construction industry in Malaysia found that the HIRARC method was effective in identifying and controlling workplace hazards.Similarly, a study by [14] in the oil and gas industry in Malaysia found that the HIRARC method was useful in identifying potential hazards and risks associated with offshore drilling operations.The five case studies, although they are from different industries, share some similarities in terms of the application of HIRARC.All companies have identified potential hazards in their workplaces and have taken actions to reduce or eliminate the risks associated with those hazards.The risks were categorized into high, medium, and low levels, and appropriate control measures were recommended to address the identified risks.
Another similarity between these case studies is the importance of conducting regular risk assessments.In these cases, conducting regular risk assessments was recommended to identify and address potential hazards in the workplace.Furthermore, these studies emphasize the importance of creating a safer work environment for workers.This is achieved through implementing appropriate control measures, providing personal protective equipment, and conducting training to raise awareness and promote safe working practices.
While the correct use of PPE is a standard procedure for workplace safety, additional recommendations can enhance it.Engineering controls modify the environment to minimize hazards, while administrative controls focus on work practices and policies.Hazard elimination/substitution removes or replaces hazards, and regular maintenance and inspections identify issues.However, these recommendations have limitations.Engineering controls can be costly, administrative controls depend on human compliance, hazard elimination/substitution may not always be feasible, and maintenance/inspections may not cover all hazards.Combining multiple measures, including PPE, is the most effective approach for workplace safety. | 4,708.2 | 2023-01-01T00:00:00.000 | [
"Environmental Science",
"Engineering",
"Business"
] |
Osserman Conditions in Lightlike Warped Product Geometry
In this paper, we consider Osserman conditions on lightlike warped product (sub-)manifolds with respect to the Jacobi Operator. We define the Jacobi operator for lightlike warped product manifold and introduce a study of lightlike warped product Osserman manifolds. For the coisotropic case with totally degenerates first factor, we prove that this class consists of Einstein and locally Osserman lightlike warped product.
Introduction
The Riemann curvature tensor is one of the central concepts in the mathematical field of differential geometry. It assigns a tensor to each point of a (semi-)Riemannian manifold that measures the extent to which the metric tensor is not locally isometric to that of Euclidean space. It expresses the curvature of (semi-)Riemannian. Curvature tensor is a central mathematical tool in the theory of general relativity and gravity.
The geometry of a pseudo-Riemannian manifold ( ) , M g is the study of the curvature 4 *
R T M ∈⊗
which is defined by the Levi-Civita connection ∇ .
Since the whole curvature tensor is difficult to handle, the investigation usually focuses on different objects whose properties allow us to recover curvature tensor. One can for example associate to R an endomorphism on tangent bundle of a manifold. In [1] P. Gilkey The Riemannian curvature tensor of a Levi-Civita connection is algebraic on It is obvious that Motivated by the recent works on lightlike geometry, we consider in this paper lightlike warped product (sub-)manifolds and examine Osserman conditions depending on geometric properties of the factors.
In Section 2, we present background materials of lightlike geometry. In Section 3 we define lightlike warped product Osserman (definition 3.2) and present some important results of our research (Theorem 2, Theorem 3, Theorem 4). Section 4 is concerned with an example given in the neutral semi-Riemannian space 6 3 R ..
Preliminaries
Let ( ) A null submanifold M with nullity degree r equipped with a screen distribution The Gauss and Weingarten formulas are , Since ∇ is a metric connection, using (12)-(14) we have Let P the projection morphism of TM onto ( ) It follows from (17) and (18) that Let R and R denote the Riemannian curvature tensors on M and M respectively. The Gauss equation is given by Using (10) and (12) We say that the screen distribution ( ) S TM is totally umbilical if for any section N of ( ) ltr TM on a coordinate neighbourhood From now on, we assume that the frames ( ) i.e.
It is straightforward to check that g defines a non-degenerate metric on M and that for 0 r = it coincides with g. The ( )
Lightlike Warped Product Geometry and Osserman Conditions
As it is well known, Jacobi operators are associated to algebraic curvature maps (tensors). But contrary to non-lightlike manifolds, the induced Riemann curvature tensor of a lightlike submanifold For degenerate warped product setting, we consider the associated non-degenerate metric g defined by (30) of a lightlike warped product metric. We denote by g and # g the natural isomorphisms with respect to g . The equivalent relation of (3) is given by , N g being totally degenerate, the Riemannian curvature tensor 1 R and its Weyl tensor vanish identically. Moreover 1 N is conformally Osserman. By Theorem 5 in [12], R is an algebraic curvature tensor. If we restrict our study on the product ( ) , it is obvious that N is a conformally Osserman manifold. The lightlike warped product metric g belongs to the conformal class of Due to Proposition 2 in [12], we establish the following two results for coisotropic warped product ( ) Proof. From (22), the induced Riemannian curvature tensor is Using (20) and (27), for all The pseudo-Jacobi operator ( ) R J X is given by Therefore N is pointwise Osserman.
■ From Proposition 2, theorem 5 in [12] and Theorem 4.3 in [9], we proved the following result that characterizes any screen distribution of a coisotropic warped product of a semi-Riemannian space form with the first factor totally null. This case consists of a class of null warped products that is Einstein and pointwise Osserman. , , ,
Example
Let M be submanifold of 6 3 given by , ; sin | 1,006.6 | 2020-03-25T00:00:00.000 | [
"Mathematics"
] |
Bridging resonant leptogenesis and low-energy CP violation with an RGE-modified seesaw relation
We propose a special type-I seesaw scenario in which the Yukawa coupling matrix $Y^{}_\nu$ can be fully reconstructed by using the light Majorana neutrino masses $m^{}_i$, the heavy Majorana neutrino masses $M^{}_i$ and the PMNS lepton flavor mixing matrix $U$. It is the RGE-induced correction to the seesaw relation that helps interpret the observed baryon-antibaryon asymmetry of the Universe via flavored resonant thermal leptogenesis with $M^{}_1 \simeq M^{}_2 \ll M^{}_3$. We show that our idea works well in either the $\tau$-flavored regime with equilibrium temperature $T \simeq M^{}_1 \in (10^9, 10^{12}]$ GeV or the $(\mu+\tau)$-flavored regime with $T \simeq M^{}_1 \in (10^5, 10^9]$ GeV, provided the light neutrinos have a normal mass ordering. We find that the same idea is also viable for a {\it minimal} type-I seesaw model with two nearly degenerate heavy Majorana neutrinos.
Introduction
A special bonus of the canonical (type-I) seesaw mechanism [1][2][3][4][5] is the thermal leptogenesis mechanism [6], which provides an elegant way to interpret the mysterious matter-antimatter asymmetry of our Universe. The key points of these two correlated mechanisms can be summed up in one sentence: the tiny masses of three known neutrinos ν i are ascribed to the existence of three heavy Majorana neutrinos N i (for i = 1, 2, 3), whose lepton-number-violating and CP-violating decays result in a net lepton-antilepton number asymmetry Y L which is finally converted to a net baryon-antibaryon number asymmetry Y B as observed today.
In the standard model (SM) extended with three right-handed neutrinos and lepton number violation, it is the following seesaw formula that bridges the gap between the masses of ν i (denoted as m i ) and those of N i (denoted as M i ): where M ν represents the light (left-handed) Majorana neutrino mass matrix, v 174 GeV is the vacuum expectation value of the SM neutral Higgs field, M R stands for the heavy (righthanded) Majorana neutrino mass matrix, and Y ν is a dimensionless coupling matrix describing the strength of Yukawa interactions between the Higgs and neutrino fields. The eigenvalues of M ν (i.e., m i ) can be strongly suppressed by those of M R (i.e., M i ) as a consequence of M i v (for i = 1, 2, 3), and that is why m i v naturally holds. Although such a seesaw picture is qualitatively attractive, it cannot make any quantitative predictions unless the textures of M R and Y ν are fully determined [7]. Without loss of generality, one may always take the basis in which both the charged-lepton mass matrix M l and the heavy Majorana neutrino mass matrix M R are diagonal (i.e., M l = D l ≡ Diag{m e , m µ , m τ } and M R = D N ≡ {M 1 , M 2 , M 3 }). In this case the undetermined Yukawa coupling matrix Y ν can be parametrized as follows -the so-called Casas-Ibarra (CI) parametrization [8]: where U is the Pontecorvo-Maki-Nakagawa-Sakata (PMNS) neutrino mixing matrix [9][10][11] used to diagonalize M ν in the chosen basis (i.e., U † M ν U * = D ν ≡ Diag{m 1 , m 2 , m 3 }), and O is an arbitrary complex orthogonal matrix. This popular parametrization of Y ν is fully compatible with the seesaw formula in Eq. (1), but the arbitrariness of O remains unsolved. Note that it is the complex phases hidden in Y ν that govern the CP-violating asymmetries ε iα between the lepton-number-violating decays N i → α + H and N i → α + H (for i = 1, 2, 3 and α = e, µ, τ ) [6,[12][13][14]. In particular, the flavored asymmetries ε iα depend on both (Y [15][16][17][18][19][20][21][22]. Given the CI parametrization of Y ν in Eq. (2), one can immediately see that ε i have nothing to do with the PMNS matrix U [23][24][25], while ε iα will depend directly on U if O is assumed to be real [26][27][28][29][30][31].
Note also that both U and D ν in Eq. (2) are defined at the seesaw scale Λ SS v, which can be related to their counterparts at the Fermi scale Λ EW ∼ v via the one-loop renormalization-group equations (RGEs) [32][33][34][35][36][37]. In this connection the RGE-induced correction to the CI parametrization of Y ν has recently been taken into account [38] 1 : where T l = Diag{I e , I µ , I τ }, and the evolution functions I 0 and I α (for α = e, µ, τ ) are given by in the SM with g 2 , λ, y t and y α standing respectively for the SU(2) L gauge coupling, the Higgs self-coupling constant, the top-quark and charged-lepton Yukawa coupling eigenvalues [38]. Eq. (3) tells us that the unflavored CP-violating asymmetries ε i should also have something to do with the PMNS matrix U at low energies because of a slight departure of T l from the identity matrix. This new observation makes it possible to establish a direct link between unflavored thermal leptogenesis and low-energy CP violation under the assumption that O is a real matrix [38,39], but one may still frown on the uncertainties associated with O.
In this work we simply assume the unconstrained orthogonal matrix O to be the identity matrix (i.e., O = 1), so as to reconstruct the Yukawa coupling matrix Y ν in terms of not only M i at the seesaw scale but also m i and U at low energies. Considering the fact of y 2 e y 2 µ y 2 τ 1 in the SM, we find that I e I µ 1 and I τ 1 + ∆ τ are two excellent approximations, where denotes the small τ -flavored effect [38]. Then the expression of Y ν in Eq. (3) can be somewhat simplified and explicitly written as in which the scale indices Λ SS and Λ EW have been omitted for the sake of simplicity, but one should keep in mind that the values of m i and U αi (for i = 1, 2, 3 and α = e, µ, τ ) are subject to the Fermi scale Λ EW . With much less arbitrariness, we are going to show that such a special RGE-modified seesaw scenario allows us to account for the observed baryon-to-photon ratio η ≡ n B /n γ (6.12 ± 0.03) × 10 −10 7.04Y B in today's Universe [40] by means of flavored resonant thermal leptogenesis with M 1 M 2 M 3 [41][42][43][44] 2 . We find that our idea works in either the τ -flavored regime with equilibrium temperature T M 1 ∈ (10 9 , 10 12 ] GeV or the (µ + τ )-flavored regime with T M 1 ∈ (10 5 , 10 9 ] GeV, if the mass spectrum of three light Majorana neutrinos has a normal ordering. In addition, we show that the same idea is also viable for thermal leptogenesis in a minimal type-I seesaw model [50,51] with two nearly degenerate heavy Majorana neutrinos.
Resonant leptogenesis
In the type-I seesaw scenario the lepton-number-violating decays N i → α + H and N i → α + H are also CP-violating, thanks to the interference between their tree and one-loop (self-energy and vertex-correction) amplitudes [6,[12][13][14]. Given M 1 M 2 M 3 , however, the near degeneracy of M 1 and M 2 can make the one-loop self-energy contribution resonantly enhanced [41][42][43][44][45][46][47][48][49]. As a result, the flavor-dependent CP-violating asymmetries ε iα between N i → α + H and N i → α + H decays (for i = 1, 2 and α = e, µ, τ ) are dominated by the interference effect associated with the self-energy diagram [42,43]: where ξ ij ≡ M i /M j and ζ j ≡ Y † ν Y ν jj / (8π) with the Latin subscripts j = i running over 1 and 2. Taking account of the expression of Y ν in Eq. (6), we immediately arrive at together with The flavored CP-violating asymmetries in Eq. (7) turn out to be 2 For such a heavy Majorana neutrino mass spectrum, the role of N 3 in thermal leptogenesis is expected to be negligible because its contribution has essentially been washed out at T M 1 M 2 .
where α = e, µ, τ and j = i = 1, 2; and ζ j = . One can see that ε iα ∝ ∆ τ holds, and hence ε iα will be vanishing or vanishingly small if O = 1 is taken but the RGE-induced effect is neglected. Note that the first term in the square brackets of Eq. (10) depends only on a single combination of the two so-called Majorana phases ρ and σ of U [7], denoted here as φ ≡ ρ − σ; and the second term is only dependent on the Dirac phase δ of U . So a direct connection between the effects of leptonic CP violation at high-and low-energy scales has been established in our RGE-assisted seesaw-plus-leptogenesis scenario.
In the flavored resonant thermal leptogenesis scenario under consideration, the CP-violating asymmetries ε iα are linked to the baryon-to-photon ratio η as follows [52,53]: where κ 1α and κ 2α are the conversion efficiency factors, and the sum over the flavor index α depends on which region the lepton flavor(s) can take effect. To evaluate the sizes of κ iα , let us first of all figure out the effective light neutrino masses Then the so-called decay parameters K iα ≡ m iα /m * can be defined and calculated, where 1.08 × 10 −3 eV represents the equilibrium neutrino mass and H(M 1 ) = 8π 3 g * /90M 2 1 /M pl is the Hubble expansion parameter of the Universe at temperature T M 1 with g * = 106.75 being the total number of relativistic degrees of freedom in the SM and M pl = 1.22 × 10 19 GeV being the Planck mass.
• For M i 10 12 GeV (for i = 1, 2), all the leptonic Yukawa interactions are flavor-blind. In this case the unflavored leptogenesis depends on the overall CP-violating asymmetry ε i = ε ie + ε iµ + ε iτ 0 in our scenario, as one can easily see from Eq. (10).
• For 10 9 GeV M i 10 12 GeV, the τ -flavored Yukawa interaction is in thermal equilibrium and thus the τ flavor can be distinguished from e and µ flavors in the Boltzmann equations [52,53]. In this case one has to consider two classes of lepton flavors: the τ flavor and a combination of the indistinguishable e and µ flavors. We are then left with the flavored CP-violating asymmetries ε iτ and ε ie + ε iµ together with the flavored decay parameters K iτ and K ie + K iµ , and the latter can be used to determine the corresponding conversion efficiency factors. [ 21,54]. Given the initial thermal abundance of heavy Majorana neutrinos, the efficiency factor κ (K α ) can be approximately expressed as [21,55] where z B (K α ) 2 + 4K 0.13 α exp (−2.5/K α ). We proceed to numerically illustrate that our resonant leptogenesis scenario works well. First of all, the values of I 0 and ∆ τ at the seesaw scale are illustrated in Fig. 1 with Λ SS ∈ [10 5 , 10 12 ] GeV in the SM. Adopting the standard parametrization of U [7], we need to input the values of eleven parameters: two heavy neutrino masses M 1 and M 2 (or equivalently, M 1 and d); three light neutrino masses m i (for i = 1, 2, 3); three lepton flavor mixing angles θ 12 , θ 13 and θ 23 ; and three CP-violating phases δ, ρ and σ (but only δ and the combination φ ≡ ρ − σ contribute). For the sake of simplicity, here we only input the best-fit values of θ 12 , θ 13 , θ 23 , δ, ∆m 2 21 ≡ m 2 2 − m 2 1 and ∆m 2 31 ≡ m 2 3 − m 2 1 (or ∆m 2 32 ≡ m 2 3 − m 2 2 ) extracted from a recent global analysis of current neutrino oscillation data [56,57] Given the above inputs, we can estimate the size of K α with the help of Eq. (12). It is found that K e 2.4, K µ 2.9 and K τ 2.6 in the normal neutrino mass ordering case; or K e 44.9, K µ 20.8 and K τ 26.4 in the inverted mass ordering case. Now that K α > 1 holds in either case, any lepton-antilepton asymmetries generated by the lepton-number-violating and CP-violating decays of N 3 with M 3 M 1 M 2 can be efficiently washed out. It is therefore safe to only consider the asymmetries produced by the decays of N 1 and N 2 . Now let us use the observed value of η to constrain the parameter space of φ and d by allowing m 1 (or m 3 ) and M 1 to vary in some specific ranges; or to constrain the parameter space of m 1 (or m 3 ) and M 1 by allowing φ and d to vary in some specific ranges, and by taking account of both the τ -flavored regime with T M 1 ∈ (10 9 , 10 12 ] GeV and the (µ + τ )flavored regime with T M 1 ∈ (10 5 , 10 9 ] GeV. We find no parameter space in the inverted neutrino mass ordering case, in which the conversion efficiency factors are strongly suppressed. Our RGE-assisted resonant leptogenesis scenario is viable in the normal neutrino mass ordering case, and the numerical results for the τ -and (µ + τ )-flavored regimes are shown in Figs. 2 and 3, respectively. Some brief discussions are in order.
• The τ -flavored regime (i.e., T M 1 ∈ (10 9 , 10 12 ] GeV). As can be seen in the upper panels of Fig. 2, φ is mainly allowed to lie in two possible ranges: [0, 2π/5] and [π, 7π/5]; and the dimensionless parameter d satisfies d 4 × 10 −5 . These two ranges of φ differ from each other just by a shift or reflection; and they are symmetric about φ = π/5 and φ = 6π/5, respectively. Such a feature can easily be understood. Considering ε ie + ε iµ + ε iτ = 0 and M 1 M 2 , we have η ∝ ε 1τ + ε 2τ ∝ sin 2(φ − ϕ τ ) with ϕ τ ≡ arg U * τ 1 U τ 2 e iφ being dominated by the CP-violating phase δ whose value is around 19π/20. And thus if φ is replaced by π + φ (or 7π/5 − φ) and 2π/5 − φ (or 12π/5 − φ), the value of η will keep unchanged. Note that even if φ = 0 holds, there can still exist some parameter space for the four free parameters. In this special case the Dirac CP phase δ, which is sensitive to leptonic CP violation in neutrino oscillations, is the only source of CP violation in our flavored resonant leptogenesis scenario. As shown in the lower panels of Fig. 2, M 1 varies in the range (10 9 , 10 12 ] GeV and m 1 0.01 eV holds. But for a given value of d, the parameter space of M 1 is generally constrained to a specific range; and when d decreases, the allowed range of M 1 increases correspondingly. For the most part of the allowed range of φ, the smallest neutrino mass m 1 can approach zero with a given value of d, and the value of η is almost independent of m 1 when m 1 becomes small enough since η is dominated by the term containing m 2 m 1 in this case. When the value of φ approaches the edge of the allowed range of φ, there will be a lower limit on m 1 which can be seen from the orange band in the lower-left panel of Fig. 2. This feature is mainly a consequence of the reduction of ε iτ in magnitude, which is proportional to sin 2(φ − ϕ τ ).
• The (µ + τ )-flavored regime (i.e., T M 1 ∈ (10 5 , 10 9 ] GeV). It is obvious that in this case the parameter space is largely reduced as compared with that in the τ -flavored regime.
It is finally worth mentioning that the normal neutrino mass ordering is currently favored over the inverted one at the 3σ level, as indicated by a global analysis of today's available experimental data on various neutrino oscillation phenomena [56][57][58]. This indication is certainly consistent with our RGE-assisted resonant leptogenesis scenario.
On the minimal seesaw
Since we have focused on resonant leptogenesis with M 1 M 2 M 3 based on the type-I seesaw mechanism, it is natural to consider a minimized version of this scenario by switching off the heaviest Majorana neutrino N 3 . That is, we can simply invoke the minimal type-I seesaw model [50,51] with two nearly degenerate heavy Majorana neutrinos to realize resonant leptogenesis. In this case the Yukawa coupling matrix is a 3 × 2 matrix, and thus the arbitrary orthogonal matrix O in the CI parametrization of Y ν is also a 3 × 2 matrix. To remove the uncertainties associated with O, we may take corresponding to the normal (m 1 = 0) or inverted (m 3 = 0) neutrino mass ordering. Then the expression of Y ν in Eq. (6) can be simplified to with m 1 = 0, m 2 = ∆m 2 21 and m 3 = ∆m 2 31 ; or with m 3 = 0, m 2 = −∆m 2 32 and m 1 = −∆m 2 32 − ∆m 2 21 . In other words, the mass spectrum of three light neutrinos is fully fixed by current neutrino oscillation data in the minimal seesaw model, so the uncertainty associated with the absolute light neutrino mass scale disappears. Another bonus is that one of the Majorana phases of U (i.e., ρ) can always be removed thanks to the vanishing of m 1 or m 3 , and therefore we are left with only two low-energy CP-violating phases (i.e., δ and σ) which affect the flavored CP-violating asymmetries ε iα . In our numerical calculations we simply input the best-fit values of θ 12 , θ 13 , θ 23 , δ, ∆m 2 21 and ∆m 2 31 (or ∆m 2 32 ) as given below Eq. (13). Then the observed value of η can be used to constrain the parameter space of σ and d by allowing M 1 to vary in some specific ranges; or to constrain the parameter space of M 1 and d by allowing σ to vary in (0, 2π]. We find that in this minimal type-I seesaw model our RGE-assisted resonant leptogenesis scenario is viable only for the normal neutrino mass ordering with m 1 = 0 and only in the (µ + τ )-flavored regime. The numerical results are briefly illustrated in Fig. 4.
An immediate comparison between Fig. 3 and Fig. 4, which are both associated with the (µ+τ )-flavored regime for resonant leptogenesis, tells us that the parameter space in the minimal seesaw case is slightly larger. This observation is attributed to the smaller cancellation among the contributions of three flavors, since the efficiency factor for the e flavor [i.e., κ(K e )] is much larger than those for µ and τ flavors [i.e., κ(K µ ) and κ(K τ )] in the minimal seesaw scenario. Note that if κ(K e ) = κ(K µ ) = κ(K τ ) held, η would vanish due to ε ie + ε iµ + ε iτ = 0. As shown in Fig. 4, σ is mainly located in two disconnected intervals [0, 3π/10] and [π, 13π/10]. But these two intervals are different from each other only by a shift (σ → σ + π) or a reflection (about σ = 13π/20); and each of them has a symmetry axis (σ = 3π/20 or σ = 23π/20). We see that M 1 6.3 × 10 5 GeV holds, and d is allowed to vary in a wide range between 10 −11 and 10 −7 . When the value of M 1 deceases, the lower and upper bounds of d are both reduced; meanwhile, the allowed range of σ becomes smaller. That is why when M 1 is smaller than 10 7 GeV and σ is switched off (i.e., δ is the only source of CP violation), it will be very difficult (and even impossible) to make our RGE-assisted resonant leptogenesis scenario viable.
One may certainly extend the above ideas and discussions from the SM to the MSSM, in which the magnitude of ∆ τ is expected to be enhanced by taking a large value of tan β. In this case it should be easier to obtain more appreciable CP-violating asymmetries ε iα , simply because they are proportional to ∆ τ . So a successful RGE-assisted resonant leptogenesis can similarly be achieved in the MSSM case. In this connection the main concern is how to avoid the gravitino-overproduction problem [59][60][61][62][63], and a simple way out might just be to require M 1 10 9 GeV and focus on thermal leptogenesis in the (µ + τ )-flavored regime.
Summary
Based on the type-I seesaw mechanism, we have reconstructed the Yukawa coupling matrix Y ν in terms of the light Majorana neutrino masses m i , the heavy Majorana neutrino masses M i and the PMNS matrix U by assuming the arbitrary orthogonal matrix O in the CI parametrization of Y ν to be the identity matrix. To bridge the gap between m i and U at the seesaw scale Λ SS and their counterparts at the Fermi scale Λ EW , we have taken into account the RGE-induced correction to the light Majorana neutrino mass matrix. This RGE-modified seesaw formula allows us to establish a direct link between low-energy CP violation and flavored resonant leptogenesis with M 1 M 2 M 3 , so as to successfully interpret the observed baryon-antibaryon asymmetry of the Universe. We have shown that our idea does work in either the τ -flavored regime with equilibrium temperature T M 1 ∈ (10 9 , 10 12 ] GeV or the (µ + τ )-flavored regime with T M 1 ∈ (10 5 , 10 9 ] GeV, provided the mass spectrum of three light Majorana neutrinos is normal rather than inverted. We have also shown that the same idea is viable for a minimal type-I seesaw model with two nearly degenerate heavy Majorana neutrinos. | 5,416.6 | 2020-03-13T00:00:00.000 | [
"Physics"
] |
Quasiprobability distribution functions from fractional Fourier transforms
We show, in a formal way, how a class of complex quasiprobability distribution functions may be introduced by using the fractional Fourier transform. This leads to the Fresnel transform of a characteristic function instead of the usual Fourier transform. We end the manuscript by showing a way in which the distribution we are introducing may be reconstructed by using atom-field interactions.
Introduction
It has been already shown that quasiprobability distribution functions may be reconstructed by the measurement of atomic properties in ion-laser interactions [1] and two-level atoms interacting with quantized fields [2,3].Such measurements of the wave function are realized usually by measuring atomic observables, namely, the atomic inversion and polarization.
Ideal interactions, i.e., without taking into account an environment, have shown to lead to the reconstruction of the Wigner function [3] by taking advantage of its expression in terms of the parity operator.However, the interaction of a system with its environment [11] leads to s-prametrized quasiprobability distribution functions [12][13][14] where D(α) = exp(αâ † − α * â), with â and â † the annihilation and creation operators of the harmonic oscillator, respectively, is the Glauber displacement operator [15].The state D(α)|k = |α, k is a so-called displaced number state [16].Note that, in order to reconstruct a given quasiprobability function it is needed to do displace the system by an amplitude α and then measure the diagonal elements of the displaced density matrix.The parameter s defines different orderings and therefore different quasiprobability distribution functions (QDF).The Glauber-Sudarshan P -function [15,17] is given for s = 1, and is used to obtain averages of functions of normal ordered creation and annihilation operators; s = −1 gives the Husimi Q-function, used to obtain averages of functions of anti-normal ordered creation and annihilation operators, while s = 0 is used for the symmetric ordering and gives the Wigner function.
Equation (1) may be rewritten as that, by using the commutation properties under the symbol of trace, and the system in a pure state |ψ , may be casted into Recent studies have openned the possibility of measuring, instead of observables, non-Hermitian operators [18].It would be plausible then that such measurements could be related to complex quasiprobability distributions like the McCoy-Kirkwood-Rihaczek-Dirac distribution functions [5,6,8,19].
In this contribution we would like to introduce other kind of complex quasiprobabilities that, although they could be introduced simply by taking s as a complex number, we introduce them in a formal way by considering the fractional Fourier transform (FrFT) [20][21][22] of a signal.Then, by writing the Dirac-delta function in terms of its FrFT, we are able to write a general expression for complex quasiprobability distributions in terms of the Fresnel transform.Indeed, the representation of these complex quasiprobability distributions in terms of a Fresnel transform implies that they are solutions of a paraxial wave equation [3].Finally, by using an effective Hamiltonian for the atom-field interaction, we show how this quasiprobability distribution function may be reconstructed.
Fractional Fourier Transform
Up to a phase, the fractional Fourier Transform of a signal ψ(x) can be written by the following expression [20][21][22] F that may be expressed in terms of an integral transform as where Then, if we consider equation ( 6) as a propagator, Dirac's delta distribution function takes the form Now, if we apply the fractional Fourier transform to the Dirac delta function we obtain Then, applying the inverse fractional Fourier transform to equation ( 8) we obtain an alternative representation of the Dirac delta distribution function From the above equation it may be seen that there is a phase multiplying the usual integral representation of the Dirac delta function, that although could be omitted by using properties of the delta function, we keep in order to obtain a quasiprobability distribution function as a fractional Fourier (Fresnel) transform of the characteristic function.
Probability distribution in the phase space
We define J(q, p) a probability distribution in the phase space as then, using equation ( 9), the distribution J(q, p) may be rewritten as and because equation ( 11) takes the form that by using the equivalence may be casted into the expression The above quasiprobability distribution function is defined for a range of parameters α and β, however, for the sake of simplicity, we will consider the case cot α = − cot β = π.We may relate the quasiprobability distribution function J(q, p) to the Wigner function, by noting that, for cot α = − cot β = π, equation ( 15) has the form According to trace representation of Wigner function [14] we write the distribution J(q, p) as the Fresnel transform of the Wigner function It is easy to show that the quasiprobability distribution ( 18) can be normalized Therefore, for normalization reasons, the quasiprobability distribution is finally given in the form that, by applying the change of variables β = u/ √ 2 + iv/ √ 2 takes the form with α = q/ √ 2 + ip/ √ 2. From the above expression it is direct to show that the Wigner function and the function J(α) may be easily related by the differential relation The above quasiprobability function may be written as a trace by noting that that leads to the trace representation of J(q, p) Last equation allows us to show that J(q, p) is correctly normalized, for this we do the double integration where we have defined and with By replacing equation (29) into equation ( 28) we obtain  = e −iθn , (30) that shows that equation ( 26) is correctly normalized 4. Kirkwood distribution and J(q, p) distribution Being the QDF J(q, p) and Kirkwood distributions complex functions we show now some differences between them.The Kirkwood distribution is defined as [8,19,23,24] du dv e iup−ivq e i uv 2 T r ρe iv q−iup , or an alternative way to write it as an expectation value [25] is
Number state
The Kirkwood K(q, p) and J(q, p) distributions for number state |n , are represented by the following equations and where, H n (x) and L n (x) are Hermite and Laguerre polynomials, respectively.
Superposition of two coherent states
Now, we consider a superposition of two coherent states as: where , such that the Kirkwood K(q, p) and the J(q, p) distributions for the superposition of two coherent states, |ψ ± , is given by and respectively.We plot both distribution in Figures 1 and 2. In both figures a more and d) we see the distribution J(q, p), for the same number state, again, the real and imaginary parts, respectively.
uniform behaviour may be seen in the QDF J ± (q, p) than in the Kirkwood function.In fact, the real and imaginary parts of the distribution we have introduced here, look like Wigner function for number states (Fig. 1) and Scrhödinger cat states (Fig. 2).
Reconstruction of distribution J(α)
It is not difficult to show that the real part of QDF J(α) may be measured.This can be achieved by measuring the atomic polarization in the dispersive interaction between an atom and a quantized field [3], whose Hamiltonian reads Therefore, by measuring the polarizations σx and σy we are able to measure the QDF J(α).
Conclusions
We have introduced a set of parametrized (in terms of α and β) quasiprobability distribution functions, equation (15), by using the fractional Fourier transform.This has lead us to generalize QDF to Fresnel transforms of the characteristic function instead of their usual Fourier transforms.We have also shown how such QDF may be recosntructed in the dispersive atom-field interaction.We have also given a (differential) relation that allows the calculation of the newly introduced QDF from the Wigner function.
Figure 1 .
Figure 1.In figures a) and c) we can see the phase space distribution of the real and imaginary parts of the Kirkwood function for a number state |n = 3 .In figures b)and d) we see the distribution J(q, p), for the same number state, again, the real and imaginary parts, respectively.
with σz = |e e|−|g g|, the Pauli matrix corresponding to the atomic inversion operator, where |g and |e represent the ground and excited states of the two-level atom.The
Figure 2 . 1 √ 2 (π 2 − 4 2χ,
Figure 2. In figures a) andc) we can see the phase space distribution of the real and imaginary parts of the Kirkwood function for two superposition of coherent states |ψ + wiht q 1 = −q 2 = 4 and p 1 = p 2 = 0.In figures b) and d) we see the distribution J(q, p), again, the real and imaginary parts, respectively. | 2,009 | 2019-02-16T00:00:00.000 | [
"Physics"
] |
Role model and prototype matching : Upper-secondary school students ’ meetings with tertiary STEM students
12(1), 2016 Abstract Previous research has found that young people’s prototypes of science students and scientists affect their inclination to choose tertiary STEM programs (Science, Technology, Engineering and Mathematics). Consequently, many recruitment initiatives include role models to challenge these prototypes. The present study followed 15 STEM-oriented upper-secondary school students from university-distant backgrounds during and after their participation in an 18-months long university-based recruitment and outreach project involving tertiary STEM students as role models. The analysis focusses on how the students’ meetings with the role models affected their thoughts concerning STEM students and attending university. The regular self-to-prototype matching process was shown in real-life role-models meetings to be extended to a more complex three-way matching process between students’ self-perceptions, prototype images and situation-specific conceptions of role models. Furthermore, the study underlined the positive effect of prolonged role-model contact, the importance of using several role models and that traditional school subjects catered more resistant prototype images than unfamiliar ones did. Eva Lykkegaard conducted this work as part of her PhD study at Centre for Science Education at the University of Aarhus in Denmark. Her dissertation was a longitudinal study of STEM oriented upper secondary school students’ ongoing educational choice processes. Using mixed methods her research focused on critical moments in students’ choice processes, that changes their educational trajectories and in addition the students’ perceived match between scientists and science students and their own identities.
Introduction
The past couple of decades, concerns about the participation in science, technology, engineering and mathematics (STEM) have been endemic in the Western World (European Commission, 2004) and have sparked a substantial amount of research and development activities.This has led to an increased focus on the role that potential identities related to STEM disciplines play in students' choice of study and therefore also on how young people conceive what a science student is like (Archer et al.,[74] 12(1), 2016 Eva Lykkegaard and Lars Ulriksen 2010; Schreiner & Sjøberg, 2007).Students' choice processes and their conceptions of science students build on their prior experiences with the field of science as well as what they expect a possible future study and career in STEM to be like (Holmegaard, Madsen, & Ulriksen, 2014).
The students' experiences with science teaching affect their attitudes to science (Osborne, Simon, & Collins, 2003;Taconis & Kessels, 2009), and based on an analysis of the physics curriculum in the Danish upper-secondary school, Krogh (2006) points at the void between the values of the young people and the values expressed in the content of the curriculum as well as in the teaching and learning activities students are engaged in.
Students who do not have parents or other close relatives or acquaintances studying or working within STEM have to build their expectations about studying STEM and following a STEM career on culturally available ideas about what science and scientists are like.Previous studies have found that science tend to be conceived as gendered (being for men rather than for women (cf.Archer et al. (2012) for further references)), classed and racialized (being for whites and the middleclass (Archer, Dewitt, & Osborne, 2015;Carlone & Johnson, 2007)).When students consider which study to pursue they also consider what identities are available in a particular disciplinary culture and context, how these imaginable identities fit with their conceptions of who they are and who they wish to become, and whether pursuing this line of study proposes an attractive and realistic identity development (Holmegaard, 2015).
This identity focus as well as previous studies addressing how scientists are generally conceived (Andersen, Krogh, & Lykkegaard, 2014) has made targeted recruitment projects using STEM role models popular (Andree & Hansson, 2013;Henriksen, Jensen, & Sjaastad, 2014).The tacit assumption behind these recruitment projects is that interaction with role models will make students experience attractive STEM-related identities and thus make them more inclined to choosing a STEM program.For students from university-distant backgrounds role models offer an opportunity to form images about what STEM studies and careers could be.It is, however, limited to which extent it has been investigated how this 'meet, match and matriculate' sequence actually influences students' aspirations.
Using role models in relation to students about to choose a tertiary study program builds on the assumption that the impression the student gets of the model will affect his or her choice -and preferably make them choose STEM.The first impression of another person is formed within a split second (Willis & Todorov, 2006) and is affected by the situation in which the specific interaction occurs, by prior information about the person, as well as the person's identity display (Brewer, 1988).
A student in the process of forming an opinion about a role model will process all available identity indications and match them against his or her pre-established, situation-specific prototypes: "A prototype describes just one person, who is considered as a particularly typical representative of the group in question (for example 'the typical teacher')" (Hannover & Kessels, 2004, p. 53).The process of linking another person (e.g. a role model) to a prototype continues until a match appears satisfactory to the individual.Once an individual has linked another person to a particular prototype, this linkage is quite persistent.If new information about the other person does not fit the matched prototype, the individual will not search for a new general prototype but rather create an appropriate subdivision of the linked prototype that matches the new information (Brewer, 1988).
In order to evaluate the appropriateness of a specific study program, it has been proposed that students engage in self-to-prototype matching, which is "the strategy of making situation choices on the basis of a rule of maximizing the similarity between the self and person-in-situation prototypes" (Setterlund & Niedenthal, 1993, p. 770).In the educational choice process, students thus compare their conception of themselves with their prototypical images of students studying different [75] 12( 1), 2016 Role model and prototype matching study programmes.Taconis and Kessels found that "the actual choice of a specific academic profile could be predicted by students' perceived similarity between their self and the respective prototypes" (Taconis & Kessels, 2009, p. 1129).
The influence of self-to-prototype match on students' educational choice processes have been studied in relation to the students' prototypical images of students who would have science and math as favourite subjects (Hannover & Kessels, 2004), science teachers (Kessels & Taconis, 2012), science peers (Taconis & Kessels, 2009) and scientists (Andersen et al., 2014).In this article the methodology is extended to explore the following research questions: 1. How do students match themselves to real life STEM role models?And 2. How is this matching process affected by the setup of the role-model meeting and by the students' individual preferences concerning role models and experiences with the disciplines?
The role model project
The role model project was a STEM recruitment and outreach project at the University of Aarhus in Denmark lasting 18 months and including five visits at the university.The project ran in three iterations.In this paper, results will be presented from the 79 students participating in the first round.
The purpose of the project was to increase the STEM recruitment and to qualify upper-secondary school students' tertiary educational choices, particularly for students from so-called 'university-distant backgrounds', that is, students whose socio-economic, cultural or geographic background could make them less inclined to pursue a tertiary education.Students were selected by teachers from the students' schools based on written applications, the students' university-distant backgrounds and their skills and interests in STEM.All students took advanced mathematics, which is mandatory for admission to university STEM education in Denmark.
Due to their family background, the students had no direct knowledge about what it means to attend university.A specific aim of the project was therefore to make the students able to picture themselves as university students at all.Through facilitated role model meetings the project made knowledge about what being a university student means more accessible to the participants.The students met two different types of STEM role models -mentors and match-making models.
Assigned mentors
The 79 upper-secondary school students were randomly divided into six groups, and each group was assigned two mentors.The students met with their mentors at the five all-day meetings at the university and chatted with them in-between using a specially-designed Facebook group.
The 12 mentors were second or third year STEM students at the University of Aarhus.Like the uppersecondary school students, they had university-distant backgrounds in order to enhance the possibility that the students would perceive an identity match with their mentors.
The mentors were paired in couples maximizing their joint coverage of the different STEM areas, for instance pairing up a mentor from physics with one from biology.Hence, the upper-secondary school students' specific STEM interests were not necessarily represented by their two mentors, but the mentors offered insight into life as a university student for somebody with a family background similar to their own.
The mentors were trained before and during the role model project.It was stressed that the mentors' job was not to recruit students for their specific STEM programs, but to offer as truthful a representation of university life as possible.
Self-chosen match-making models
During the third university visit, the upper-secondary school students met with three match-making models for 10 minutes each.The purpose of the match-making was to inform the students' educational choice processes by providing information about specific STEM programs their mentors might not be able to offer.
The match-making session consisted of two phases: 1) An informal exploration phase where the upper-secondary school students circulated among the 20 match-making models and their posters entitled 'Me and My Study Program'.2) Three self-chosen meetings where match-making models gave a five-minute presentation followed by five minutes of questions from the upper-secondary school students.
In the first phase, the upper-secondary school students could select the role models they perceived to match based on study program, poster and/or the model's identity display.In the second phase, their interests and first impressions were supplemented through interaction with the match-making models.
Twelve of the twenty match-making models were also mentors, but eight additional match-making students were recruited in order to have all different STEM programs at the University of Aarhus represented during the match-making session.Some upper-secondary school students visited their own mentor(s) in the match-making session if they presented a program of the student's interest, but frequently it would be the first meeting between the upper-secondary school students and the models and there would be no follow-up meeting later.
Prior to the match-making, all 20 match-making models were instructed to prepare a handmade poster and a five-minute talk.It was stressed that the posters should include information about the match-making models as persons and truthful presentations of the life as a student at their particular STEM programs.They were told not to act as student counsellors or to prepare a show for the students.Instead, they should give personal accounts of their background, living situations, future dream job, but also of their experiences with everyday study life, time schedules, teaching methods, books and exams at their particular STEM programs.
Sample
Sixteen focus students were selected from the 79 project participants on the basis of their applications for the role model project.The selection sought to capture as diverse a range of educational considerations as possible (Flyvbjerg, 2006).All sixteen students agreed to participate in the study, but one student did not respond to any requests after the first two interviews.Findings regarding the remaining 15 focus students are exemplified in-depth through three case students.
Methods
The 15 focus students were interviewed individually eight times during their final 18 months in upper-secondary school and additionally one and two years after their graduation.The interviews were semi-structured, and each interview followed the thread and theme from the previous interviews: emerging educational considerations, attitudes towards university studies in general and attitudes towards STEM programs and STEM students in particular.
The first interview, where the students' individual backgrounds and initial educational aspirations were probed, the fourth interview, conducted within two weeks of the match-making session and examining students' conceptions of the individual role models, and the eighth interview, where students
Eva Lykkegaard and Lars Ulriksen
[77] 12(1), 2016 validated preliminary analyses of their educational choice process and conceptions of role models, all took place at the students' schools and lasted for about an hour each.The remaining five interviews lasted about 15 minutes each and were conducted by telephone while the students were at home.Either case provided students with a safe and familiar setting.
The first seven interviews were fully transcribed and condensed into a two-page vignette for each student presenting their background and their experiences with the project and the different role models, highlighting the influence on their educational choice processes.At the eighth interview, the 15 students read their own vignette and had the opportunity to correct misinterpretations.During this member-check (Guba & Lincoln, 2005), only a few students made minor adjustments to the educational choice narratives.
This article mainly draws on the fourth interview, although the analysis also includes the vignettes as well as the status upon completing upper-secondary school and the interviews after one and two years respectively.
The fourth interview, made shortly after the match-making sessions, was conducted in three sequences.
Students' conception of individual role models:
Students were presented with pictures of their two mentors and of the match-making models they had visited and were asked to openly characterise each of them.They were then asked to compare the different role models by labelling respectively the most and least of different characteristics such as 'collaborative' and 'healthy' and students were asked to argue which of the role models they 'could learn the most and least from' and 'would rather and rather not be in a study group with'.Items were selected based on Andersen et al. (2014).Students were urged to reply to all labels to express their impression of the role models relative to each other, even if it meant that a role model could be labelled, for instance, least collaborative, while being considered collaborative by the respondent.The students were further asked to explain and justify their decisions and labelling.
Students' perceived match between individual role models and their prototype images: Stu-
dents were asked to argue whether or not each of the role models fitted their image of someone in that particular study program.
Students' perceived match between individual role models (and the model's study programs)
and their self-conceptions: Students were asked to argue 'which model they were most similar to' and 'whose study program they would rather and rather not pursue'.
Transcriptions of the interviews with each student were reviewed and sequences where students mentioned the mentors and match-making models were singled out.These sequences were then analysed with respect to how students experienced the role models, how they related their experiences with the role models, with the STEM disciplines, and with their self-conceptions, and finally whether this process of relating and balancing different elements became discernible in the students' reflections concerning the STEM programs and their educational choice processes.This was done for each interview and finally the results from the analysis of each individual student were compared and combined, and patterns across the individual responses were revealed.
The empirical data consists to a large extent of the students' own statements about how they experienced the mentors, the match-making models and related activities.However, these statements are not taken at face value.Analysis of the interviews largely followed the steps suggested by Kvale (1996).However, the second and third contexts of interpretation described by Kvale; the common sense understanding and the theoretical understanding (Kvale, 1996, p. 214) were merged into one.This means that the interpretation was based on a combination of a general textual analysis and an analysis guided more specifically by the theoretical framework of the present study.
Role model and prototype matching
[78] 12(1), 2016 Gender was not a focus of the project and gender issues were not prompted in interviews.As it were, there was nothing in the interviews that suggested this to be an issue of particular importance to the participating students.This does not mean that gender is without importance or relevance in students' choice of program, but we have decided not to address this issue in the present analysis.
Although the design has similarities with the studies by Kessels, Taconis and Hannover (Hannover & Kessels, 2004;Kessels & Taconis, 2012;Taconis & Kessels, 2009), there are some important differences.Firstly, we had the students replying to the items in a verbal interview as opposed to the written format in previous studies.This offers the opportunity to record the considerations of the students in the matching.Secondly, we asked students to relate prototypes to actual role models and relate themselves to the role models as opposed to comparing themselves to imagined prototypes only.
Thirdly, we conducted qualitative analyses of the interviews rather than statistical analyses.Although this prevents an analysis of correlations, it allows for a nuanced discussion of the students' relation to STEM and stereotypes.
Results and discussions
This section is divided into two sections.The first subsection 'The matching process' presents and discusses results concerning research questions 1 and the subsequent subsection 'Elements affecting the students' matching processes' presents and discusses results in relation to research question 2.
The matching process
In order to answer research question one, we will give in-depth presentations of three case students and their matches with different STEM role models.Based on these cases and the patterns in the three students' matching processes, we propose a general model for the matching process.The model is derived from an analysis of interviews with all 15 focus students.
Mary
Mary is the kind of student which STEM recruitment projects using role models wish to target.She has a profound interest in mathematics, a high work ethic and a natural skill for STEM, but she cannot really see herself pursuing a math career.The following extracts from Mary's member-checked vignette illustrate her educational choice process prior to the role model project.
Mary's parents fled during the Vietnam War.Before they fled, they both studied mathematics.
In Mary's meetings with the two math role models (Michelle and Reba) were not effective in making her reconsider studying mathematics.Instead, she decided to study to become a dentist right upon graduation.She had not mentioned this idea earlier, but explained that it was a trajectory her family and friends were very happy about.
Ewan
Ewan's educational choice process is an example of how the use of role models under the right circumstances can recruit students into STEM programs.
The following is a short extract from Ewan's member-checked vignette concerning his educational trajectory prior to the role model project.
There is no pressure on Ewan from home about him completing any specific study, although his parents have made his school a high priority ever since he was young.However, Ewan is a very committed student, both when it comes to classes and social events.In general, Ewan is not interested in "old theories" but rather in more recent inventions, and hence he views theories as a "necessary evil" in many study programs.He does not care for "studying just for the sake of studying", especially not for several years.He would like a job in which he can both cooperate with others and have some management function.He imagines that at some point he will be made project manager and enjoy "a good salary and good working conditions with a relatively big amount of leisure time".At the same time he would like to "get out and experience the world and make an impression and make a difference to others."Ewan wanted either to go to university to study science of some sort, or go to a school of engineering.The School of Engineering appealed to him the most since they mainly worked with practical projects.He had a feeling that the people in the science departments were less focused towards cooperation than the engineers (and Ewan) were.
Role model and prototype matching
[80] 12(1), 2016 Ewan liked the role model project "where you sort of got a really nice insight into the university."He thought it was great to meet his mentors (Nadine from physics and Owen from molecular biology), and described them as "a good guide", but he could not easily see himself in any of them.He thought they clearly signalled that they attended the university and were (too) invested in their studies.However, Ewan's attitude towards tertiary STEM students changed a bit as a result of his interaction with his mentors: "Nadine, who studies physics, I don't know, she was very nice too...You can tell by looking at someone who studies physics that if they study anything, it will be physics.And maybe she is a bit the odd one out there, but then, there are a lot of people who study physics who do not look like complete nerds." At the match-making, Ewan heard about the geology program from Martin.In Martin, Ewan found one he felt very similar to: "I don't really think he is extreme in any aspect, and I'm not extreme", and Ewan also found geology to be a very attractive study program involving "a bit more traveling, and there was a bit more practical work instead of just reading."He did not find it to be as dreary a subject as the other university programs.The brief meeting with Martin at the match-making motivated Ewan to seek more information about the geology program, and after the match-making, he chose to try out a one-week university internship at the geology study.After the internship, Ewan decided that this was what he wanted to study, and after a gap year, Ewan entered the geology study program.
Tabitha
Tabitha's educational choice process illustrates that different role models may play different parts in the ongoing process of making an educational choice.Extracts from her vignette represents her educational choice process prior to the role model project.
Tabitha's mum died when she was little.Her father has not quite gotten the education he had wanted, and consequently it is very important to him that Tabitha completes a long tertiary education and gets as good opportunities as possible.Tabitha herself also enjoys school and especially likes being one of the best students in school.Furthermore, Tabitha is very concerned with "creating balance in her life" when it comes to her social life, her classes, and her everyday life, in which she does not care to deal with "thinking topics" all the time.Tabitha is highly interested in biology.This interest may stem from her grandmother, who from when Tabitha was a young girl used to take her to feed ducks, explain things to her, and take her to different museums.As a child she wanted to be an animal attendant, but now she plans to study "rubber boots biology" -she likes the "tangible" and more "natural" biology.Tabitha's alternatives to the biology program are psychology or English.She finds both fields to be extremely interesting, but she cannot picture herself working in them.
Like Mary, Tabitha's mentors were Molly from biology and Michelle from mathematics.Tabitha was pleased with her mentors and especially Molly, with whom she could identify to a high degree: "She wants a lot of experiences and wants to travel, in a way that you get to thinking that she has a lot of the qualifications I also feel I have, and on the other hand she is just really geeky […] if I am interested in something, I can also become a bit nerdy about it." In this way, Molly has played an important part in the formation of Tabitha's new image of the university student as knowledge seeking and social as well.
Eva Lykkegaard and Lars Ulriksen
[81] 12(1), 2016 Tabitha had an old interest in geology, and she had visited the geology study when in lower secondary, "… and it was one of the worst things I have ever experienced; it was simply a struggle to stay awake."During the match-making, Tabitha decided to visit Martin from geology, but this meeting did not leave her with a good impression of the geology program either.Tabitha perceived Martin as a lively and outgoing guy, but she got the impression that, At the match-making, Tabitha also visited Mike who was studying chemistry.She was very surprised at how he was: "You sit there thinking that you were that cool guy in upper-secondary school, weren't you?That guy who was smart, but cool at the same time […] I was really expecting to meet a geek." She was quite impressed by Mike, and after the meeting she was convinced that it would be "cooler" to study chemistry than biology.At this point, chemistry was what she wanted to study the most.
After the match-making, Tabitha therefore chose a one-week university internship at chemistry to investigate her new chemistry interest.
"During that, I found out that it wasn't that I couldn't keep up, and it wasn't that the topics weren't interesting enough, I just don't think I can entertain myself with this [chemistry] for five years […] biology comes more naturally to me than chemistry."
Tabitha's perceived attractiveness of the chemistry study program was thus not reinforced after the match-making meeting with Mike.
Tabitha's reiterated meetings with Molly throughout the role-model project confirmed her belief that she should study biology.Biology would allow her to become immersed in her studies and still remain socially active.After she graduated upper-secondary school, she started studying biology.
From a two-way to a three-way matching process
The three case students' matching processes suggest that the 'self to real-life role model' matching process of the students in this study is more complex than the self-to-prototype matching process described in previous research.Instead of making a two-way comparison between role models and
Role model and prototype matching
selves, the three case students engaged in a three-way matching process, as illustrated in Figure 1.In this process they evaluated: 1) The match between role model and self.
2) The match between self and prototype, and finally, 3) The match between role model and prototype.
Each of the three matching stages in Figure 1 involves academic, social and professional comparisons.Students' self-conception and conception of the role models involves information about subject preferences and abilities; work ethics; social needs, skills and statuses; cultural background; current and future family lives as well as spare time activities and dreams for the future.Students' conception of STEM student prototypes, on the other hand, primarily involves information about study methods; (lack of) spare time and cooperation; study work load; and future job possibilities and working conditions.
Exposing the students to real-life role models also changes the setup of the students' way of engaging with prototypes in the matching process.In self-to-prototype matching studies of Kessels and Taconi's, the students were asked to relate a person to the stereotype of a particular discipline (e.g., physics).As such, the matching object is methodically controlled.This is not the case when students meet role models in real life.In a real-life context we cannot control to which prototype the students match the role models.The upper-secondary school students in this study could match the role models with prototypes of university students in general, prototypes of STEM students in general or with prototypes of specific subgroups of STEM students, such as the prototypical math student, biology student etc.However, they could also match the role models according to a host of other situationspecific prototypes based on the role models' identity displays (ethnicity, job aspirations, socioeconomics, gender etc.).No matter if the role models were selected to represent a particular discipline, the real-life role models presented themselves as individuals and it could not be controlled how the upper-secondary school students related to them and to what extent the meetings made them revise their prototype images.
Revisiting the three case students focusing on prototype revisions
The stories of Mary, Ewan and Tabitha offer insights into prototype images that are revisable and prototype images that are not.
Mary evaluated Michelle and Reba's identity displays according to a math student prototype, see Figure 2A and B.
Mary had a rather negative math student prototype that did not match her cooler self-image.She assumed the prototypical math student to be geeky and "much too clever".She perceived Michelle to match and reinforce this prototype image.
Conversely, Mary did not think Reba matched the math prototype, but this did not make her revise her prototypical image.Therefore, Mary's perceived personal match with Reba was not enough to make her consider a mathematics (and economics) study.12( 1), 2016 Ewan's overall conception of science students was that they were not that focused towards cooperation.His prototype of the subdivision of physics students conceived them as quite geeky, but his meeting with his mentor Nadine made him revise this image a bit, "there are a lot of people who study physics who do not look like complete nerds." Ewan's meeting with Martin made him create another subdivision of the STEM student prototypethat of a geology student.Prior to the meeting with Martin, Ewan did not have any information about geology and he had not formed a prototype of a geology student.Therefore, not surprisingly, Ewan perceived Martin to match the geology student prototype (since the prototype, so to speak, was based on Martin!), see Figure 3A.Even though Ewan did not think he matched his general prototypical image of a science student, he did believe he matched the subdivision of the prototypical geology student.
Tabitha's perceived matches with Martin, Mike and Molly are presented in Figures 3B-D.Tabitha had a rather sloppy prototype image of a geology student that did not match her more serious self-image.She perceived Martin to match this prototype image and therefore she did not need to revise that prototype.
Mike neither matched Tabitha's prototype of a chemistry student nor her self-conception; she perceived him as cooler than both.At the same time, she perceived chemistry as a more difficult and (therefore) prestigious discipline than biology, which had been her preferred discipline.Meeting Mike, who challenged the chemistry student prototype, opened chemistry as an option and made her revise her prototype image.However, her experience during the internship tarnished the image of chemistry as a cool and attractive discipline and even the presence of students as cool as Mike was not sufficient to make her apply.Hence, the lack of match which Tabitha experienced between the cool Mike and her chemistry prototype initially made her revise her ideas about chemistry and allowed her to picture herself studying chemistry.However, this change was not strong enough to subdue the sense of chemistry as 'unentertaining' she got from the internship, and she eventually decided that chemistry was not for her.Role model and prototype matching [84] 12(1), 2016 Tabitha's meeting with Molly made her revise her prototypical image of a tertiary (biology) student, allowing her to combine being hardworking, geeky and socially competent in one person.After this revision Tabitha had no problems seeing herself pursuing tertiary biology.
Summing up, the students engaged in a three-way matching process where they related role models to themselves, the role models to their prototypes and the prototypes to themselves.Apparently, the personal match between student and role model was frequently the one that the students immediately related to, rather than to the role models as representatives of a discipline.The match between students and prototypes only became central if the student had no clear impression of the model.
The match between role model and prototype in some cases challenged the students' prototypes.We cannot conclude in what circumstances the prototype was maintained by revising existing prototype or by creating a sub-division (like Ewan did after meeting Nadine) and in what circumstances it was challenged (as Tabitha was after meeting Mike).What we can say is that 'self-to-real-life role model' matching is complex, context-specific and involves more than the two components in the self-toprototype matching.
Elements affecting the students' matching processes
The case-based exposition of students' meetings with different role models highlighted three elements that influenced the matching processes.These results will be unfolded one by one in the following and subsequently summarised in order to answer the second research question.
Different set-ups for role model meetings
The two types of role models (mentors and match-making models) influenced the upper-secondary school students differently: the mentors primarily made the students reconstruct their general prototypical images of university students, while the match-making models rather made the students reflect on program-specific prototypes.A new focus student, Peter, is introduced to illustrate this dual role model effect.Peter used different prototypes when matching with the match-making student John (from physics, see Figure 4A) and his mentor (Sophia, from biology, see Figure 4B).
Peter perceived a high degree of match with both John and Sophia.He had an interest in physics and matched John according to a physics student prototype.Peter felt he could identify with John, for instance in their shared commitment to physics.He also appreciated John's way of explaining and defining things: "I like it when people use relevant terminology and firmly support the notion that when you discuss a subject, you use the terminology related to that subject." Peter matched Sophia according to a 'student coming from a university-distant background' prototype: 12(1), 2016 "Sophia is a person who outside of school is quite similar to me.She is from the countryside, and in many ways I can identify with her.The way she views things.[…] She has been a good counsellor when it comes to understanding the level at university […] She has not influenced my educational choice directly since she is a biologist and therefore does not know much about physics.She has, however, helped make me more certain that the university was a place for me to be." John fit well into Peter's physics student prototype, while Sophia made Peter more confident that students with the university-distant background they shared could indeed enter university.The meeting with Sophia thus made Peter revise his university student prototype.
Students' perceived matches with role models more often affected their conception of the general university student than their conception of more program-specific prototypes.Thus, several students said that getting to know the role models had convinced them that entering university was an option for young people with university-distant background.Fewer students reported that they had changed their orientation towards specific STEM programs.However, the match-making session facilitated students to engage in the three-way matching process and estimate how well the match-making models matched subject-specific prototypes and thus for many students, like Ewan, served as make-orbreak points regarding considerations about the models' specific study programs.
Students' different role model preferences
Students obviously had different self-conceptions and also different prototype images.Moreover, they experienced and interpreted the individual role models they met differently.One example is Mary and Tabitha's different descriptions of their mentors Molly and Michelle.
Mary described Molly as being "friendly and pedagogical" and Michelle as "bone-dry" whereas Tabitha perceived Michelle as "one of the pedagogical ones" and Molly "a bit more like a geek".Another disagreement in the girls' description of Michelle was that Mary picked her to be the least collaborative of the role models she had met: "I just think that mathematicians are a lot like that... because I myself like working like that, because you sit by yourself and think and so on, and you can work at your own pace and stuff like that, because it is difficult to find someone who is at the exact same level as yourself.Either it is someone better or someone worse." Tabitha, however, considered Michelle as the most collaborative of the role models: "It's probably the 'teacher thing' again that influences it, she is good at explaining what she does herself, and at the same time good at understanding others when you have to talk to her." Here it becomes evident that Mary and Tabitha match Michelle according to different prototypes.
Mary matches Michelle to a math student prototype.Tabitha, on the other hand, knows that Michelle has an aspiration of becoming an upper-secondary school teacher and matches her to this profession prototype, but also argues based on her personal experience with Michelle.The different conceptions are based on differences in the students' conception of the individual as well as in which prototype they apply.
Another obvious diversity in role model conception is Ewan's and Tabitha's characterisations of the geology role model Martin.Tabitha disapproved of his laid back style: "He seemed like it wasn't that his studies were horribly difficult, and that actually suited him rather well."Ewan viewed that as an attractive attribute.He found Martin to be the role model he could learn the most from because "he seemed nice".Conversely, Tabitha believed he was the one she could learn the least from: Role model and prototype matching [86] 12(1), 2016 "because, again, I think the topic is one of the easier ones, but maybe it is just my own prejudices that go into overdrive here, and then in real life it's just a hundred times harder than biology." Ewan could easily see himself in a study group with Martin because he perceived him as "very relaxed and easy, and someone who knows what he's doing," whereas Tabitha selected Martin as the role model she preferred not to be in a study group with: "Again, it is because he seems less serious." Ewan's argumentation relies on his immediate impression of Martin (remember, he did not prior to the meeting have a prototypical image of a geology student), whereas Tabitha's argumentations are based on her prototypical and negative image of a geology student and of geology as a discipline.
Interestingly, Tabitha herself comments that she bases her answers on a prototypical image "my prejudices".
Students' different discipline familiarity
When students were asked to label role models the most and least collaborative etc., they generally answered based on their personal impression of the role model.The students only turned to prototypical labelling when they had nothing else to base their answers on (Tabitha, for instance, labelled Michelle the least healthy because "The canteen at mathematics has huge pieces of cake") or when they had particular strong prototype images.
It turned out that students' strong prototype images (e.g.Mary's prototypical math student image and Tabitha's prototypical geology student image) were particularly related to school subjects.The students did not have robust prototypical images of subjects they did not know from primary or secondary school.As Ewan said: "I don't really know if I have any specific ideas about who studies molecular biology." This means that role models representing study programs which the students have not previously heard about, will usually create new prototypes (like Ewan creates a geology prototype), whereas role models representing traditional study programs will be judged according to existing prototypes and will probably only be able to slightly revise the prototype (e.g.Mary's robust math prototype).
Summing up:
• The set-up of the role model meeting affects the matching process: Mentors and match-making models affected students differently.The mentors influenced the students' conceptions of whether studying at university was a viable opportunity considering their social backgrounds.The match-making models mainly affected the students' views of the study programs.Since the majority of the match-making models were indeed the mentors, this elucidates how the role models presented themselves (or were conceived) differently in the two different set-ups.The difference is not surprising, as the mentors and the match-making models were instructed differently in the two set-ups, and moreover, only match-making models shared the same specific subject interests as the upper-secondary school students.However, it emphasizes how role model set-ups affect student outcomes.• The students' individual role model preferences affect the matching process: Students have different self-conceptions, different prototype images and thus experienced and interpreted role models differently.• The students' familiarity with specific study programs affects the matching process: The interrelation and hierarchies of prototypical discipline images are complex.Role models from unfamiliar disciplines are, for good and bad, much more influential on students' prototype images because they may define the prototype, while role models from traditional disciplines will be compared to established prototypes.
Conclusions
This paper studies the interaction of upper-secondary students with role models in a recruitment and information project at tertiary level.We found that the students engaged in a three-way matching process where they matched (1) their impression of the role model with their self-image, (2) their impression of the role model with their discipline-related prototype, and (3) the prototype with their self-image.We also found that the students frequently related to the role models as university students in general and at an individual level rather than as representing disciplines.
These findings correspond with the understanding of young people's choice processes when deciding which line of study to pursue.When the students relate to the role models as individuals and university students in general while also matching them with their discipline prototypes, it reflects the students' need to find a study path that not only matches their interests, but offers an identity that is viable and obtainable.Similarly, Holmegaard, Ulriksen, and Madsen (2014) found that students' choice processes included a continuous match between the identities they believed to be available when choosing one line of study compared to another, and considerations concerning how these identities fit with what the students experienced as attractive and viable identities.
Our findings are also in line with the studies by Archer and colleagues who emphasise that class and gender affect the process of identifying recognizable and attractive identities (Archer & DeWitt, 2015;Archer et al., 2015).The matching process thereby evaluates the available identities as well as how interesting the discipline appears to be.Further, it suggests that the individual identity and the discipline-related identity should concur.
These findings add to the understanding of the self-to-prototype studies.Previous studies have analysed the matching of prototypes to self, to peers, to teachers or to researchers and revealed the ideas the students hold of particular disciplines when they are not exposed to real-world representatives (Andersen et al., 2014;Hannover & Kessels, 2004;Kessels & Taconis, 2012;Taconis & Kessels, 2009).The methods used by Kessels and colleagues include written surveys with closed answers, while Andersen and colleagues combine written surveys with coding of qualitative, oral interviews.Our study suggests that these different methodological approaches to some extent limit what can be implied from the mentioned studies.
Comparing the findings by Kessels and Taconis (2012) with the study by Andersen et al. (2014), indicates a difference between the written and the oral interview.While the written survey imposes particular prototypes and requires the students to take a stance concerning these, the oral interviews allow the students to reflect on and modify their conceptions of the prototypes and of themselves.In the present study, the prototypes are further challenged and reflected on through the three-way matching process.A further methodological difference between some of the studies is that while Taconis and Kessels (2009) exclude students with low self-clarity from their analysis, the lack of clarity and fixed self-perception among the students is a point in its own right in the study by Andersen et al. as it is in the present paper.
These differences can be expected to also affect the range of conclusions drawn from the studies.
Studies in which students are responding to fixed prototypes without the opportunity to challenge them and before having been exposed to real-life exponents of the prototypes, will first of all reveal the prejudices existing at an early stage and at one point in time.If students in this context express that they experience science prototypes as alien and unattractive, this could prevent the students from pursuing additional knowledge or insights into what possibilities pursuing a STEM path would open.
The prototypes can be expected to act in the first steps of students' decision processes.
The study by Andersen et al. suggests that the rather rigid prototype conception could be challenged if students are asked to reflect on it.The present paper has investigated a more complex matching process involving real-life role models which allows for the role model to interact with the student on Role model and prototype matching [88] 12(1), 2016 a personal as well as a discipline-related level, and this can be expected to make the prototype less prominent in the students' evaluation of different opportunities.Our findings emphasize that the students' decision processes do involve considering prototypes, but the students do not only consider the discipline or science in general.They relate to the role models as whole persons with identities that relate to being a university student, to studying science, but, importantly, also to other aspects of life.Prototypes, in other words, are only one component in a more full conception of identity.Further, as argued by Holmegaard (2015), the process of choosing is a negotiation process occurring over time and including a lot of uncertainty and doubts.In order to understand the role of prototypes, role models and other classed and gendered experiences and conceptions, it is important to apply methodologies that allow for that kind of complexity.
Future research could further explore this difference in young people's relating to and being affected by prototypes and real-life role models to clarify in what way the prototypes affect the students' choice processes at different stages of the decision process.
Implications
The study finally suggests some practical implications for future role model projects.It seems that role model projects should have some duration in time to present the students with different set-ups and because prolonged contact is the most effective for challenging prototypes.Further, the students should have the opportunity to meet different role models, since individual students perceive role models differently.Finally, it should be considered that role models representing traditional and nontraditional subjects have different opportunities and challenges for influencing students.It appears that students should not meet role models from familiar and unfamiliar, but related disciplines (e.g.biology and molecular biology) at the same time, because there is the danger that students create a subdivision of the familiar discipline for the unfamiliar discipline, thus reducing the opportunity to experience the new discipline as an independent opportunity.
Figure 1 :
Figure 1: The three-way matching process involved when students meet role models.
Figure 2 :
Figure 2: Mary's matches with the two math role models
Figure 3 :
Figure 3: Ewan's and Tabitha's matches with different role models
Figure 4 :
Figure 4: Peter's matches with two role models and the situation-specific prototype linkages During the role model project, Mary could not see what she was meant to use her mentors (Molly from biology and Michelle from mathematics) for.She did not talk a lot with them and had great trouble with seeing herself in them."Ithinktheyare nice and all, but it isn't really like I hang out with them.Or like they are the kind of people I hang out with."Despite their mutual interest in mathematics, Mary did not see any similarities with Michelle whatsoever.Mary describes herself as being fun and into sports and American TV shows as opposed to Michelle, whom she sees as bone-dry and "scientific".Throughout the role model project, Mary's trajectory leads her still further away from pursuing her interest in mathematics.Prior to the match-making, Mary was convinced that she would not pursue a math career.Instead, she decided to visit Reba who studied mathematics and economics "to see if that could be interesting instead."Mary was happy to meet Reba because she was not as geeky as Mary's prototypical image of a STEM student, and because Mary felt more similar to Reba than to her mentors: "We are quite similar personally.We have the same values and want to see life".Mary, however, did not attribute Reba's qualities to being a typical student of mathematics (and economics): their home they have always kept books of math problems, and now that Mary is older, they also have several brainteasers.Mary's parents feel that they have fought hard to give her good opportunities and that she should not waste the chance to get a good education.Mathematics is Mary's absolute favourite subject, and therefore she also thought it would be "absolutely awesome to study [that at the university]".She likes fiddling around with math problems; she finds it fun to spend a long time making something add up, and when it does, it makes her happy.She gets that from her father.Lately, however, Mary has given up logy that involves the human body, and since she prefers working with people to working in an office, she has decided to become a physiotherapist.Eva Lykkegaard and Lars Ulriksen[79]12(1), 2016 "She looks like someone who sells women's clothes, I really hadn't expected her to study maths[...]But maybe it's one of those things where since we were little, we have always thought that university was for geeks, those people who are much too clever compared to the rest of us."Conversely,Michelle fitted Mary's image of a typical math-student much better.
Tabitha's meeting with Martin and geology once again supressed geology as a suitable study program for her -although she eventually filed geology as her second option in her university application. | 11,300.8 | 2016-04-26T00:00:00.000 | [
"Education",
"Engineering"
] |
Blockchain and High-Performance/Low-Cost Ambient Sensors Mounted on Open Field Servers-Based IoT-Oriented Information-Sharing System for Agricultural Fields
— Our research group focuses on creating various application systems of these systems tends to focus on agricultural (agri-) advancement and security issues. First, we monitor agriculture and related environments (e.g., rice fields, meadows, and gardens) and obtain environmental agri-information (e.g., temperature, moisture, and the quality of soil) over extended periods. To do so, we use an existing Japanese high-performance/low-cost field server (FS) system, which operates sensor units uniformly. Furthermore, we develop and implement a blockchain and Twitter-based record sharing system connecting with the FS for the benefit of traditional agri-researchers, workers, and their respective managers. In regard to the study results, presenting the accuracy data quantitatively is difficult; we are unable to show the success and error rates for the systems’ data transmitting and receiving, nor the examination operation time. We believe the holistic system holds the potential to improve not only agri-businesses but also agri-skills and overall security levels.
I. INTRODUCTION
Various software and hardware agricultural (agri-)systems have been developed to help manage risks, such as security issues and the increasing impact of severe disasters. Obtaining and sharing real-time information is a crucial factor for success in practical modern agri-management. Across multiple projects, we have focused on two main directions.
First, to monitor outdoor fields (such as rice fields, meadows, and gardens) and obtain environmental information over long periods, we developed a highperformance/low-cost field server (FS, National Agriculture and Food Research Organization (NARO), Tsukuba, Japan)based system that uniformly operates sensor units. This is a diverse ambient sensor cloud system using Open FS. Our system was designed based on the assumption that some components can be accessed via the Internet. Thus, the system can monitor values from many kinds of units and dynamically adjust managing schedules according to detected conditions. FS approaches have been experimentally validated [1]- [15]. Submitted Second, we developed wearable measuring systems, combining diverse components, such as advanced sensors, and gadgets that incorporate 3-axis-acceleration sensor(s) and gyro-sensor(s). We analyzed users' motions according to dynamics and statistics [16]- [19]. We also connected isolated nodes via Digi-Mesh. These approaches can improve users' agri-jobs' skills and enhance security in agri-fields. The expansion and complexity of agri-management, including social requirements and methodologies, have exceeded those systems [1]- [19]. Many fixed FS(s) systems have been developed by various facilities and companies [1]- [3].
Recent promising studies in diverse research and business fields have investigated blockchain-based networks and similar systems [4]- [11]. Kamilaris, Fonts, and Prenafeta-Boldύ [4] presented the latest blockchain technology-based trends in agriculture and food supply fields, focusing on food science and technology. Omar et al. [5] and Cichosz, Stausholm et al. [6] constructed and presented concrete examples of blockchain technology-based platform systems for health care, and practical treatments for diabetic patients. Regarding blockchains for diverse vehicles, Liu et al. constructed and presented blockchain-enabled security systems according to electric vehicle clouds and edge computing [7], and Cebe et al. looked at a lightweight blockchain framework for forensic applications of networkconnected vehicles [8].
Internet of things (IoT)-based blockchain security and integrity have been considered in the literature. Yu et al. presented blockchain-based solutions to enhance the security and privacy levels of the IoT [9]. Machado and Fröhlich used blockchain-based systems to verify the IoT data integrity of cyber-physical systems [10]. With respect to robotics, Strobel et al. managed byzantine robots with blockchain technology for a swarm robotics decision-making scenario [11].
Cases more closely related to our focus include past studies of the (mesh-shaped) network system for agri-research and business-based management [1]- [3], [12]- [15]. Fukatsu, and Hirafuji [12] designed, developed, and handled web-based sensor network systems with FSs for practical agriapplications. The systems obtained diverse data, such as humidity, temperature, and soil components (e.g., nitrogen (N), phosphoric acid (H3PO4), and potassium (K)), from real , Japan) IoT-based frost event prediction system for agrifields with consideration of recent precision agriculture techniques [13]. Karim and Karim [14] constructed and managed a monitoring system using the Web of Things (WoT) for precision agriculture.
In this trial, considering past technical concepts and trends, we developed and implemented a blockchain-based agrirecord sharing system for common agri-workers and managers. Specifically, we selected Hyperledger Fabric Certification (CA, Hyperledger Fabric CA Project), handled with the Go language, which we determined to be a promising infrastructure that allows for the sharing of basic text and numerical data at various organizations. We can achieve quick data-sharing approaches for FS data.
A. Outline
First, we reviewed past academic and companies' results and accomplishments, and met with some predecessors who have engaged in primary industries. Then, we selected some promising techniques and electronic gadgets. We developed approximate schedules, designed, and mechanically constructed the systems, and scripted various command codes for processing. Although similar fixed FS-based systems exist, they are not optimized for network formation(s) with mobile systems, and do not cooperate with wearable sensing systems (WSs).
Thus, we aim to create new integrated structures to support these options and enhance their utility and flexibility against sudden accidents in real situations. As preliminary trials, we conducted indoor and outdoor experiments to estimate their utilities to some extent. We used typical FS constructions surrounded by small solar panels to supply constant and stable electricity. The panels were attached to the solid frames surrounding the FS to allow for balance adjustment. Twitter (Twitter Inc., California State, the U.S.) is one of the most robust, user-friendly SNS in diverse aspects and worldwide reviews. That's why we selected, trust, and utilize Twitter and its peripheral services in this successive study.
B. System
In Figs. 1-4, we present the main structure of the blockchain-based and oriented system. This study consists of three phases: (1) designing and confirming the validity of the entire system, (2) constructing and tuning various minor system settings, and (3) conducting experiments in indoor and outdoor settings. To inform Phase (1), we performed a literature review to identify achievements and similar research and tools. Then, we designed systems to be practically executed in outdoor farmlands. During this phase, we incorporated the opinions of farmers with about 10 or more years of professional agri-experience. In Phase (2), we built the system and verified the parts judged to be the essentials of the operation. In Phase (3), we performed operational experiments in indoor and outdoor settings.
In Fig. 1, vector ① indicates datasets sent from Wi-Firouters installed on Arduino microcomputer boards to general cloud servers of Twitter (Twitter Inc., USA). We utilized a commonly distributed Twitter-API for general Arduinodevelopers. For vector ②, after we open the source codes of Twitter's website, we search the intended tab codes indicating uploaded FS data with Twitter using Python-3 codes and extract the datasets. For vector ③, we upload the datasets to the private blockchain-based networks. Fig. 2 illustrates the present study's construction of Hyperledger Fabric [20]- [24]. Fig. 3 presents the deploying model of hierarchical CAs, and Fig. 4 shows the construction and flowchart of the blockchain-formed applications. Fig. 5 broadly illustrates the FS (main body unit). We use Arduino UNO3 (Arduino Inc., Italy), and code in "Arduino 1.0 language" or "Processing language." Target measures included temperature, humidity, location of users and FS, strength of natural light (including sunlight), and distance between FS (in the future, including WS) and obstacles. Hyperledger is based on a different information-sharing style than other formats (e.g., Bitcoin Core, Lightning Network, Ethereum, or Quorum). This setup makes the information-sharing process much safer. Unlike other blockchain infrastructures, Hyperledger-based systems cannot broadcast datasets extracted from Twitter to outsiders. For example, a notary-based system prevents double dealings and ensures the uniqueness of a transaction.
For the data-uploading obtained from FSs, we first set the following two formats, noting a set threshold for long sentences: (1) For longer sentences than the threshold, we present long sentences of obtained data from sensors (e.g., generated voltage value, time, temperature, or humidity) as comma separated values (CSV)-format sentence-datasets. (Fig. 6, 7, and 8). (2) For shorter sentences than the threshold, we use small, relatively simple datasets, output from the aforementioned Arduino.
III. RESULTS
In this study, motivated by improving the skills and conditions of agri-workers in primary industries, we created FSs, WSs, and real-time wireless networking-based systems. We incorporated a blockchain-based methodology, Twitter cloud services, and private databases to accumulate and store a variety of promising data. Based on our results, the proposed approaches can improve the detection of critical or fatal situations and the stability of the overall system. Although we applied the integrated system with a certain level of accuracy, we were unable to quantify the accuracy data because we did not define appropriate trial time ranges, and the error data were rather random. Thus, we are unable to report success and error rates (%) for the data transmission and reception or the operation time. However, from our experimental ranges for these errors, we suggest that the errors were not due to the characteristics of the Hyperledgerinfrastructure. Over several years of research, we have accumulated basic data from several digital gadgets. However, their accuracy, endurance, and validity should be confirmed. We were unable to present numerical sets of fixed quantitative data concerning blockchains and databases, so we must seek more suitable and valid methods to assess the systems' operations (Fig. 8).
For the FS's main body unit, we could attach more robust frames, digital video cameras, small edge-computers to deal with various data four-solar panels, and mechanisms to automatically adjust their positions.
Another remaining problem is attaining more data from diverse settings and situations (such as breeding farms, plastic farms, construction sites, or more intense or sudden natural impacts). We may consider combining machine learning and quantum computing technologies to support data collection and stability over longer experiment times.
Moreover, we could perform desktop analyses and interview key experts and users. The results of these investigations could be incorporated into computer systems and fed back to experimental users. We could also undertake further experiments and practical applications of the system to understand how to enhance the system for use in real settings. We have been planning consultation and supporting projects to improve the security and productivity of real agriworkers. Above all, we hope to start launching practical consulting and proposals to real users within 10-20 years, both in Japan and globally.
Furthermore, as these techniques are economically robust, scalable, and distributed flexibly, they could be applied to other fields and settings, such as factories or residential districts, to identify sudden accidents or crimes. Considering various non-fixed factors, such as installation locations, outdoor field conditions, and different approaches to various incidents, such as systems' tumbling, sudden halting, or other irregular accidents, we can solve diverse social problems. | 2,493.4 | 2021-10-14T00:00:00.000 | [
"Computer Science"
] |
Cricket Scrapper: A Tool Developed to Extract Cricket Players Data
Objective: To develop a generic web scraping tool that can extract cricket player’s data for research purposes. Methods: Cricket as a sport has greatly plagued the world with its importance and entertaining nature. For a team the performance of each individual player has great importance for their victory. In this paper, an automated data scraping mechanism is devised to extract and store data of cricket players from a cricket website. This website contain web-enabled database of cricket players where the data is contained on multiple web pages and proposed mechanism is designed to extract this variation as per requirement of researcher. Findings: This tool will be helpful for data scientists/researchers to extract dataset as per their requirements. Applications: This generic tool will be applicable in analyzing the formation of cricket team as per rating of individual player. Moreover, this tool will contribute as an essential ingredient for research related in the domain of cricket analytics.
Introduction
The sports industry is well known for a large number of people including sports enthusiasts, professional in sports industry and has great contributions for researchers. Researchers and other analysts are interested in exploring different patterns of games and individual players as well. In parallel to this, the volume of data is increasing incredibly that need to be analyzed for different purposes (predictive analysis and current analysis).
The cricket industry has gone through immense height in prestige and became an important aspect of our economy. Massive amounts are being spent for the development of a team and heavy revenue is generated after the success of that team in any tournament or tour matches. With the economic factor involved, the need to statistically analyze the performance of a player and important decision making process based on that analysis has never been greater.
Cricket, a game of immense thrill, entertainment and pleasure has taken the sports industry by storm. Due to the monumental importance lay towards cricket, the economical factor involved has also sky rocketed. Nowadays, analysis of each player to evaluate their performance in past games has gathered great significance towards the success of any franchise or international teams. To enable a successful analysis, the need of gathering consistent, concrete and complete data is imperative.
In the world of internet and technology, web scraping is generally considered as web indexing, which performs the operation of indexing on web pages with the help of a web crawler or a bot. A web scraper outlining standards and strategies are differentiated. Development and design of a web scraper is based on the basic parameters on, provide the link of the desired data source, a concrete mechanism to extract the data from specified source and finally storing the extracted data into a desirable format to be used hereafter.
In this study, it was inevitable to deny that the availability of cricket players' data is scares. On the web there are numerous websites providing information of players around the globe as well as a detailed informative data on their performance (match by match).
In order to analyze a player's performance by implementing various statistical approaches, and performing cricket analytics methodologies, a mechanism to rapidly extract multidimensional data of that player must be present. Our study presents a mechanism, where innings by innings data of a player is gathered through a tool, which works on the sole principles of web scraping 1 . The rest of the study in this paper is organized as follows. Section 2 presents the detail why the need of a scraping tool is vital for rapidly extracting the data available on the web. Section3 presents the methodology and multiple channels used to develop such a tool and Section 4 and 5 finally discuss and conclude this study.
Aim of the Present Study
To extract content from the web is critically important if one wants to work on updated content. This process of accessing desired content is known as "web content extraction" 2 . This process increases storage operations and indexing of data. The presented tool is generic for all webenabled databases having cricket player's information.
The main objective is to develop an automated generic scraping framework that can work dynamically to extract information about cricket for data analytics. This web scraping tool can enable a user to just type the name of the player, after which the developed scraper will automatically indexes the page where that player's data is located and delivers the question to the user to select the format of which the data is to be extracted of that player.
Storing, tabulated data from a web page into a .csv format file is time consuming and challenging to manage. Researchers have explored automated data scraping for different case studies such as soccer analysis 3 weather analysis 4 . Some researchers have worked on specific features of soccer, for instance performance rating index of a player with its usage 5 . For cricket, researchers have used publication databases to extract data for assessment of workload and its results on fast bowlers in terms of injury and performance 6 .
In contrast with the related work, it was observed that there is no benchmark data or data extraction source available for analysts and researchers for exploration of data for dynamic purposes. In addition, there is no generic tool available that can extract data for cricket analytics with customized settings. Due to this purpose, it was decided that an automated scraping framework should be devised to fetch the specified data from a website containing consistent and detail amount of that data.
For this purpose, a cricket analytics website, proven to be a perfect foundation to implement our scraping mechanism and extract data of every player around the world. The realization of this tool did not only scrimp the utilization of resources such as time and effort but also provided the user to extract data of every match, innings by innings, batting and bowling both into a .csv format with separate files for batting and bowling data of a desired player.
Methodology
To conduct web scraping and develop a fully-fledged scraping tool, the use of any programming language mostly python is considered for different case studies 4,7 . Some researchers have also worked using R studio for data scraping 8 . However, in our scenario, it was decided to use C# as the development language of the tool and implementation was carried out in asp.net 4.5 framework because it was aimed to develop cricket analytics tool that can be used for research purposes.
As it was decided to develop the tool, by gathering data from Howstats.com, the Document Object Model (DOM) of that website was taken into account. This study reflected us with the insight that Howstats.com was indeed a dynamic website and displayed data through AJAX requests.
However, the aim was to extract data of a player (innings by innings) which was present in tabular format on that website. There was a limited use of JavaScript, in displaying the match data, so it was not difficult to decipher a link to the data available. Assuring that the use of typical and traditional methods for scraping should not be adopted. The main reason for this was to avoid any unwanted circumstances and legal issues from the web server.
Therefore, the process of scraping had to be automated. HTML Agility pack was discovered, HTML Agility pack is an agile HTML parser that builds read/ write DOM and supports plain XPATH or XSLT 9 . It is a library in C# that enables us to parse HTML, PHP or even .aspx files. For better understanding, HTML Agility pack is used to implement scraping of multiple web pages present on the internet. Our objective to use this library was to breakdown the HTML page on which the tabular data of player has been located and extracts data from that HTML page.
After breaking down the desired page, the major focus was revolving around how to extract and save the tabular data from that web page to a ".csv file" format. This issue was addressed, by discovering Language Integrated Query (LINQ) in asp.net framework with AngleSharp. The use of AngleSharp enables us to define which capabilities are present for the browsing engine and making the engine appear in the form of a "browsing context", which makes the browsing engine considered a headless tab, resolving the legal issues of scraping the web page as well 10 . Through the utilization of AngleSharp library, a new document can be opened and in that document the elements and that were required can be extracted and saved in a .csv format by providing LINQ queries on that document. This mechanism provides automated extraction of player's data that can be used for further analysis in this domain 11 .
Discussion
By the succession of this study, it was discovered that scraping techniques constitutes of many tools to be implemented and considered. Despite of the fact there are web-scraping techniques available but for dynamic websites, it is still challenging researchers to extract data of their use. Researchers are still facing problems of unavailability of quality research data to conduct advance analysis on cricket data. In addition, there is an abundance of data available on cricket players but the methodologies implemented to scrap the data are widespread and heavily dependent on the nature of the website.
Conclusion
The nature of this study was to reflect and exhibit different tools that can be implemented to leverage a scraping tool, for the possession of any desired data available on the web. An importance of scraping is also demonstrated, when there are no API's available that can be used to extract data from a website directly. After analysis of available resources and their justification of use, an automated scraper was developed to extract information of cricket players with customized settings. This research work aims to contribute towards statistical comparisons between players of different teams irrespective of their team ranking in international formats of cricket, generating a rating system based on their performance in recent and past matches and ranking them according to that rating. | 2,367.4 | 2019-07-01T00:00:00.000 | [
"Computer Science"
] |
The regulation of catalytic activity of the menkes copper-translocating P-type ATPase. Role of high affinity copper-binding sites.
The Menkes protein is a transmembrane copper translocating P-type ATPase. Mutations in the Menkes gene that affect the function of the Menkes protein may cause Menkes disease in humans, which is associated with severe systemic copper deficiency. The catalytic mechanism of the Menkes protein, including the formation of transient acylphosphate, is poorly understood. We transfected and overexpressed wild-type and targeted mutant Menkes protein in yeast and investigated its transient acyl phosphorylation. We demonstrated that the Menkes protein is transiently phosphorylated by ATP in a copper-specific and copper-dependent manner and appears to undergo conformational changes in accordance with the classical P-type ATPase model. Our data suggest that the catalytic cycle of the Menkes protein begins with the binding of copper to high affinity binding sites in the transmembrane channel, followed by ATP binding and transient phosphorylation. We propose that putative copper-binding sites at the N-terminal domain of the Menkes protein are important as sensors of low concentrations of copper but are not essential for the overall catalytic activity.
Copper is an essential trace element: its ability to redox cycle between Cu(I) and Cu(II) states is utilized by cuproenzymes participating in redox reactions. However, these same properties make excess copper toxic to biological systems (1). Finely tuned complex mechanisms of copper homeostasis have evolved to allow the regulated uptake of copper, its delivery to target proteins, and detoxification by chelation and/or efflux from the cell (2)(3)(4). Copper-translocating P-type ATPases found in a variety of organisms are implicated in the delivery of copper to some cuproenzymes and in the efflux of copper from the cell (2)(3)(4).
The Menkes (MNK) 1 protein (ATP7A) is a copper-translocating P-type ATPase expressed in most tissues except the liver (5)(6)(7)(8). Mutations in the Menkes gene that cause the loss of function of the MNK protein result in Menkes disease in humans, a potentially lethal X-linked disorder associated with severe systemic copper deficiency. Menkes patients suffer from neurological and connective tissue abnormalities as a result of copper deficiency, which reduces the activity of copper-dependent enzymes (9). Through clinical and laboratory studies on Menkes disease patients, the role of the MNK protein in the absorption of dietary copper from gut epithelium, delivery of copper to cuproenzymes, and efflux from the cell were established (9,10). P-type ATPases are multispanning membrane proteins that translocate ions (e.g. H ϩ , Na ϩ , K ϩ , Ca 2ϩ , Cu ϩ , and Cd 2ϩ ) across biological membranes against an electrochemical and concentration gradient using ATP as an energy source (11,12). The catalytic cycle of P-type ATPases is characterized by the coupled reactions of cation translocation and ATP hydrolysis with a transient aspartyl phosphate formed as a part of the reaction cycle. The phosphorylation results in the enzyme changing its conformation from the high affinity cation and nucleotide binding state, E1, to the low affinity, E2, state. This transition coincides with cation translocation from the cytosolic to the lumenal side of the membrane. The release of the cation is followed by the hydrolysis of the aspartyl phosphate bond, and the return of the enzyme to the E1 state. Fig. 1 shows a proposed model for the reaction cycle of MNK based on the model proposed for classical P-type ATPases (13)(14)(15). The E1-P conformation is also characterized as ADP-sensitive because the enzyme can be dephosphorylated by ADP (reverse reaction), whereas the E2-P state is ADP-insensitive (13,14). Investigating the properties of transient aspartyl phosphate provided important structure-function information on the catalytic mechanism of P-type ATPases.
Although H ϩ , Na ϩ /K ϩ and Ca 2ϩ P-type ATPases have been studied extensively (15), copper P-type ATPases have been discovered relatively recently (16), and the mechanism of catalysis is still poorly understood. Thus, even the hallmark of P-type ATPases, the formation of the acylphosphate intermediate, has not been assessed in detail. Apart from eight highly conserved domains, the structure of human copper P-type ATPases differs considerably from other enzymes of that family (17). The most prominent feature of human Menkes protein and the related Wilson (WND) protein, a copper P-type ATPase expressed in the liver, is six repeats of the putative metalbinding motifs (GMXCXXC) at the N-terminal domain (5)(6)(7)(8).
Copper binding properties of the putative metal-binding sites (MBSs) of human copper P-type ATPases have been a major focus of studies on the structure of these enzymes. Several reports have established a stoichiometrical binding of Cu(I) to the N terminus of MNK and WND (18,19). In addition, the copper exchange between this domain and a cytosolic copper chaperone, ATOX1, has been demonstrated in vitro (20,21). At least some of the MBSs appear to be involved in regulation of copper-stimulated trafficking of MNK, which is believed to be essential for copper absorption into the body and copper detoxification (22,23). However, despite these findings, the role of MBSs in the catalysis of copper translocation, the pivotal function of MNK, is yet to be fully understood.
The observation that functional complementation of the yeast copper P-type ATPase, Ccc2, can be provided by the MNK and WND proteins has become the basis for an indirect assay. The ⌬ccc2 yeast cannot grow in copper/iron-depleted medium, as in the absence of the Ccc2 protein, copper cannot be delivered and incorporated into cuproenzyme Fet3, the function of which is essential for high affinity iron uptake. The expression of MNK or WND complements the growth of ⌬ccc2 yeast through, presumably, high affinity copper transport (24). Therefore, any MNK or WND mutant unable to complement the ⌬ccc2 phenotype has been considered inactive. The analysis of various MNK or WND mutants suggested that at least some MBSs were essential for the complementation of the yeast copper transport system and were thus presumed to be involved in the catalysis of copper translocation (25)(26)(27). In contrast to these reports, we have demonstrated directly using an in vitro vesicle 64 Cu translocation assay and in vivo whole cell 64 Cu accumulation assays that the MBSs of MNK are not essential for 64 Cu translocating activity of the MNK protein in mammalian cells (28).
In the current study, we overexpressed the wild-type and targeted mutant MNK in yeast and provided the first detailed analysis of transient phosphorylation of the human MNK protein. Through these studies, we examined the role of putative MBSs in catalysis, and we propose that their role is high affinity copper sensing/activation of the MNK protein.
Isolation of Menkes Protein-enriched Membranes-Total yeast protein extract was prepared from overnight cultures according to Ref. 34, and the level of expression of MNK was assessed by the Western immunoblotting analysis (see below). The selected yeast clone with the highest level of MNK expression was grown overnight in 500 ml of YPD (2% glucose, 1% yeast extract, and 2% Bacto-peptone). After 16 h, yeast were harvested, washed extensively with Milli-Q water, and homogenized using glass beads in 10 mM Tris-HCl (pH 7.4)-250 mM sucrose supplemented with an antiprotease mixture (Roche Diagnostics GmbH, Mannheim, Germany) and 5 mM dithiothreitol or 10 mM ascorbate. The homogenate was centrifuged at 10,000 ϫ g for 20 min, and the supernatant was collected and centrifuged at 50,000 ϫ g for 20 min and 110,000 ϫ g for 60 min. The resultant pellet (vesicles) was resuspended in the buffer described above supplemented with 0.2 mM dithiothreitol or 0.2 mM ascorbic acid. Protein concentration of the vesicle preparation was determined using Bio-Rad reagent (35). Membrane vesicles from Chinese hamster ovary (CHO) cells stably transfected with the MNK cDNA constructs were prepared as described previously (28).
Western Immunoblotting Analysis-Vesicles were lysed in 0.2% SDS prior to Western immunoblotting analysis. Proteins were resolved on a 4 -20% SDS-polyacrylamide gradient gel (Novex, San Diego, CA) and transferred onto a nitrocellulose membrane as described previously (36). MNK was detected using polyclonal rabbit antibodies raised against the N-terminal or C-terminal region of MNK (28). MNK was visualized using an enhanced chemiluminescence kit (Roche Diagnostics GmbH). As pure MNK is not currently available the relative amounts of MNK in purified vesicles from transfected cells was normalized against the level of wtMNK on the same blot by using laser densitometry (model 300A, Molecular Dynamics Inc., Sunnyvale, CA).
Copper Transport Assay-64 Cu transport assays were conducted according to the method described previously (28,37). 64 Cu was obtained from Australian Radioisotopes, ANSTO (Lucas Heights Research Laboratories, Lucas Heights, New South Wales, Australia).
[ 32 P]ATP Phosphorylation Assay-The [␥-32 P]ATP phosphorylation assay was conducted using MNK-enriched membrane vesicles from yeast and carried out on wet ice (0 ϩ 2°C) in 20 mM MOPS (pH 6.8) buffer supplemented with 150 mM NaCl, 5 mM MgCl 2 and 50 M dithiothreitol or 100 M ascorbic acid as reducing agents. Various concentrations of CuCl 2 , the copper chelator BCS or inhibitors of the reaction were added. When studying concentration-dependent inhibition of MNK by orthovanadate, vesicles were preincubated with the inhibitor in the absence of copper for 5 min on ice, and then copper and other ingredients were added, and samples were processed as described above. Each incubation contained 20 g of vesicle protein. The reactions were initiated by adding 1 M [␥-32 P]ATP (10 Ci/mmol; GeneWorks, South Australia, Australia) and stopped at various time points. The MNK protein was immunoprecipitated using anti-N terminus MNK antibodies (28) and Pansorbin (Calbiochem Biosciences, La Jolla, CA) as a source of Protein A. The immunoprecipitate was washed, and protein(s) were eluted and subjected to SDS-polyacrylamide gel electrophoresis as described in Ref. 37 followed by the autoradiography using the Kodak Biomax-MS film and Biomax-MS amplifying screen (Eastman Kodak Co.). The film was exposed for 24 -72 h at -70°C. Autoradiograms were analyzed using laser densitometry.
RESULTS
The wild-type and mutant MNK proteins were expressed stably and at a relatively high level in yeast. A similar content of the wild-type and mutant MNK protein in membrane vesicles (Fig. 2) provided a significant advantage in terms of the use of yeast over the mammalian expression system, in which the expression of MNK mutants used in the present study was unstable and variable despite the constant selection with an antibiotic G418 (22,28). The MNK protein expressed in yeast had a smaller apparent molecular weight than MNK expressed in CHO cells from the same cDNA construct (Fig. 2). The protein was not truncated, as the Western immunoblotting analysis detected both yeast and CHO cells expressed MNK using anti-MNK antibodies raised against the C terminus and the N terminus of the protein (data not shown). A similar observation has been reported previously and may be due to an altered posttranslational modification(s), such as hypoglycosylation, of MNK (26).
Yeast with a deletion of the CCC2 gene are unable to grow on the copper/iron-depleted medium, as under these growth conditions, copper is not delivered to the copper-dependent Fet3 protein(as described above). The ⌬ccc2 phenotype was complemented by wtMNK and mMBS1-3 as indicated by the growth of yeast on the copper/iron-deficient medium (Fig. 3). The ⌬ccc2 complementation was independent of the level of expression of wtMNK, as two clones of yeast expressing significantly different levels of wtMNK (Fig. 2) both complemented the growth of ⌬ccc2 yeast (Ref. 27 and results not shown). As predicted, the D1044E and mHD mutants, which lacked essential catalytic domains, as well as the empty vector control, could not complement the growth of ⌬ccc2 yeast ( Fig. 3 and data not shown). The mutation of all six MBSs (mMBS1-6) resulted in the loss of ⌬ccc2 complementation by MNK (Fig. 3).
The 64 Cu-translocating activity of the wtMNK protein expressed in yeast from human MNK cDNA was similar to that overexpressed in CHO cells from the wild-type human MNK cDNA (28) (Fig. 4). It obeyed Michaelis-Menten kinetics with an apparent K m ϭ 2.0 Ϯ 0.4 M copper, and an apparent V max ϭ 0.63 Ϯ 0.05 nmol of copper/min/mg of protein (ϮS.E.). Orthovanadate inhibited the activity of wtMNK with ϳ50% inhibition of 64 Cu translocation occurring in the presence of 50 M orthovanadate (28,36) (Fig. 4). The ATP concentrationdependence of copper translocation also followed Michaelis-Menten kinetics, with an apparent K m ϭ 17 Ϯ 7 M ATP (ϮS.E.) comparable to the K m ATP values determined for other P-type ATPases (38,39). Consistent with results obtained for other P-type ATPases, vesicles prepared from the D1044E and mHD mutants (11), as well as from the empty vector-transfected yeast, had no detectable 64 Cu translocating activity (Fig. 4). To fully understand the mechanism of copper translocation by the Menkes P-type ATPase, we have investigated transient phosphorylation of MNK and its copper dependence using isolated membrane vesicles. The wtMNK protein was phosphorylated by [␥-32 P]ATP on wet ice (0 ϩ 2°C) in a time-dependent manner with the maximum phosphorylation occurring within 20 s. (Fig. 5A). A labeling longer than 20 s resulted in irreversible acylphosphate-independent phosphorylation of MNK. 2 In the presence of a copper chelator, 1 mM BCS, there was no significant phosphorylation observed, suggesting that copper was essential for the formation of acylphosphate (Figs. 5A and 6). Hydrolysis of the [ 32 P]MNK complex with 100 mM hydroxylamine is consistent with the acylphosphate nature of the intermediate (Fig. 5A). "Pulse-chase" of MNK acylphosphate with 1 mM "cold" ATP demonstrated the transient nature of the phosphorylated intermediate, with almost a complete turnover of MNK being observed by 60 s (Fig. 5, D and E). The formation of MNK acylphosphate was reversible in the presence of 1 mM ADP or 1 mM BCS, as indicated by rapid dephosphorylation of the [ 32 P]wtMNK complex (Fig. 7). The results suggested that, by analogy with other P-type ATPases, under the experimental conditions, at least 70% of the phosphorylated MNK protein was present in the ADP-sensitive E1-like state, as ϳ30% phosphorylated intermediate remained phosphorylated following pulse-chase with ADP ( Figs. 1 and 7B). Presumably, the latter represents the E2-P ADP-insensitive state of MNK. As expected for a P-type ATPase, orthovanadate inhibited the phos- was required to inhibit the formation of acylphosphate significantly (Fig. 5A).
The inability of the D1044E mutant to form an acylphosphate intermediate from [␥-32 P]ATP (Fig. 5C) indicated that the invariant aspartate residue within the conserved (among all P-type ATPases) DKTG motif is the most likely residue phosphorylated during the reaction cycle. The lack of phosphorylation of the mHD mutant indicated that the alteration of the conserved motif within the ATP-binding loop probably prevented the ATP binding and consequently resulted in the inability of the mutant to be acyl-phosphorylated (data not shown) and transport 64 Cu (Fig. 4).
The formation of acylphosphate was copper concentrationdependent, with the maximum level of phosphorylation observed at 5 M copper (Fig. 6). A further increase in copper concentration resulted in the inhibition of phosphorylation, due, most likely, to substrate inhibition and/or protein denaturation. This is in agreement with the 64 Cu translocation vesicle assay, in which the transport of 64 Cu could not be measured at Ͼϳ5-6 M copper (27). Together, these results indicated the heterologously expressed human wtMNK in yeast was a fully active copper pump that had all the features characteristic of a P-type ATPase and was essentially indistinguishable from MNK expressed in mammalian cells. The formation of wtMNK acylphosphate intermediate appeared to be copperspecific, as no detectable phosphorylation was observed in the presence of other heavy metals, such as cadmium, zinc, and mercury (Fig. 5F).
The MNK mutant with the first three N-terminal MBSs mutated (mMBS1-3) remained catalytically active with respect to 64 Cu translocation, although its activity was reduced by 40 -50% compared with wtMNK (Fig. 4). The apparent kinetics parameters for mMBS1-3 were K m ϭ 4.0 Ϯ 0.7 M copper and V max ϭ 0.37 Ϯ 0.03 nmol of copper/min/mg of protein, and with respect to ATP, K m ϭ 13 Ϯ 4 M ATP (ϮS.E.), similar to the wtMNK (see above). The mutant protein formed transient aspartyl phosphate from [␥-32 P]ATP and turned over in the presence of 1 mM ATP identically to wtMNK (Fig. 5, D and E).
The substitution of Cys to Ser in all six MBSs of MNK resulted in the mutant mMBS1-6, which had no detectable 64 Cu-translocating activity under the assay conditions (1-5 M 64 Cu). Because of the inhibitory effect of copper (see above), we were unable to test whether the mMBS1-6 mutant could transport 64 Cu at concentrations Ͼ5 M copper. However, the mutant protein was transiently phosphorylated in a copperspecific and copper concentration-dependent manner and was turning over, as judged by the pulse-chase experiment in the presence of 1 mM ATP, in a fashion similar to the wtMNK and mMBS1-3 proteins (Fig. 5, B, D, E, and F). The phosphorylation of mMBS1-6 also appeared to be reversible in the presence of ADP or BCS (Fig. 7).
In order to clarify the role of MBSs in catalysis and to understand potential reasons for apparently conflicting results between phosphorylation assays suggesting that the mMBS1-6 mutant is catalytically active, and the lack of 64 Cutranslocating activity and ⌬ccc2 complementation (Fig. 3), we attempted to simulate the conditions of the yeast growth assay in vitro, i.e. severe copper limitation. Thus, the phosphorylation assay was conducted in the presence of copper to stimulate the phosphorylation, but copper was rendered unavailable by increasing concentrations of the copper chelator BCS, which is commonly added to yeast growth medium in order to conduct the ⌬ccc2 assay. The results of this experiment indicated that the phosphorylation of mMBS1-6 was up to 2-fold lower than wtMNK in the presence of BCS (Fig. 8). This was consistent with a faster rate of dephosphorylation of mMBS1-6 than wtMNK in the presence of BCS (Fig. 7A), suggesting that the mutant copper transporter had a lower affinity for copper than the wild-type MNK protein.
Importantly, orthovanadate, a common inhibitor for P-type ATPases (40), appeared to be a more potent inhibitor of the phosphorylation of mMBS1-6 than wtMNK (Fig. 9). Orthovanadate is a structural homologue of orthophosphate that binds to the invariant Asp 1044 residue of MNK in the ADPinsensitive E2 state (see Fig. 1). Therefore, in the absence of high affinity copper-binding sites in mMBS1-6, a relatively higher proportion of the mutant appears to be present in the E2-like conformation (Fig. 7B), which can be more susceptible to the orthovanadate inhibition, as documented for other Ptype ATPases.
The level of Tx-MNK mutant in membrane vesicles was essentially identical to the level of the wtMNK (Fig. 10A). However, the mutant was unable to complement the ⌬ccc2 phenotype in yeast (Fig. 10B) and could not transport 64 Cu in the vesicle transport assay (Fig. 10C). These findings agree with our previous report using the Toxic milk mouse mutant form of the Wilson protein (41). Importantly, the Tx-MNK mutant protein could not form transient acylphosphate using [␥-32 P]ATP (Fig. 10D). That suggests the Tx-MNK mutation in the putative transmembrane domain 8 of MNK affected high affinity copper binding, potentially within the cation channel, that prevented the mutant MNK from acquiring the putative high affinity ATP binding conformation. DISCUSSION This report presents the analysis of acylphosphate formation by the human MNK copper-translocating P-type ATPase and provides evidence for the role of putative MBSs as high affinity copper sensors. This function is predicted to be essential for the physiological role of MNK in the cell, where bioavailable copper is found at very low concentrations (42). The heterologous expression in yeast of the wild-type and mutant MNK circumvented the problems of instability and, often, poor levels of expression of MNK mutants in mammalian cells (28). Importantly, the catalytic properties of the wtMNK protein expressed in yeast were found to be essentially identical to the protein expressed in mammalian cells (28).
The ⌬ccc2 yeast growth complementation assay, commonly used to assess the effect of mutations on the activity of copper P-type ATPase mutants, is based on the ability of normal yeast to grow in the copper/iron-deficient environment. Under these conditions, the ⌬ccc2 yeast are unable to complement the high affinity iron uptake, which is dependent on the delivery of copper to a cuproenzyme Fet3 by the high affinity copper Ptype ATPase, Ccc2 (25). On that basis, any copper P-type ATPase mutant that is unable to allow the growth of ⌬ccc2 yeast on copper/iron-deficient medium has been regarded as inactive (26,27). The major limitation of the assay is that it is functional only under copper-deficient conditions generated by the addition of BCS. Consequently, the reduced affinity of an MNK mutant for copper may be manifested as the inability to complement the growth of ⌬ccc2 yeast under copper/iron-depleted conditions. Indeed, when in the present study the mMBS1-6 mutant was analyzed for copper-stimulated phosphorylation, ADP-and BCS-dependent dephosphorylation, and turnover, it appeared to be almost as active as the wtMNK protein (Figs. 5-9). However, the mutant protein could not rescue the ⌬ccc2 phenotype. To clarify the role of MBSs in the catalytic mechanism of MNK, we analyzed the acyl phosphorylation of wtMNK and mMBS1-6 by attempting to simulate in vitro the copper-deficient conditions of the ⌬ccc2 complementation assay (Fig. 8), in which extracellular copper was depleted by the copper chelator BCS. The stronger inhibition of phosphorylation of mMBS1-6 than wtMNK by lower amounts of BCS indicated decreased affinity of the mutant for copper (Fig. 8). Furthermore, a structural homologue of P i , orthovanadate, had a stronger inhibitory effect on the acyl phosphorylation of mMBS1-6 than wtMNK (Fig. 9). Interestingly, the concentration of orthovanadate required to significantly inhibit acyl phosphorylation of wtMNK was substantially higher than in the case of non-heavy metal P-type ATPases. A similar observation has been reported recently for a bacterial copper P-type ATPase CopA (43). These results can be best explained if a high affinity copper binding to MBSs in wtMNK would lead, directly or indirectly, to a higher proportion of the enzyme present in the P i /vanadate-insensitive E1 state. Unlike for other P-type ATPases, we were unable to measure detectable formation of the E2-P intermediate of MNK using 32 P i (data not shown) that may have resulted from the E1 7 E2 equilibrium being shifted toward the E1 conformation, which has a low affinity for P i (Fig. 1). This is in agreement with a poor inhibition, compared with other P-type ATPases, of acylphosphate formation by orthovanadate, the structural homologue of P i (Fig. 9). Alternatively, the affinity of MNK for P i may be much lower than for other P-type ATPases. It appears that the mMBS1-6 mutant has a decreased affinity for copper. The implication of this is that under the conditions of increased copper concentrations, the catalytic cycle of the mutant protein is similar to wtMNK, whereas under physiological (very low bioavailable) copper concentrations, the mutant protein is unable to perform its catalytic function. Overall MBSs appear to be "internal regulators" of MNK activity, as they provide the enzyme with high affinity copper sensors that, upon the binding of copper, increase the proportion of MNK in the E1 state and thus facilitate initiation of catalysis. In the absence of the MBSs, the affinity of the mutant protein for copper would be reduced, and as a result, the protein would appear inactive in the ⌬ccc2 assay conducted under copperdepleted conditions.
An analogous situation has been found in the yeast calcium/ manganese P-type ATPase, Pmr1, the N-terminal EF hand-like domain of which contains high affinity calcium binding sites (44). It has been shown that although mutations of these calcium binding sites altered the kinetics of the enzyme by increasing the apparent K m value for Ca 2ϩ , the overall catalytic activity of the mutant proteins was reduced by less than 50% (44). In our study, we were unable to measure the 64 Cu-translocating activity of mMBS1-6 in yeast. Unlike calcium, copper binds to proteins nonspecifically with a high affinity that often leads to the inhibition of their catalytic activity. In the case of MNK, the presumed inhibitory effect of copper was observed at Ͼϳ5 M copper (28) (Fig. 6). Should the mMBS1-6 mutant have an increased K m value for copper, which could be expected based on the phosphorylation studies using BCS and or-thovanadate (Figs. 7-9), we would be unable to analyze the catalysis of 64 Cu transport in membrane vesicles in vitro at Ͼ5 M copper.
In our previous study, we overexpressed the mMBS1-6 mutant in CHO cells and analyzed 64 Cu transport using whole cells and MNK-enriched membrane vesicles (28). We had found, in contrast to the current study, that the mutant MNK had a reduced but measurable 64 Cu-transporting activity in vitro (28). These data could be explained if some copper ligands and/or cell type-specific protein-protein interactions contributed to 64 Cu translocation by the mutant MNK expressed in CHO cells as opposed to yeast cells. Importantly, the expression level of endogenous hamster MNK, which shares Ͼ95% identity with human MNK, was not increased in these CHO cells transfected with the mMBS1-6 mutant as determined by Northern analysis (results not shown). It has been reported earlier that the mMBS1-6 mutant expressed in CHO cells, unlike its wild-type counterpart, could not undergo copperstimulated trafficking from the trans-Golgi network to the plasma membrane, where it is expected to efflux copper from the cell (22,28). The mutant-transfected cells also had a copper hyperaccumulation phenotype and reduced copper resistance, but only when they had been exposed to increased concentrations of copper (28). In light of the findings presented in the current study, one can propose that due to decreased affinity for copper, the mMBS1-6 mutant was unable to transport low physiological concentrations of copper, but it "became" catalytically active when higher concentrations of copper were presented to cells. Under physiological conditions, copper may be delivered to the MBSs of MNK via the high affinity copper chaperone ATOX1 (20,21), which would permit the initiation of copper translocation under these conditions. Importantly, the mutation of conserved Met-1393 to Val, as occurs for the Toxic milk mouse mutation in the Wilson protein (30), has also resulted in an inactive MNK protein (41) (Fig. 10). The Met-1393 residue is highly conserved in copper P-type ATPases and is proposed to be located within the putative transmembrane domain 8. A soft Lewis base, methionine, in a transmembrane domain may be involved in the co-ordination of copper and therefore constitute a part of a high affinity copper binding site in the cation channel of the MNK protein. Until now, there has been no information on the order of events in the catalytic cycle of copper P-type ATPases. According to Fig. 1, which is based on the Ca 2ϩ P-type ATPase paradigm, copper is expected to bind to the MNK protein and lead to conformational changes essential for high affinity ATP binding and hydrolysis. It can be expected, therefore, that the disruption of high affinity copper binding sites in the cation channel would prevent ATP binding and phosphorylation. Here, we demonstrated that the M1393V mutation not only causes the loss of 64 Cu-translocating activity but results in the mutant protein being unable to become transiently phosphorylated. Although more evidence may be required to prove unequivocally that Met-1393 constitutes a part of the transmembrane copper channel, information provided here suggests that the binding of copper in the putative copper-binding sites within transmembrane domains is required for ATP hydrolysis. This finding also emphasizes the fact that although the mutation of the N-terminal MBS1-6 has some effect on catalysis, it does not appear to prevent copper binding to those sites, presumably in transmembrane domain(s), which are associated with conformational changes essential for high affinity ATP binding and the acylphosphate formation. Furthermore, these sites appear to be copper-specific, as no stimulation of acyl phosphorylation of MNK by the heavy metals cadmium, zinc, and mercury has been observed (Fig. 5F).
Studies on transient phosphorylation of hamster MNK by [␥-32 P]ATP have been previously reported (45), but the conditions of the assay favored significant non-acylphosphate phosphorylation, 2 37°C for 90 s, as opposed to the more conventional conditions for P-type ATPases (0 ϩ 2°C for Ϸ20 s) used in the current study. In addition, the rate of acylphosphate turnover was very slow in the earlier studies (ϳ50% after 5 min at 37°C), in contrast with the results presented here (at least 75% after 60 s on ice), which are more consistent with the established models for P-type ATPases (45).
Unlike in lower organisms, human MNK copper P-type ATPase has the dual function of delivery of copper to cuproenzymes in the secretory pathway (29,46) and efflux from the cell mediated by copper-regulated trafficking to the plasma membrane (4). Accumulated evidence suggests that MNK has evolved to acquire both functions by regulating the catalytic activity and intracellular localization through interaction of copper with N-terminal MBSs (4).
In conclusion, in this study, we analyzed the mechanism of MNK phosphorylation and provided evidence that although the putative MBSs do not participate directly in the catalytic cycle of the protein, they appear to be essential for the sensing of very low concentrations of copper in the environment or, alternatively, in capturing low concentrations of copper and supplying it to the copper-binding sites in the channel. | 6,778.8 | 2001-07-27T00:00:00.000 | [
"Biology",
"Chemistry"
] |
Flexible and Porous Nonwoven SiCN Ceramic Material via Electrospinning of an Optimized Silazane Solution
The processing of nonwoven porous ceramics by combining the polymer‐derived ceramic (PDC) route with electrospinning offers an excellent strategy for developing new porous ceramic structures. Currently, the manufacturing of nonwoven porous materials from preceramic polymers is conducted by trial and error approaches. The necessity to predict the e‐spinnability conditions from properties assessment offers a potential tool to control the manufacture and the resulting material morphology. This work assesses the relationship between the preceramic polymer solutions and the resulting electrospun nonwoven morphology. For this, a commercially available liquid oligosilazane (Durazane 1800) is selectively cross‐linked to achieve a reliable and spinnable preceramic polymer (HTTS), which is then dissolved in tetrahydrofuran (THF). Based on the investigation of the rheological behavior of various polymer concentrations, three different polymer solution regimes (diluted, semidiluted, and concentrated) are identified and correlated with the resulting morphology of the e‐spun material (spherical particles, beaded fibers, and seamless fibers). After the pyrolysis, the nonwoven ceramics manufactured from the solution with 65 wt% of HTTS is converted to a SiCN ceramic nonwoven with 86% of open porosity, profiling as a promising candidate for developing high‐performance filter systems and catalytic supports in harsh environments.
DOI: 10.1002/adem.202100321 The processing of nonwoven porous ceramics by combining the polymer-derived ceramic (PDC) route with electrospinning offers an excellent strategy for developing new porous ceramic structures. Currently, the manufacturing of nonwoven porous materials from preceramic polymers is conducted by trial and error approaches. The necessity to predict the e-spinnability conditions from properties assessment offers a potential tool to control the manufacture and the resulting material morphology. This work assesses the relationship between the preceramic polymer solutions and the resulting electrospun nonwoven morphology. For this, a commercially available liquid oligosilazane (Durazane 1800) is selectively cross-linked to achieve a reliable and spinnable preceramic polymer (HTTS), which is then dissolved in tetrahydrofuran (THF). Based on the investigation of the rheological behavior of various polymer concentrations, three different polymer solution regimes (diluted, semidiluted, and concentrated) are identified and correlated with the resulting morphology of the e-spun material (spherical particles, beaded fibers, and seamless fibers). After the pyrolysis, the nonwoven ceramics manufactured from the solution with 65 wt% of HTTS is converted to a SiCN ceramic nonwoven with 86% of open porosity, profiling as a promising candidate for developing high-performance filter systems and catalytic supports in harsh environments.
Commercially available preceramic polymers usually are composed of cyclic and linear units with a complex structure and low molecular weight, which is a drawback for the spinnability. [4] Therefore, to date, it is mandatory to use organic polymers as spinning aid and additional intermediated steps to achieve e-spinnability. However, suitable organic polymers have a meager carbon yield after pyrolysis and act as sacrificial filler, which harms the mechanical stability.
To avoid using an organic polymer as e-spinning additive, we used a solid polysilazane synthesized from a commercially available oligosilazane [25] to achieve e-spinning conditions. Suitable e-spinning solutions had to be developed to ensure a continuous e-spinning process and the processing of regular-shaped fibers. Furthermore, it was necessary to investigate the polymer to ceramic transformation by pyrolysis. This work aimed to develop an approach to establish the e-spinning process for the processing of nonoxide ceramic nonwoven with high interconnectivity of open porosity for applications in harsh environments, e.g., in the field of energy, catalysis, and high-performance filtration systems.
Synthesis of HTTS
As the Durazane 1800 used in this work, the most commercially available silazanes are liquid oligomers. In view thereof, for the processing of fibers, a solid and processable polymeric precursor must be achieved. For this reason, we applied the selective crosslinking procedure used by Flores et al. [25] The simplified reaction mechanisms and their physical state before and after the cross-linking reaction are shown in Figure 1.
The main advantage of using selective cross-linking via dehydrocoupling reaction is to tailor the silazane molecular weight. The reaction is slow and can be easily stopped at any time using an appropriate inhibitor, such as calcium borohydride bis(tetrahydrofuran) (Ca(BH 4 ) 2 .2THF). The resulting polysilazane HTTS has an increased molecular weight (M w ) of 8010 g mol À1 in comparison with 4836 g mol À1 for Durazane 1800 with a polydisperse index (M w /M n ) of 9.4. [25] In view thereof, the previously liquid and oligomer Durazane 1800 becomes a soluble thermoplastic polymer with properties suitable for the e-spinning process.
Rheological Investigation of the Preceramic Polymer Solutions
The investigation of the rheological behavior of the HTTS solutions provides important information about the interactions between the polymer chains in solution, which is crucial to understand and predict the polymer morphology required for e-spinning. [23,24,26] However, although this knowledge is well established for the e-spinning of organic polymers, there is no report in the literature about the rheological characterization of preceramic polymers intended for e-spinning.
The synthesized HTTS preceramic polymer was solubilized in THF with different concentrations ( Table 1) to evaluate the rheological behavior. The typical apparent viscosity of the prepared solutions as a function of shear rate is shown in Figure 2.
The analysis of the curves shown in Figure 2 clearly distinguishes between two different groups regarding the rheological behavior. Samples with less than 50 wt% of the polymer have a predominantly Newtonian behavior. Therefore, increases in the shear rate have no significant effect on the viscosity. A pronounced shear-thinning behavior appears for samples with higher polymer concentrations (>60 wt%). This typical viscoelastic behavior is attributed to the polymer chain alignment in the applied shear stress direction and consequently reducing the viscosity of the respective solutions with higher shear rates through disentanglement.
Furthermore, the results indicate that the solution state switches from a diluted to a semidiluted regime in the range of 50-60 wt% of HTTS concentration, indicating that the Figure 1. Synthesis of the HTTS polysilazane from the oligosilazane Durazane 1800 by selective cross-linking. Reaction mechanism details are described elsewhere. [25] www.advancedsciencenews.com www.aem-journal.com polymer chains gradually start to interact through chain entanglement by increasing the polymer concentration. This change from diluted to semidiluted state allows the formation of fibers, instead of beaded fibers or even droplets during electrospinning. [23,27] As the polymer concentration is increased to more than 65 wt%, a second transition point occurs, characterized by a remarkable increase in the viscosity. This phenomenon is better visualized by plotting the mean viscosities at the plateau regime (η ∞ ) as a function of the polymer concentration ( Figure 3).
As will be further discussed in the e-spinning experiments section, this transition leads to seamless fibers, as an increase in the solution concentration leads to a higher degree of polymer chain entanglements. [26,27]
Manufacturing of Preceramic Polymer Fibers by e-Spinning
The e-spinning process with the HTTS solutions was performed as described in the experimental procedure. The respective spinning results are shown in Figure 4. Figure 4 clearly shows the strong influence of the HTTS concentration, i.e., the solution viscosity on the morphology of the resulting materials. As predicted by the rheological behavior, the e-spinning of the diluted solutions ranging from 30 to 50 wt% of HTTS yields only near-spherical polymer particles, as the Taylor cone formation was not possible. The Taylor cone formation is a precondition for the processing of regular-shaped fibers via the e-spinning process. [28,29] Thus, the polymer solutions were directly ejected from the needle tip in the form of a spray jet. For this interval of concentration, the term e-spraying should be more appropriate to describe this processing scenario. In summary, for diluted solutions, the lack of chain entanglements avoids the drawn jet continuity and leads only to the deposition of small droplets at the collector.
Particles with a mean diameter size of 7.8 AE 2.6, 8.4 AE 2.0, and 9.4 AE 1.4 μm were obtained from samples HTTS_1, HTTS_2, and HTTS_3, respectively. It is worth mentioning that the increase in polymer concentration also leads to the formation of more homogeneous particles with a narrower size (Figure 4 and 5). However, in addition to particles, the formation of small fibrils was observed for sample HTTS_3, which is in agreement with the rheological results and confirms the beginning of the transition to a semidiluted regime.
The increase in the polymer concentration to 60 wt% (sample HTTS_4) leads to the formation of beaded fibers. This morphology results from two types of instability during the e-spinning process, the Rayleigh and varicose instabilities, [30][31][32][33] which favor spheres formation because of the smallest surface area to diminish the surface tension. In the sample with 65 wt% HTTS concentration (HTTS_5), the viscosity seems to have inhibited beads-onstring fiber formation, leading to seamless fibers. Increasing the HTTS concentration in the solution to 70 wt% (HTTS_6), the high viscosity combined with the low solvent concentration leads to nonhomogeneous tape-like morphology with a broader size distribution, here named as flat-scattered fibers. For better visualization, the measured average fiber diameters are shown in Figure 5.
Investigation of the Pyrolysis Behavior and NonWoven Ceramic Formation
The pyrolysis of organosilazanes up to 1000 C in N 2 leads to the formation of an amorphous ceramic composed of a SiC x N 4Àx (0 ≤ x ≤ 4) matrix and a segregated carbon phase (so-called free carbon). [34][35][36][37] For ceramic fibers processed through pyrolysis, the crosslinking step is crucial to convert the thermoplastic preceramic polymer to a thermoset, avoiding shape changes during the ceramization. [38] For this purpose, considering the availability of vinyl groups in HTTS, a radical initiator (AIBN) was used to perform the cross-linking step. Figure 6 shows the resulting fibers pyrolyzed as described in the experimental procedure.
After pyrolysis, defect-free cylindrical SiCN fibers with a diameter <4 μm were obtained, which is far below the reported diameter for the same ceramic fibers processed by melt spinning www.advancedsciencenews.com www.aem-journal.com (30-100 μm) [25] and also lower than commercially available SiC fibers (%10 μm). [39] Although the fabricated fibers are still in the range of micrometers, their quality in terms of homogeneity and morphology is remarkable, as no additional organic polymer (spinning aid) was used. The decomposition gases derived from organic additives would evolve during pyrolysis, which could lead to defects and a higher shrinkage, both very deleterious for the fiber properties. Further optimization of the e-spinning process should lead to fibers with a diameter in the range of nanometers. Another essential characteristic of the PDCs technology is the significant shrinkage during the pyrolysis, mainly assigned by the remarkable increase in density and the mass loss during the conversion from a polymer to ceramics. The fiber average diameter and the nonwoven dimensions were used to estimate the shrinkage for sample HTTS_5 during pyrolysis (Figure 7).
Comparing the fiber diameter before and after pyrolysis, a shrinkage of 22% is noticed, as the fiber mean diameter decreases from 4.1 μm (before pyrolysis) to 3.2 μm (after pyrolysis). The shrinkage is one of the drawbacks of the PDCs technology, and this phenomenon must be considered for further product designing. Regarding the ceramic nonwoven porosity, www.advancedsciencenews.com www.aem-journal.com the percentage of total porosity was estimated at approximately 86%. The calculation was performed as described in the experimental section by considering the measured SiCN ceramic true density (ρ T ¼ 2.416 g cm À3 ) and the calculated nonwoven density (ρ A ¼ 0.347 g cm À3 ). Finally, it is noteworthy to mention that because of the thin ceramic fibers the porous nonwoven ceramic is flexible and presents sufficient mechanical strength to be bent and handled without breaking or losing its structural integrity ( Figure 8). The good mechanical properties, the well-known thermal and chemical stability, and the good shapeability open many promising applications, such as in filtering systems, catalyst supports, and porous burners.
Conclusions
Many preceramic polymers have a much lower molecular weight compared with organic polymers. Therefore, it is crucial to add organic polymers as spinning aid to enable processing via electrospinning. However, such organic additives act as sacrificial filler, which leads to a decreased ceramic yield, higher shrinkage, and the formation of pores within the formed ceramic fibers during pyrolysis and results in inferior mechanical properties. The approach developed here overcomes these problems. In this work, solutions of the polysilazane HTTS in THF, synthesized by selective cross-linking of the oligosilazane Durazane 1800, were specially designed and comprehensively investigated for the electrospinning process. The rheological measurements of different feedstock solutions were correlated with the resulting fiber morphology after electrospinning, to predict the morphology of the e-spun preceramic polymers. For diluted solutions with a polymer concentration lower than 50 wt%, only nearspherical particles with diameters in the range of 7-10 μm were obtained, as no interactions of the polymer chains are present. By increasing the polymer concentration up to 60 wt%, the HTTS solution rheology reveals a shear-thinning behavior, indicating the transition to a semidiluted state due to a certain degree of chain entanglements. Fibers with beads-on-string morphology are typical for this solution regime. For concentrations higher than 65 wt%, the substantial increase in the viscosity and a viscoelastic behavior due to a higher degree of chain entanglements lead to seamless fibers during the electrospinning process.
The pyrolysis behavior was investigated for samples with a concentration of 65 wt% of HTTS containing AIBN as curing agent. The pyrolysis led to defect-free ceramic SiCN fibers with a diameter <4 μm. Due to the resulting thin diameter, the porous ceramic nonwoven possesses excellent flexibility. Finally, the ceramic nonwoven porous structure indicates a total porosity of about 86%.
The developed approach enables the electrospinning of preceramic polymers without e-spinning aids and helps to conduct this process methodically. Further improvements of the electrospinning process combined with the functionalization of the
Experimental Section
Materials: The commercially available oligosilazane Durazane 1800 (Merck KGaA, Germany) was selected as the SiCN precursor. Tetra-nbutylammonium fluoride (TBAF) (1 M in THF) was used as the catalyst, while calcium borohydride bis(tetrahydrofuran) (Ca(BH4) 2 ·2THF) was used as an inhibitor to stop the reaction, both purchased from Sigma Aldrich (Germany). THF p.a. (Neon, Brazil) was used for the synthesis and the e-spinning experiments, and the initiator 2,2 0 -azobisisobutyronitrile (AIBN), used for curing, was supplied by Akzo Nobel (The Netherlands). The educts' preparation and handling and all synthesis reactions were conducted in a dry argon atmosphere.
Selective Cross-Linking of the Oligosilazane: The procedure adopted to perform the selective chemical cross-linking of the oligosilazane Durazane 1800 based on a previous work reported by Flores et al. [25] In a typical reaction, 20 g of Durazane 1800 was dissolved under vigorous stirring in 40 g of THF, and 0.25 wt% of the catalyst TBAF, concerning Durazane 1800, was added dropwise to the reaction. The solution was stirred for 90 min, and an excess of the inhibitor (Ca(BH 4 ) 2 ·2THF) was added to stop the reaction. The mixture was stirred for another 5 min and afterward filtered off. Finally, the solvent was removed under reduced pressure at room temperature, obtaining a solid and colorless polysilazane, hereafter called HTTS.
E-Spinning Experiments and Pyrolysis: Solutions varying from 30 to 70 wt% of HTTS in THF were prepared for the electrospinning experiments. The respective solution was placed in a 5 mL syringe coupled with a metallic needle of 0.7 mm inner diameter clamped to a high-voltage power supply applying 10 kV and fixed at 10 cm from an aluminum collector. The sample was extruded at a flow rate of 1 mL h À1 controlled by an infusion syringe pump arranged horizontally.
For the pyrolysis, 1.5 wt% of the curing agent AIBN, referred to HTTS amount, was added to the spinning solution. Then, the resulting polymer nonwoven was cured at 70 C for 4 h to enable cross-linking before melting. Afterward, the sample was placed in a carbon crucible and heated up to 1000 C (2 K min À1 ) under nitrogen flow in a tubular furnace.
Rheological Characterization: Parallel plate rotational rheology of the polymer solutions was performed in a Thermo Scientific Haake Mars II rheometer (Thermo Fisher Scientific, USA) using a PP60 measuring geometry (60 mm diameter). A 0.15 gap was used in the tests, and the shear rate varied from 0.1 to 250 s À1 in 60 s. All measurements were performed at a temperature of 23 AE 1 C.
Fiber Morphology Investigation: The e-spun samples were analyzed to assess the fiber morphology before and after the pyrolysis by scanning electron microscopy TM3030 (Hitachi, Japan), operating at an acceleration voltage of 15 kV. The micrographs obtained were analyzed by the ImageJ software (National Institutes of Health, USA), and 200 measurements for either fibers or particles were used to estimate the reported mean values.
Investigations of the NonWoven Properties: The ceramic nonwoven porosity (ε) was estimated by using Equation 1 where ρ A is the nonwoven apparent density and ρ T is the true density of the derived SiCN material. For the apparent density, the volume was obtained by measuring the nonwoven dimensions, while for the true density, the ceramic volume was determined with a gas pycnometer (Accu Pyc II 1340 -Micromeritics Instruments Corporation, USA). | 3,974.6 | 2021-06-29T00:00:00.000 | [
"Materials Science"
] |
On the Path towards a “Greener” EU: A Mini Review on Flax (Linum usitatissimum L.) as a Case Study
Due to the pressures imposed by climate change, the European Union (EU) has been forced to design several initiatives (the Common Agricultural Policy, the European Green Deal, Farm to Fork) to tackle the climate crisis and ensure food security. Through these initiatives, the EU aspires to mitigate the adverse effects of the climate crisis and achieve collective prosperity for humans, animals, and the environment. The adoption or promotion of crops that would facilitate the attaining of these objectives is naturally of high importance. Flax (Linum usitatissimum L.) is a multipurpose crop with many applications in the industrial, health, and agri-food sectors. This crop is mainly grown for its fibers or its seed and has recently gained increasing attention. The literature suggests that flax can be grown in several parts of the EU, and potentially has a relatively low environmental impact. The aim of the present review is to: (i) briefly present the uses, needs, and utility of this crop and, (ii) assess its potential within the EU by taking into account the sustainability goals the EU has set via its current policies.
Introduction
Today, flax is grown in more than 50 countries around the globe [1], with Canada, India, Russia, Kazakhstan, and China being some of the major producers [2,3] (Figure 1). In 2020, Canada and France were the two biggest exporters of flaxseed and flax fiber, respectively [4]. It has been estimated that the market of flax is expanding rapidly [5,6], indicating renewed interest, possibly attributed to recent research developments [7], as well as the recognition of the multiple applications of flax [8]. Concurrently, crops that could contribute to climate change mitigation by reducing the environmental impact of agriculture are under the spotlight [9]. The degradation of the environment (amongst others) has led the United Nations to deliver a set of Sustainable Development Goals that aim to achieve collective prosperity for both people and the planet [10]. In the European Union, the Common Agricultural Policy (CAP), the European Green Deal (EDG), and Farm to Fork (F2F) aim to achieve the same goals [11][12][13]. Policymakers, researchers, and governing bodies across the EU are constantly searching for smart solutions to the everlasting problems of climate change and food insecurity. The utilization of resourceful crops could be a viable answer [14]. The present review aims to concisely present the potential of flax and evaluate whether or not it constitutes a crop that could facilitate the implementation of current European agricultural policies and strategies.
Uses and Applications
Flax is a multipurpose crop. In fact, the Latin term "usitatissimum" in the scientific name of flax translates as "the most useful" one, due to its multiple uses [15]. The two main products of the crop are its seeds and its fibers. The fibers have many applications in the textile industry and are considered rather strong, mainly due to their high cellulose content [16,17]. The mechanical and physical properties of flax fibers are presented in Table 1. From dressing fabrics and bed sheeting to twine and ropes, the fibers of flax have been heavily utilized for industrial purposes [16]. Some of the highest-quality textiles such as damasks and lace are made from the fibers of flax (also known as linen) [16]. The fibers are also utilized in the paper industry and, interestingly, they have been used in the printing of banknotes [16]. They can be used to reinforce recycled paper, improving its strength, in the production of insulation batts instead of glass fibers, the production of wound dressing cloths [18], and the production of geotextiles [19]. Recently, the automotive industry started utilizing flax fibers as an eco-friendly source of composite materials [20].
The seeds of flax are rich in bioactive substances such as alpha-linolenic acid (ALA), proteins and lignan, rendering them as a nutritious source of human and animal food [21,22] (Table 2). Flaxseeds can be consumed as whole seeds or milled powder [23] and are rich (~40%) in oil [24]. The oil is edible and very nutritious [25,26] (Table 3) and has a pleasant taste and aroma [27]. It also has a high content of linolenic acid (48.5-68.5 %), a low content of saturated fatty acids, and is rich in ω-3 and ω-6 [27]. According to Madhusudhan [28] and Bilalis et al. [29], flax oil is amongst the best sources of omega-3fatty acids and perhaps the richest source of a-linolenic acid.
Flax oil is rich in bioactive compounds that can potentially prevent inflammation, hormonal disorders, cardiovascular diseases, infections, bone diseases, cancer, and many more [30,31] by modulating several signaling pathways [32]. Besides their suggested application in the prevention of disorders, flaxseeds and flax oil have also been found to possess remedial properties [23]. Patients with peripheral artery disease [33], cardiovascular diseases [34], and hemodialysis patients with dyslipidemia [35] have reportedly been found to have a positive response to the consumption of flaxseed. Parikh et al. [36] suggested that the consumption of flaxseed might alleviate arrhythmias and conditions that may result in heart dysfunction. Similarly, Tang et al. [23] stated that the consumption
Uses and Applications
Flax is a multipurpose crop. In fact, the Latin term "usitatissimum" in the scientific name of flax translates as "the most useful" one, due to its multiple uses [15]. The two main products of the crop are its seeds and its fibers. The fibers have many applications in the textile industry and are considered rather strong, mainly due to their high cellulose content [16,17]. The mechanical and physical properties of flax fibers are presented in Table 1. From dressing fabrics and bed sheeting to twine and ropes, the fibers of flax have been heavily utilized for industrial purposes [16]. Some of the highest-quality textiles such as damasks and lace are made from the fibers of flax (also known as linen) [16]. The fibers are also utilized in the paper industry and, interestingly, they have been used in the printing of banknotes [16]. They can be used to reinforce recycled paper, improving its strength, in the production of insulation batts instead of glass fibers, the production of wound dressing cloths [18], and the production of geotextiles [19]. Recently, the automotive industry started utilizing flax fibers as an eco-friendly source of composite materials [20]. The seeds of flax are rich in bioactive substances such as alpha-linolenic acid (ALA), proteins and lignan, rendering them as a nutritious source of human and animal food [21,22] ( Table 2). Flaxseeds can be consumed as whole seeds or milled powder [23] and are rich (~40%) in oil [24]. The oil is edible and very nutritious [25,26] (Table 3) and has a pleasant taste and aroma [27]. It also has a high content of linolenic acid (48.5-68.5%), a low content of saturated fatty acids, and is rich in ω-3 and ω-6 [27]. According to Madhusudhan [28] and Bilalis et al. [29], flax oil is amongst the best sources of omega-3-fatty acids and perhaps the richest source of a-linolenic acid.
Flax oil is rich in bioactive compounds that can potentially prevent inflammation, hormonal disorders, cardiovascular diseases, infections, bone diseases, cancer, and many more [30,31] by modulating several signaling pathways [32]. Besides their suggested application in the prevention of disorders, flaxseeds and flax oil have also been found to possess remedial properties [23]. Patients with peripheral artery disease [33], cardiovascular diseases [34], and hemodialysis patients with dyslipidemia [35] have reportedly been found to have a positive response to the consumption of flaxseed. Parikh et al. [36] suggested that the consumption of flaxseed might alleviate arrhythmias and conditions that may result in heart dysfunction. Similarly, Tang et al. [23] stated that the consumption of flax-based food products could help patients with diabetes. The phenolic compounds of flax have also been proposed as possible antibiotic alternatives [18]. Fatty acids such as ω-3 and ω-6, which are major components of flaxseed oil, can be used in cosmetics [37] as they improve the health of hair and have been associated with the regeneration of skin tissues [27]. Table 2. Flax seed nutritional value per 100 g of seed. Data retrieved from https://fdc.nal.usda.gov/ fdc-app.html#/food-details/169414/nutrients (accessed on 10 January 2023) [38].
Content
Amount Unit Besides its seeds and oil, it should also be mentioned that flax sprouts and microgreens have been proven to be highly nutritious [39]. The consumption of sprouts (young seedlings of freshly germinated seeds) and microgreens (young seedlings that have developed their first true leaves) has recently attracted increasing attention [40]. According to some studies, flax sprouts and microgreens are richer in essential micronutrients (such as Fe, Mn, and Zn) and have higher antioxidant capacity compared to the seeds [40,41], and are rich in water-soluble proteins, free amino acids, and free fatty acids [42].
According to Liu et al. [43], the inclusion of linseed (seeds and/or seed oil) in the rations of goats can regulate their blood lipid content. Similarly, Brito and Zang [44] concluded that flaxseeds could be beneficial to the health of dairy cows. Weston et al. [45] found that flaxseed consumption increases the lifespan and improves the liver function in hens, and Popescu et al. [46] noted an enhancement in the intestinal health of flaxseedfed chicken. Based on the findings of Neelley and Herthal [47], feeds that include flax can reduce the chances of laminitis in horses, while Ngcobo et al. [48] reported that the consumption of flax could improve the semen quality of livestock. The high nutritional value of flax not only benefits the health of productive animals but also the quality of the meat, dairy, and eggs they produce, thereby improving the quality of value-added animal products [49]. For instance, the quality of pork meat was improved in flax fed pigs [50] and the quality of eggs was improved in hens [51]. These findings, alongside the nutritional value of flax seeds, can also be perceived as a potential enhancement of food security (FS). In order to achieve FS, food must not only be adequate but also nutritious [52]. This aspect of FS is now more relevant than ever, as the rising anthropogenic CO 2 emissions have been suggested to negatively affect the nutrient content of major food crops including rice, potato, wheat, maze, and barley [53][54][55]. According to the literature, high CO 2 atmospheric levels could instigate micro, macro, and protein deficiencies [53][54][55]. Interestingly, in a study by Hacisalihoglu and Armstrong [56], the authors evaluated several flax varieties and distinguished six of them (Omega, Clli1374, Clli1418, Clli1821, Clli643, and Clli2033) as superior due to their high nutritional content under elevated CO 2 stress. Lastly, the profile of flaxseed oil makes it suitable for biodiesel production [58]. Even though flaxseed-based biofuels have a lower energy potential (48.8 MJ/kg) in comparison to fossil diesel fuels (57.14 MJ/kg) [59], the use of plant based-fuels can reduce the emissions of greenhouse gases and, in the near future, facilitate climate neutralization. The stems of the plant can also be turned into pellets for energy production [60]. Other uses include phytoremediation applications [61], biochar production [62], as a nematicide [63], and as a food preservative in the food industry [18,64].
Crop Needs and Management
The fertilization needs of flax have been the subject of several studies. Based on the literature, flax responds positively to the application of nitrogen (N) fertilization as it promotes vegetative growth, canopy development and structure, and improves the yield [65][66][67]. Unsurprisingly, a wide range of N fertilization rates (20-150 kg N per ha) have been tested and proposed for flax in the literature [66,67]. Of course, the optimal level of N fertilization also depends on the soil properties and the cultivar [66]. Excessive use of N fertilization in fiber flax has been proven to negatively affect the yield as the prolongation of the vegetative phase results to greater susceptibility lodging and diseases [68]. Notably, in a study by Herzog et al. [65], the authors concluded that N fertilization is negatively correlated with the quality of flaxseed, as increasing N fertilization rates reduced the alinoleic acid content in linseed oil. On the contrary, balanced phosphorus (P) fertilization can improve the quality traits of flax seed oil [69]. Moreover, P fertilization has been found to improve the dry matter accumulation, the yield (both in seed and in fiber), and the yield components of flax [69][70][71][72][73]. Similarly, the application of potassium (K) fertilization can increase the biomass of the plants and the grain yield [74,75], although according to Berti et al. [76], K fertilization (as well as P) does not affect the content, composition, and yield of flax oil. In a review by Cui et al. [67], the authors concluded that the application of organic fertilizers in flax significantly improves the quality of the grains, but its effects on the yield were in doubt. In the same study, the authors recommended (based on the literature) that the optimal fertilization in flax ranges among 75-150 kg N/ha, 35-75 kg P 2 O 5 /ha, and 35-52.5 kgK 2 O/ha. The combined application of organic and inorganic fertilization has also led to promising results as it improves the agronomic traits of the crop whilst maintaining soil fertility and enhancing the efficiency of the fertilizers [77,78].
Similar to fertilization, the irrigation needs of the crop are dependent to the climate, the soil properties, and the cultivar [79], thereby a wide range of drought tolerance has been reported in flax [80]. Several studies highlight the importance of irrigation, as the short root length of flax often prevents it from reaching deep underground water [81,82], and water insufficiency can negatively affect the agronomic traits and the yield of flax [83][84][85]. Other studies suggest that flax could be regarded as a drought tolerant crop, and its tolerance could partially be attributed to proline, which can regulate cell osmosis [86]. The water needs of flax in the literature vary vastly. Case in point, according to Singh et al. [87], 450-750 mm of water per season is sufficient for flaxseed in India, yet in a study by Kakabouki et al. [88], in Greece, less than 400 mm of water is enough for the crop to perform well. Overall, fiber flax cultivars are characterized by higher water needs compared to flaxseed cultivars [89]. Interestingly, N fertilization can affect the crop's irrigation needs and mitigate drought stress [90,91]. This finding is in accordance with the work of Rajabi-Khamseh et al. [92], who observed that the use of plant-growth promoting microorganisms (PGPM) had a positive effect on the yields of flax and mitigated drought stress.
It should be noted that the literature suggests that PGPMs can improve the nutritional status of plants and the availability of N [92]. Thingstrup et al. [93] concluded that the presence of arbuscular mycorrhiza fungi (AMF) is crucial for flax when the soil P levels are below 40 mg P per kg of soil. Similarly, Rahimzadeh and Pirzad [94] found that AMF and phosphate solubilizing bacteria can improve the performance of the crop and the quality of flaxseed. Studies conducted regarding the efficiency of PGPM-based biofertilizers in flax have reported promising results both quantitatively and qualitatively [95,96]. As the agri-environmental policies of the EU aspire to reduce chemical inputs in agriculture and promote organic farming [12], their potential could be of great significance. However, and despite the fact that PGPM-based products are often considered cost effective, environmental-friendly, and organic compliant [97], the complexity of their production process and their practicality [98] are disadvantages that have not yet been fully addressed and require further research.
PGPMs have also been proven to protect flax from pest infestations [99]. Even though several pests [100] and diseases [101] have been reported in flax, weeds pose arguably the greatest threat to the yield [102]. Flax is characterized as a poor weed competitor [103], mainly due to its slow growth rate during the early growth stages [104]. Based on the literature, severe weed infestations can significantly reduce the yields of both flaxseed and fiber flax, or even result in complete crop failure [105]. Chemical management constitutes the most well-adopted weed management practice; however, as the extensive use of herbicides has been correlated with the degradation of the environment [12], several alternative weed management strategies have been studied and/or proposed for flax. These strategies include the selection of more competing cultivars [102], altering the seeding rate or date [102,106], crop rotations [107], and the use of mulches [108]. These alternatives alongside the use of organic fertilizers and biopesticides [109] also enable flax to be grown organically. Notably, the performance of flax has been reportedly improved in organic systems under no-till regimes [29,107].
Flax in the Era of "Green" Policies
Before we investigate the compatibility of cultivating flax with the goals of the European Union (EU), we first need to define these goals. Recently, the EU has launched the Common Agricultural Policy (CAP) 2023-2027 [110], has committed to the United Nations Sustainable Development Goals (SDGs) [10], and aspires to implement the European Green Deal (EDG), and Farm to Fork (F2F) [12,13]. All these policies/initiatives set their own objectives, with most of them being consensually oriented or even overlapping (e.g., all of them emphasize the importance of sustainability). Here, we propose an easy and quick method to describe and organize their "common ground" by following the One Health (OH) paradigm. According to the World Health Organization, the OH can briefly be described as the effort to "optimize the health of people, animals and the environment" [111]. Therefore, we will evaluate flax based on how it benefits the environment, the well-being of humans, and the health of the livestock (Figure 2). Moreover, the financial potential of flax should also be considered. Afterall, flax is regarded as a cash crop that generates a good revenue in several parts of the world [8]. According to a recent report on the global flax market insights, the seed market that currently sits at over 400 million USD is expected to reach approximately 725.9 million USD by 2028 [6]. Likewise, the linen fabric industry is expected to slowly but steadily expand (the Compound annual growth rate is estimated to increase at least by 3% by 2029), and the EU is estimated to hold, if not the biggest, one of the biggest shares of the market [5]. One of Europe's (and the UNs') main objectives is the eradication of poverty [10]. A significant portion of unprivileged people are located in rural areas and often their main income derives from agricultural activities [120]. Therefore, it is crucial to provide farmers with alternatives that could improve their income, whilst tackling the degradation of the environment. At a farm level, the adoption of dual-purpose (DP) cultivars could increase the profits of farmers [121]. Even though the majority of the seed oil varieties are not suitable for fiber extraction, some varieties can sufficiently yield fiber as a by-product [122]. In some studies, the yields of such cultivars have been recorded to reach up to approximately 1000 and 2000 kg of fiber and oil, respectively, per ha [122]. Via the utilization of DP flax cultivars, it is possible to exploit the crop to its full financial potential. Shaikh et Most of the applications of flax on human and animal health have been discussed in the Section 2. The environmental benefits can be assessed based on the crop's carbon (CO 2 − eq emitted per kg of product) and water (m 3 required per kg of product) footprints. As elaborated above, flax is cultivated in several countries and areas, on soils with a wide range of different properties, and under different climatic conditions. As expected, the literature is filled with contrasting findings when estimating the CO 2 − eq per kg of flax fiber or seed. Case in point, according to Niels de Beus et al. [112], the carbon footprint of flax fiber is approximately 0.9 kg CO 2 − eq kg −1 , while Dissanayake et al. [113] reported it at over 11 kg CO 2 − eq kg −1 . Niels de Beus et al. [112] concluded that the fertilization is the most influential factor in the quantification of carbon footprint, thus the wide range of results is understandable. However, and despite of the inconclusive CO 2 − eq estimations in the literature, it is possible to compare the climate impact of flax with that of other fiber crops. Cotton is arguably regarded as the major natural fiber crop around the world [114], and in the EU, its fibers are widely used in the textile industry [115]. According to a report by Sadin and Ross [116], on average, the carbon footprint of cotton (0.5-4 kg CO 2 − eq per kg of fibers) could even be as much as four times higher than that of flax (0-0.8 kg CO 2 − eq per kg of fibers). Similarly, it has been reported that the average CO 2 − eq per kg of flaxseed oil is less than half that of sunflower oil [117]. Regarding the water footprint of flax fibers, according to a report by the Institute for Water Education (IWE) of UNESCO, producing 1 kg of them requires roughly three times less water, compared to cotton fibers [118]. Admittedly, the water footprint (m 3 of water per kg of product) of the flax seeds and flaxseed oil is not promising, as the IWE report found that it is 0.6-2 times higher than that of the three most important seed oil crops of the EU: rapeseed, sunflower, and soya (and their respective seed oils) [119]. However, it should be noted that this study was conducted based on global data during 1996-2005, and that the significantly higher water per seed kg of flax was mainly attributed to a significantly higher green water footprint. For instance, the green water footprint of flaxseed and rapeseed oil was estimated at 8618 and 3226 m 3 per seed tones (respectively), whilst the blue water footprint of the two oils was estimated at 488 and 438 m 3 per seed tones (respectively) [118]. Therefore, perhaps the findings of this report could be partially unindicative or outdated.
Moreover, the financial potential of flax should also be considered. Afterall, flax is regarded as a cash crop that generates a good revenue in several parts of the world [8].
According to a recent report on the global flax market insights, the seed market that currently sits at over 400 million USD is expected to reach approximately 725.9 million USD by 2028 [6]. Likewise, the linen fabric industry is expected to slowly but steadily expand (the Compound annual growth rate is estimated to increase at least by 3% by 2029), and the EU is estimated to hold, if not the biggest, one of the biggest shares of the market [5]. One of Europe's (and the UNs') main objectives is the eradication of poverty [10]. A significant portion of unprivileged people are located in rural areas and often their main income derives from agricultural activities [120]. Therefore, it is crucial to provide farmers with alternatives that could improve their income, whilst tackling the degradation of the environment. At a farm level, the adoption of dual-purpose (DP) cultivars could increase the profits of farmers [121]. Even though the majority of the seed oil varieties are not suitable for fiber extraction, some varieties can sufficiently yield fiber as a by-product [122]. In some studies, the yields of such cultivars have been recorded to reach up to approximately 1000 and 2000 kg of fiber and oil, respectively, per ha [122]. Via the utilization of DP flax cultivars, it is possible to exploit the crop to its full financial potential. Shaikh et al. [123] proposed that the woody straws of DP flax, which are essentially by-products of the scutching process, can be used in the production of low-cost paper. Similarly, according to Papadopoulos and Hague [124], flax shives can be used in the production of particleboards. Therefore, it is possible to reduce agro-waste materials that in many parts of the EU are disposed through burning (despite of the strict regulations that prohibit this practice [125]) whilst simultaneously providing additional revenue to farmers.
The EU has recognized the potential of flax. In a recent study by the European Parliamentary Research Service (EPRS) on the future of crop protection in the Union, the authors proposed flax as a resilient crop that can acclimatize and could be adopted by farmers all across the EU [126]. Notably, the authors regarded flax as an oil crop that could be introduced in lieu of other major crops that are more susceptible to biotic stress (pests and diseases). Interestingly enough, this is the definition of alternative crops (ACs), as it was proposed by Isleib [127]. In fact, in their study, EPRS mentioned flax as an AC. Provided that flax could indeed be considered as an AC in the EU, this would add more to its value as a crop. According to the literature ACs have been proposed as a versatile tool that could facilitate the implementation of the EGD and simultaneously enhance Food Security [14,128]. Nonetheless, the introduction (or re-introduction in the case of retroactive crops) of ACs has its limitations [129]. In most cases, ACs are characterized by limited information on the proper cultivation practices, lackluster market presence, and few available cultivars [129]. Of course, this is not the case with flax. As mentioned above, the flax market is expanding, the literature thrives with information regarding its cultivation, and there are more than 120 registered cultivars in the European common catalogue of varieties of agricultural plant species [130]. Moreover, due to the limited research focusing on ACs, they are usually excluded from policymaking [129]. However, this is hardly the case with flax. On the other hand, the unadaptable attitude of farmers could be an obstacle. Studies have found that farmers are often reluctant to adopt an AC, partially due to limited knowledge or information [131]. The perceptions and attitudes of farmers have also been found to correlate with their education and age [132]. However, the Commission has already included generational renewal in its action plans, aiming to attract young and educated farmers and entrepreneurs in rural areas [133].
Conclusions
Flax is a versatile crop that can acclimatize to several parts of the EU. It constitutes a source for a variety of industrial products while simultaneously having a high nutritional value that could strengthen any food system and enhance food security. The compounds in its seed oil can be used in medicine. It can be grown organically or under no-till regimes, and it can be grown with relatively low inputs depending on the soil and precipitation. Rather than being a "miracle crop", flax exemplifies the goals of recent EU agricultural policymaking. Further studies should be conducted for policymakers within the EU to utilize the crop to its full potential in future agri-environmental strategies. [5], FAOSTAT [8], and USDA [41,51].
Conflicts of Interest:
The authors declare no conflict of interest. | 6,281.8 | 2023-03-01T00:00:00.000 | [
"Economics"
] |
Fast Facial Detection by Depth Map Analysis
In order to obtain correct facial recognition results, one needs to adopt appropriate facial detection techniques. Moreover, the effects of facial detection are usually affected by the environmental conditions such as background, illumination, and complexity of objectives. In this paper, the proposed facial detection scheme, which is based on depth map analysis, aims to improve the effectiveness of facial detection and recognition under different environmental illumination conditions. The proposed procedures consist of scene depth determination, outline analysis, Haar-like classification, and related image processing operations. Since infrared light sources can be used to increase dark visibility, the active infrared visual images captured by a structured light sensory device such as Kinect will be less influenced by environmental lights. It benefits the accuracy of the facial detection. Therefore, the proposed system will detect the objective human and face firstly and obtain the relative position by structured light analysis. Next, the face can be determined by image processing operations. From the experimental results, it demonstrates that the proposed scheme not only improves facial detection under varying light conditions but also benefits facial recognition.
Introduction
The processes of digital image processing such as detection and recognition are similar to those of human vision.To enhance the effectiveness of digital image processing, numerous approaches focus on 3D image processing methodology especially depth map scan and related topics.To make closer interaction between human and device, the game console Wii released by Nintendo in 2006 had raised the studies on detections of pose, gesture, action and motion, and related topics.Further on 3D detection, the Kinect released by Microsoft is a motion sensing device as a game console for Xbox 360 and Windows PCs.By the Kinect, users just need to swing their hands, legs, or body and then can interactively control game role players.The new idea inspires numerous players and researchers to invest in 3D scanning, motion detection and interaction and related approaches.
The first success of Kinect is its depth map scan which let users easily determine the depth of every object from a screen.From the technical documents provided from the PrimeSense Ltd. [1], in the light coding solutions, the Kinect generates near-IR light to code the scene and then uses a standard off-the-shelf CMOS image sensor to read the coded light back from the scene.In which, the near-IR emitter diverges an infrared beam through a diverging lens and then the beam is projected on the surfaces in the form of uniform squares scattered as formed structured light planes.Then, the monochrome CMOS image sensor detects and recognizes the structured light map and then results in the depth map.Since near-infrared light is invisible and unaffected by ambient light, to diverge near-infrared light to detect distance is very suitable.
Besides, many similar studies [2][3][4][5][6][7] on structured light coding are proposed.In [2], Albitar et al. proposed a monochromatic pattern for a robust structured light coding which allows a high error rate characterized by an average Hamming distance higher than 6.Tong et al. present an up-to-date review and a new classification of the existing structured light techniques in [4].
Generally, to integrate two or more cameras/image sensors as a stereo vision device is a common technology for 3D capture.Another aspect, during these years numerous researchers focus on 3D scanning.Unlike 3D camera that collects color information about surfaces, 3D scanner collects depth/distance information about surfaces.These two aspects both provide useful information for stereo vision but also lack others [5].That is why Kinect [6,7] integrates an infrared projector and a monochrome CMOS camera as the depth sensor to collect distance information and adopts a RGB camera as the image sensor to collect color information for full 3D motion capture and facial recognition.
For advanced security, to detect and recognize biological characteristics such as fingerprint, face, voice, and iris, has become a commonly used technology.Among these biometric identification technologies, face identification is the most widely used.Since good recognition must follow good detection in face identification processes, how to detect the objective faces became a major topic, in which the depth map of image objects will be an important factor because if the object is far away from the camera then its image in size will be smaller than original without zooming.It means that if the depth map and the 2D image are considered simultaneously, then the facial detection and recognition will become easy.
Nowadays, there are numerous approaches on facial detection and recognition.The common technologies of facial recognition consist of Eigenface, Fisherface, waveletface, EGM (Elastic Graph Matching), PCA (Principal Component Analysis), LDA (Linear Discriminant Analysis), Haar wavelet transform, and so on.It is worthy noted that most approaches develop the theory and algorithms on 2D image processing.There are many approaches [6][7][8][9][10][11] that focus on 3D integrated face reconstruction and recognition, in which the depth map becomes as an important factor.For instance, in the approach [8], Burgin et al. extend the classic Viola-Jones face detection algorithm [9] which considers depth and color information simultaneously while detecting faces in an image.The studies proposed by Rodrigues et al. in [10] discuss an efficient 2D to 3D facial reconstruction and recognition scheme.
In this paper, the human facial features of configuration and movement are estimated by using Haar wavelet transform.The features will be considered as patterns for facial detection and recognition.To determine skin color range, geometric relationships of features, and eigenfaces as patterns, the system will conclude the facial appearance and features to the most similar pattern.Then, the interface will show the meaning of the facial expression.
Structured Light Based Depth Map Analysis
In Kinect system, there are two speckle patterns that could appear on the camera: one is the primary speckle coming from the diffuser (projected on the object) and the other is the secondary speckle formed during the imaging due to the lens aperture and object material roughness.It only concentrates in primary speckle.The primary speckle pattern, produced by the diffuser and diffractive optical element (DOE) and then projected on the surface, varies with -axis, in which the PrimeSense Ltd. calls the speckle pattern structured light.Based on the extended depth of field (EDof), DOE is the embodiment of astigmatic optical element, which has different focus for different angle direction.Besides, DOE is designed to reduce the divergence angle so that light intensity would vary slower with distance.
Figure 1(a) shows the speckle pattern projected when the distance between the surface and the device is 1.60 m; Figure 1(b) displays the speckle pattern projected in a dark room from the distance of 0.5 m.Moreover, in order to test how the speckle pattern is generated by an infrared light source, one can modify the webcam: LifeCam Cinema (Microsoft, as shown in Figure 2(a)) by removing the filter of infrared light.As shown in Figures 2(b) and 2(c), there indicate the speckle patterns of the projection surfaces captured by the modified webcam where the infrared light is emitted from a remote control and the Kinect respectively.Notice that the infrared light source of the Kinect, emits a pyramidal speckle pattern.
In this paper, the detectable range of objects is the distances from 60 cm to 10 m and in front of the Kinect.By integrating a Kinect and its depth map analysis, the proposed system aims to improve the effectiveness of facial detection.The diagram of proposed scheme is as shown in Figure 3.At first, the Kinect emits a near-IR light beam which is then projected on surfaces and forms a speckle pattern.The speckle pattern is captured as a grayscale image by a monochrome CMOS camera of the Kinect.The grayscale image is then processed by histogram thresholding and transferred to a binarized image containing the contours of objectives.Next, by using median filter operation, the minor image blocks will be eliminated.Finally, after the following steps of edge detection and ellipse detection, the objective facial blocks will be determined.Facial image preprocessing is an important aspect of the facial detection and facial recognition.The images are sensitive to ambient conditions such as the brightness of ambient light, resolution and characteristics of the image device, and signal noises.There will be noise, distortion, low contrast, and other defects occurring during facial detection processes.In addition, the captured distance and direction of objectives, focal size, and so forth might make face blocks be with different sizes and locations in the image.
To ensure that all the objective faces in the image can be detected with consistent features such as size, aspect, contour, and position of facial blocks, it needs to do suitable image preprocessing.The common image preprocessing methods include facial position correction (rotation, cropping, and scaling), facial image enhancements, geometric normalization, grayscale normalization, and so forth, in which in order to get good facial recognition must follow to get upright positions of facial images; the facial image enhancement is to improve the facial images and then results in clearer images and the images in uniform size and conditions are more conducive to the image processing for facial detection and recognition.The necessary image preprocessing of facial detection will be discussed below.
Light Coding.
A typical structured light measurement method is to project a known light pattern into the 3D scene viewed by camera(s) and/or by means of the triangle measurement and geometry relations computation and then can determine the contours of objective surfaces.Similarly, the PrimeSense Ltd. calls the above technology in Kinect "light coding, " which means the speckle spots on the projection are able to be coding to represent the depths of surfaces.That is, the objects will be marked in the same light code because of their similar depth through structured light measurement and determination.Such processing results in a depth map as shown in Figure 4.It is noted that the original grayscale images have been transferred into yellow grayscale ones in this paper in order to display more significantly.
Histogram Thresholding and Median
Filter.There are many noises or unexpected blocks in the grayscale image and need to be removed.Firstly, the image can be in binarization by histogram thresholding operation.Then, the noises in the binary image can be filtered by median filter operation.After these two steps of image preprocessing as shown in Figure 5, the objective facial blocks will be split from the original image successfully.However, the threshold of binarization is difficultly decided by a constant.From Figures 6(a The estimated thresholds by (1) will become 209, 214, 219, and 224.
Edge Detection and Ellipse Detection.
The resultant image after median filter shows the objective block including face and part of body.To execute gradient computation for edge detection (Figure 7(a)) and then to match the block to an ellipse model in axis ratio of 1.2 (a common face) for ellipse detection (Figure 7(b)), the objective facial block is determined as shown in Figure 7(c).
The Advantages of Adopting Structured Light Analysis.
The facial detection based on structured light analysis starts from the grayscale image which is a monochrome image of the speckle pattern.Besides, the foundation processes of the speckle pattern are almost unaffected from ambient light which results in more reliable detection.It benefits the objective detection be superior to being influenced in dusky or bright or inconstant illumination.Moreover, the computations in such monochrome way also cost lower than those while dealing with color image detection.Thus, the proposed scheme is suitably adopted for fast facial detection facing different even bad illumination conditions.
Haar-Like Facial Detection
The concept of Haar-like features was firstly proposed by Papageorgiou et al. in 1998 [12] and then widely used in object recognition [12][13][14][15].They intended to adopt Haar wavelet transfer algorithms to deal with the facial detection of upright faces but found there were certain limitations existing in the application.In order to obtain the best spatial resolution, they proposed 3 kinds and 3 types of characteristics.In [13], Viola and Jones have made an expansion based on these foundations, who propose 2 kinds and 4 types of characteristics defined as 3-rectangle features and 4-rectangle features.
The rapid object detection based on Haar-like features [11,13] is proposed by Viola and Jones in which there are three characteristics as follows: (1) the use of integral images achieves the fast characteristic computation; (2) constructing a classifier by the method of AdaBoost [14] to collect few important characteristics; (3) Using a boosted cascade of simple features, it enhances the detection by focusing on useful features.
In the studies in [13], Viola and Jones proposed the concept of integral images and the theory based on the AdaBoost real-time facial detection.They construct an upright facial classifier which is based on 200 characteristics concluded after classifying 4,916 artificial faces in the size of 24 × 24 and 3,500,000 inhuman faces.From these two examples of rectangular characteristic model, the AdaBoost facial classifier can achieve 95% detection rate; moreover in 14804 inhuman face examinations, the proposed scheme achieved 100% false positive rate.To adopt a boosted cascade of classifiers, it improves the effectiveness of facial detection and reduces the computation time because the inhuman faces will be passed in real-time human facial detection.3.1.Integral Image.Because there are usually more than ten thousand training samples in a rectangular image to represent the features, for instance, if one needs to count the total of pixels in any rectangle, the computation will be huge and time-consuming.The concept of integral image is to count the sum of features in the rectangle, which is then defined as the new image value of respective pixel.For instance, in Figure 8, the value 1 represents the total of the pixels in the rectangular A as a feature and the values 2 , 3 , and 4 indicate the total of the pixels in After a classifier is trained, it can be applied to a region of interest (of the same size as used during the training) in an input image.The classifier outputs a "1" if the region is likely to show the face and "0" otherwise.To search for the object in the whole image one can move the search window across the image and check every location using the classifier.The classifier is designed so that it can be easily "resized" in order to be able to find the objects of interest at different sizes, which is more efficient than resizing the image itself.So, to find an object of an unknown size in the image the scan procedure should be done several times at different scales.
Haar-Like
Feature-Based Cascade Classifier.The cascade in the classifier means that the resultant classifier consists of several simpler classifiers (stages) that are applied subsequently to a region of interest until at some stage the candidate is rejected or all the stages are passed.The word "boosted" means that the classifiers at every stage of the cascade are complex themselves and they are built out of basic classifiers using one of four different boosting techniques (weighed voting).The basic classifiers are decision tree classifiers with at least 2 leaves.
The feature used in a particular classifier is specified by its shape, position within the region of interest and the scale (this scale is not the same as the scale used at the detection stage, though these two scales are multiplied).For example, in the case of the third line feature the response is calculated as the difference between the sum of image pixels under the rectangle covering the whole feature (including the two white stripes and the black stripe in the middle) and the sum of the image pixels under the black stripe multiplied by 3 in order to compensate for the differences in the size of areas.The sums of pixel values over a rectangular region are calculated rapidly using integral images.
Results and Discussion
The detection experiments are executed by using a Kinect, the depth sensing device produced by Microsoft Corp.It emits invisible infrared light beams through a diffuser to be scattered on detected surface.The speckles projected on the surface are detected by the CMOS camera of Kinect as a depth image which could display 3D scene and be used to determine 3D poses and motions.
Figure 11 displays three cases of real facial detections.Each subfigure contains 8 experimental results, from top to bottom and left to right, and there are the speckle pattern, the histogram, the binarized image after thresholding, the image after median filter, the image after edge detection, the facial block after ellipse detection, the image after Haar-like facial detection, and the resultant image after skin segmentation.Even in complex background, the experimental results still demonstrate the feasibility of proposed scheme.From the data after over 5,000 pattern (inhuman face included) tests, the average of successful facial detection rate is 95.3%.If only counts the human face tests, the success rate will be reduced into 85.7%.It is necessary to recover in skin segmentation.
Conclusion
The proposed scheme consists of three subsystems, the first part is the structured light based depth sensing system, the second is the depth map analysis system, and the third is the Haar-like feature based cascade classifier.The structured light device provides the depth maps and helps the system to detect the human face by proposed fast facial detection.The Haar-like feature-based cascade classifier then makes good and fast facial detection.The proposed facial detection scheme based on depth map analysis is proven to obtain Mathematical Problems in Engineering better effectiveness of facial detection and recognition under different environmental illumination conditions.From the experimental results, even in complex background, it still demonstrates the feasibility of proposed scheme.
Figure 1 :
Figure 1: Test of the speckle patterns projected from different distances.
Figure 2 :Figure 3 :
Figure 2: The test of speckle patterns emitting from different infrared light sources.
Figure 4 :
Figure 4: The depth maps determined after light coding.
Figure 5 :
Figure 5: The image preprocessing by filters.
) to 6(d), it is seen that there need to be different thresholds in different depths.One can observe these figures and if to look at the point of 0.5% height in histogram, the possible thresholds 209, 215, 219, and 220 in the depths 0.6 m, 0.8 m, 1.0 m, and 1.2 m can be approximated by the formula of their respective depth as the follows: Threshold = 219 + (depth − 1) * 2.5.(1) (d) The facial image and threshold 220 of depth 1.2 m
Figure 6 :
Figure 6: Different thresholds need to be given for different depth maps.
Figure 7 :
Figure 7: The image processing for facial detection.
Figure 8 :
Figure 8: The diagram of determining an integral image.
Figure 9 :
Figure 9: The concepts of the Haar-like rectangle features.
Figure 11 :
Figure 11: The experimental results of real facial detection based on Haar-like features. | 4,305 | 2013-11-14T00:00:00.000 | [
"Computer Science"
] |
Orthographic familiarity, phonological legality and number of orthographic neighbours affect the onset of ERP lexical effects
Background It has been suggested that the variability among studies in the onset of lexical effects may be due to a series of methodological differences. In this study we investigated the role of orthographic familiarity, phonological legality and number of orthographic neighbours of words in determining the onset of word/non-word discriminative responses. Methods ERPs were recorded from 128 sites in 16 Italian University students engaged in a lexical decision task. Stimuli were 100 words, 100 quasi-words (obtained by the replacement of a single letter), 100 pseudo-words (non-derived) and 100 illegal letter strings. All stimuli were balanced for length; words and quasi-words were also balanced for frequency of use, domain of semantic category and imageability. SwLORETA source reconstruction was performed on ERP difference waves of interest. Results Overall, the data provided evidence that the latency of lexical effects (word/non-word discrimination) varied as a function of the number of a word's orthographic neighbours, being shorter to non-derived than to derived pseudo-words. This suggests some caveats about the use in lexical decision paradigms of quasi-words obtained by transposing or replacing only 1 or 2 letters. Our findings also showed that the left-occipito/temporal area, reflecting the activity of the left fusiform gyrus (BA37) of the temporal lobe, was affected by the visual familiarity of words, thus explaining its lexical sensitivity (word vs. non-word discrimination). The temporo-parietal area was markedly sensitive to phonological legality exhibiting a clear-cut discriminative response between illegal and legal strings as early as 250 ms of latency. Conclusion The onset of lexical effects in a lexical decision paradigm depends on a series of factors, including orthographic familiarity, degree of global lexical activity, and phonologic legality of non-words.
Background
Since the early 80s, one major topic of investigation has been into the exact time the brain takes to access the lexical properties and conceptual meaning of a word, after it has been presented visually or acoustically [1][2][3]. A lively debate has developed since then [4][5][6] about the timing of semantic processes, which now seem to be much earlier (150 ms) than previously conceived (about N400 ms), and to occur in parallel (rather than in sequence) with other types of speech/sentence processing (i.e. ortho-graphic/phonological analysis, first and second order syntactic analysis, pragmatic analysis).
This wide variability seems to depend heavily on methodological factors [6,24] such as differences among studies in experimental parameters (e.g. word luminance, length, duration, frequency of use, semantic category or domain, grammatical class, repetition rate, familiarity, abstractness, ISI, SOA) and task modalities (lexical decision, orthographic or phonetic decision, semantic priming, SRVP, terminal word paradigm, etc.). The degree of fluency and age of acquisition of a language for a multilingual speaker [25,26], and even the number of languages known, are also very important in determining the speed of semantic processing. For example, a linear relationship has been demonstrated between response times to semantically congruent words in simultaneous interpreters engaged in a simple semantic task in their native language (judging the degree of semantic integration between a sentence and its terminal word) and the number of languages mastered by them: the response slows as the number of languages mastered increases from 3 to 5-6 [27]. Consistently, another study [28] found that the N1 and N400 components to semantically incongruous words had slower latencies in simultaneous interpreters (mastering up to 5-8 languages) than in age-matched monolingual controls. Therefore it seems that semantic processing relies on systems with limited capacity, and the speed of processing may depend on multiple factors such as those previously reported. One obvious factor in the inconsistency among studies is the inter-study variability in signal-to-noise ratio for ERP averages: in some studies, ERP waveforms are so noisy that the first reliable component showing stimulus-related effects necessarily becomes the largest in amplitude and most resistant to noise (N400), the late latency of which is thereafter considered the onset of semantic processing.
One further factor that might affect the temporal onset of the first semantic effect in lexical decision tasks based on word/non-word recognition is the orthographic similarity between words and non-words, that is the number of orthographic neighbours of pseudo-words [29,30]. Indeed, the decision processes that lead to the determina-tion of whether a given item exists may demand more effort when a pseudo-word is orthographically quite similar to a real word. In some studies the procedure adopted to generate legal pseudo-words consists in changing one single letter in each element of a set of real words, or by transposing 1-2 letters [31]. The pseudo-words thus obtained (although meaningless) are very similar in form to words at both the orthographic and phonological levels. Interestingly, a recent ERP study [32] involving a lexical decision task (word/non-word discrimination) demonstrated that responses to pseudo-words that were perceptually similar to words, obtained by transposing two letters, were 118 ms slower than responses to less word-like pseudo-words (created by replacing those two letters). Furthermore, the transposed-letter pseudo-words activated their corresponding base words to a considerable degree, as shown by a substantial false alarm rate. As for the ERP data, the N400 component (300-500 ms) was larger to less "word-like" stimuli than to transposed-letter pseudo-words, which were treated almost as words, whereas in a second latency range (500-680 ms) this effect was reversed -transposed-letter pseudo-words were fully recognized as meaningless.
It has been shown [30] that reaction times to non-words are longer when these stimuli have many word neighbours. According to Grainger and Jacobs, non-words with many neighbours (some of which are words) generate high levels of global lexical activity through the activation of word neighbour representations. This high global lexical activity prolongs the processing time needed to determine the level of semantic denotation of a string and therefore results in slower correct 'no' responses to nonwords with many neighbours. It has been consistently shown [33] that, when the pseudo-words are created by replacing one internal letter of a base word, high-frequency pseudo-words yield slower latencies than low-frequency pseudo-words in lexical decision tasks.
Braun and colleagues [34] recently investigated the role of non-word orthographic neighbours by comparing ERP responses to 300 words and 300 non-words obtained by replacing 1, 2, 3 or 4 letters from a set of 3000 real ones. They expected a systematically graded variation in the ERP, in particular of the N400 amplitude, in response to non-words. The results from a lexical decision task provide evidence for an overall effect of lexicality (word vs. pseudo-word distinction between 300 and 390 ms, and a graded effect of global lexical activity for non-words between 450 and 550 ms post-stimulus). The data are interpreted as reflecting two different decision processes: an identification process based on local lexical activity underlying the 'yes' response to words, and a temporal deadline process underlying the 'no' response to nonwords based on global lexical activity.
As for the acoustic phonetic modality, an interesting ERP study [35] presented spoken words and pseudo-word variants that differed only in their medial consonants. For each pseudo-word, one phoneme was replaced with a new one, which either had a coronal (dental or nasal /d/, /t/, / n/) or a non-coronal (labial: /b/, /p/, /m/; dorsal /g/, /k/) place of occlusion. ERPs were not time-locked to stimulus onset but to deviation points. They found a marked difference in the latency of lexical effects according to the type of replacing phoneme (coronal or non-coronal). In particular, while ERPs for non-coronal variants did not differ from their base words in the initial part of the N400 (100-250 ms), the mean amplitudes for coronal pseudo-word variants were more negative than the mean amplitudes for their non-coronal base words, thus showing an early lexical effect.
The aim of the present study was to investigate further the neural mechanism subserving reading and the time course of lexical processing by comparing the bioelectrical activities elicited by letter strings with various degrees of semantic denotation (inducing a graded level of global lexical activity) and orthographic legality. For this purpose, 400 words, quasi-words (non-words with many neighbours obtained by replacing one letter), non-derived pseudo-words (non-words with few orthographic neighbours) and illegal letter strings were presented. We expected to find: (i) an effect of orthographic legality and word visual familiarity by comparing ERPs to legal pseudo-words and to illegal letter strings; (ii) a graded effect of non-word orthographic neighbours on the amplitude and latency of ERP responses, thus shedding some light on the timing of lexical processes.
Participants
Sixteen Italian University students (8 men and 8 women) volunteered for the study. Their ages ranged from 20 to 25 years (mean = 23; SD = 1.73). All had good or correctedto-normal vision and right hand and ocular dominance, as attested by the Italian version of the Oldfield inventory [36]. They were all healthy and reported that they had never suffered from neurological or psychiatric diseases. Experiments were conducted with the understanding and the written consent of each participant and in accordance with ethical standards (Helsinki, 1964). The subjects earned academic credits for their participation. Four participants were excluded from the statistical analyses because of excessive EEG and EOG artefacts.
Procedure
Stimuli consisted of 400 letter-strings including 100 Italian words, 100 legal derived pseudo-words, 100 nonderived pseudo-words, and 100 illegal letter strings. They were blue on a white background, typed in capital letters and Times New Roman font.
Derived pseudo-words were obtained by changing one single letter in an existing lemma (e.g. Banana -> Barana), whereas non-derived pseudo-words were created de novo and had no orthographic neighbours (see Table 1).
Stimuli were randomly presented at the central visual field for 200 ms with an ISI varying between 1650 and 1850 ms (see Figure 1). Stimuli were 1 cm in height (30'10" of visual angle) and their length ranged from 4 to 9 cm (from 2°1'41" to 4°32'32").
They were balanced for length, ranging from 4 to 8 letters (words = 6.08; SD = 1.38; pseudo-words = 6.15; DS = 1.34; quasi-words = 6.15; SD = 1.35; letter strings = 6.12; SD = 1.36). Overall, words and quasi-words (that is, the original lemmas used to generate them) were familiar and had good imageability values (half were names of animals and the other half of vegetables). Letter strings included both vocals (V) and consonants (C). The relative proportion of vocals and consonants was similar across lexical classes (e.g., 3V, 4C for a 7 letter word). The repetitive insertion of consonants not very frequent in the Italian orthography (e.g., Q, Z, X, Y, W) was also avoided. Apart from that, LS were unpronounceable and illegal, for example they did not always end in a vowel, as instead required by Italian orthographic rules.
Words and quasi-words (that is, the original lemmas used to generate them) were balanced in frequency of use according to a online database [37]. In detail, words had a mean frequency value of 22.11 (SD = 33.67); words used to generate quasi-words had a mean frequency value of 20.51 (SD = 34.31); again, for quasi-words, half were names derived from animals and the other half from vegetables. Words, quasi-words and pseudo-words were regularly pronounceable, whereas letter strings were phonologically illegal.
Participants sat comfortably in a darkened, acoustically and electrically shielded box in front of a computer screen located 114 cm from their eyes. They were instructed to fixate a little cross located at the centre of the screen and avoid any eye or body movements during the recording session.
The task was a lexical decision task (word/non-word). Subjects had to press a button with the index finger (of the left or right hand) in response to words, and with the middle finger in response to non-words, as accurately and rapidly as possible. The two hands were used alternately during the recording session, and the hand and sequence order were counterbalanced across subjects.
EEG recording and analysis
The EEG was continuously recorded from 128 scalp sites (see Figure 2 for the complete electrode montage) at a sampling rate of 512 Hz. Horizontal and vertical eye movements were also recorded. Linked ears served as the reference lead. The EEG and electro-oculogram (EOG) were amplified with a half-amplitude band pass of 0.016-100 Hz. Electrode impedance was kept below 5 kΩ. EEG epochs were synchronized with the onset of stimulus presentation and analyzed using ANT-EEProbe software. Computerized artefact rejection was performed before averaging to discard epochs in which eye movements, blinks, excessive muscle potentials or amplifier blocking occurred. EEG epochs associated with an incorrect behavioural response were also excluded. The artefact rejection criterion was a peak-to-peak amplitude exceeding 50 μV, and the rejection rate was ~5%. ERPs were averaged offline from -100 ms before to 1000 ms after stimulus onset.
Response times exceeding mean ± 2 standard deviations were excluded. Hit and miss percentages were also collected and arc sin transformed in order to be statistically analyzed. Behavioural (both response speed and accuracy data) and ERP data were subjected to multifactorial repeated-measures ANOVA. The factors were "lexical class" (words, quasi-words, pseudo-words, letter strings) and "response hand" (left, right) for RT data, and additionally "electrode" (dependent on ERP component of interest) and "hemisphere" (left, right) for ERP data. Multiple comparisons of means were done by post-hoc Tukey tests.
Topographical voltage maps of ERPs were made by plotting colour-coded isopotentials obtained by interpolating Scheme of the 128 channels electrode montage voltage values between scalp electrodes at specific latencies. Low Resolution Electromagnetic Tomography (LORETA [38] was performed on ERP difference waves at various time latencies using ASA3 and ASA4 software. LORETA, which is a discrete linear solution to the inverse EEG problem, corresponds to the 3D distribution of neuronal electric activity that has maximum similarity (i.e. maximum synchronization), in terms of orientation and strength, between neighbouring neuronal populations (represented by adjacent voxels). In this study an improved version of Standardized Low-Resolution brain Electromagnetic Tomography (sLORETA) was used that incorporates a singular value decomposition-based lead field weighting: swLORETA [38,39]. Source space properties were: grid spacing = 5 mm; Tikhonov regularization: estimated SNR = 3.
The mean amplitude of temporal P2/N3 and P3 components was measured at centro-parietal (CP5, CP6) and temporo/parietal (TTP7, TTP8h) sites between 250 and 350 ms, and between 380 and 460 ms, respectively. The mean amplitude of occipito/temporal N3 was measured at lateral occipital (PO9, PO10) and posterior temporal sites (P9, P10) between 345 and 395 ms. The mean amplitude of N400 response was measured at the same sites between 400 and 600 ms. This ANOVA was performed on ERP responses to legal strings (words, quasi-words, pseudo-words).
P3 peak latency and peak amplitude were measured at CP5, CP6 sites between 380 and 730 ms post-stimulus. Measurements in the ascending phase of P3 component (mean amplitude value in the 380-460 ms time window) were performed to emphasize the quite early P3 response to letter strings.
In order to focus the analyses on the mechanisms supporting lexical processing and to explore the graded effect of global lexical activity for the three categories of legal strings, further ANOVAs were performed on anterior components, with three levels of variability for "lexical class factor" (words, quasi-words, pseudo-words). Anterior and central components were measured as follows: N2 mean amplitude between 200 and 250 ms at the FFC1h, FFC2h, FFC3h, FFC4h electrode sites. Late negative deflection lexical processing negativity (LPN) mean amplitude was measured between 250 and 340 ms at the AFF1, AFF2, AFp3h, AFp4h electrode sites. This components has been described by King and Kutas [14] as an anterior negativity, ranging from about 280 to 385 ms of latency, and being very sensitive to the frequency of occurrence of words.
P3 component mean amplitude was measured between 340 and 400 ms at the AFF1, AFF2, AFp3h, AFp4h electrode sites. P/N400 mean amplitude was measured between 400 and 600 ms at the CCP5h, CCP6h, CPP5h, CPP6h sites whereas P600 mean amplitude was measured between 600 and 800 ms at the same electrode sites.
ANOVA on the RTs revealed the effect of lexical class (F3,33 = 8.0639; p < 0.001; eta2 = 0.423; F-crit = 2.891), showing that RTs were most rapid to letter strings and slowest to quasi-words (W = 554; QW = 622; PS = 567; LS = 532 ms). Post-hoc comparisons showed that response times were slower in response to quasi-words than to any other stimulus type (p < 0.01), while they tended to be faster to letter strings than pseudo-words (p = 0.07), probably reflecting task difficulty. Response hand had no effect on behavioural data.
Electrophysiological data
Posterior components Occipito/temporal N3 (345-395 ms) Figure 3 shows the grand-average ERP waveforms recorded at posterior sites in response to the various stimulus types. . Post-hoc comparisons indicated a significant difference between words and pseudo-words (p < 0.05), no difference between words and quasi-words, and a marked difference between legal (words, quasiwords and pseudo-words) and illegal strings (p < 0.001).
The interaction lexical class × electrode × hemisphere (F3,33 = 3.71; p < 0.021; eta2 = 0.252; F-crit = 2.891) showed larger lexical effects at left than at right electrode sites, and a significant difference between N3 to words and quasi-words at the occipito/temporal (p < 0.001) but not the lateral occipital site. Overall, the effects of orthographical well-formedness and legality were larger at the former than the latter, as illustrated by the mean N3 values plotted in Figure 4.
P3 peak latency
The latency of the late positive component (P3) was strongly modulated by lexical class (F3,33 = 37.8; p < 0.001; eta2 = 0.774; F-crit = 2.891). Post-hoc comparisons showed shorter latencies in response to letter strings (484 ms) than to words (570 ms) or pseudo-words (588 ms; p < 0.001), and to the former than quasi-words (680 ms; p < 0.001), thus perfectly recalling the gradient shown by behavioural data. Grand-average ERP waveforms recorded at left and right ventral lateral occipital (P9, P10) and occipito/temporal (PO9, PO10) sites in response to words, derived non-words (Quasi-w.), pseudo-words (Pseudo-w.) and letter strings (Letter-str.) Figure 3 Figure 6 shows grand-average ERP waveforms recorded at fronto-central sites in response to the various stimulus types. In the first temporal window considered, corresponding to the rising phase of anterior N2, the significant "lexical category × hemisphere" interaction (F2,22 = 5.39; p < 0.012; eta2 = 0.329; F-crit = 3.443) showed a larger negative response to pseudo-words than to words or quasi-words, with no difference between the two former classes of stimuli. The lexical effect was more consistent over the left hemisphere (LH: W = 0.82; QW = 0.93; PS = 0.08 μV; RH: W = 0.98; QW = 0.84; PS = 0.33 μV). This early negativity was larger at more medial (FFC1h-FFC2h = 0.57 μV) than lateral sites (FFC3h-FFC4h = 0.76 μV), as shown by electrode factor (F1,11 = 11.89; p < 0.005; eta2 = 0.519; F-crit = 4.844). Figure 7 shows a comparison of lexical effects as a function of the time-course of processing. Late latency potentials P/N400 (400-600 ms) Figure 9 shows grand-average ERP waveforms recorded at centro-parietal sites in response to the various stimulus types. In this time window, the lexical factor (F2,22 = 24.98; p < 0.001; eta2 = 0.69; F-crit = 3.443) showed a larger negativity to quasi-words than pseudo-words, and a larger positivity to words than pseudo-words (W = 5.52; QW = 1.70; PS = 2.96 μV), thus suggesting that this centroparietal component is sensitive to subjective expectancy and semantic violation. At both electrode sites, P600 was larger to words than to either type of non-word (p < 0.001), and to quasi-words than pseudo-words (p < 0.001), as shown by post-hoc comparisons.
The role of orthographical well-formedness and visual familiarity in reading
Overall, it seems that while the left occipito/temporal area is sensitive to word visual familiarity, the temporo/parietal area is more sensitive to phonological legality. This anatomical and functional dissociation was reflected by the following. (1) There was a lack of discriminatory N3 response between real words and quasi-words, depending on their global visual resemblance to words at left occipital area. This finding suggests the existence of a visual input lexicon, which would store the visual form of known words, allow direct access to the lexicon through a visual route and show early effects of word familiarity (e.g. [6,21]). According to the dual route model of reading, damage to it would result in reading disorders such as socalled surface dyslexia [40]. (2) The ERP data also showed a gradient of lexical activation for N3 at the left occipito/ temporal site in response to words with different numbers Grand-average ERP waveforms recorded at left and right fronto-central mesial and lateral sites in response to the vari-ous stimulus types Figure 6 Grand-average ERP waveforms recorded at left and right fronto-central mesial and lateral sites in response to the various stimulus types. The early clearcut distinction between non-derived pseudo-words and word-like stimuli (words and quasi-words) between 200 and 250 ms in the ascending early phase of LPN is visible.
of orthographic neighbours. This finding is consistent with recent data supporting the evidence that VWFA, besides being strongly sensitive to orthographic stimulus properties [41][42][43][44][45], might be also sensitive to word frequency [46]. (3) At superior temporal sites, ERP showed a clear-cut discriminative response between legal and illegal strings, which was insensitive to the lexical content, probably suggesting difficulty in accessing the phonological forms of illegal strings. It might be suggested that this surface potential corresponds to intracranial generators responsible for the fast mapping between orthographic and phonological representations.
In order to locate the possible neural source of this effect, a swLORETA source reconstruction was performed on the difference-wave obtained by subtracting ERPs to pseudowords from those elicited by letter-strings in the time window corresponding to the temporo/parietal P2/N3 (300-350 ms). The inverse solution showed that the processing of phonologically illegal strings was significantly associated with stronger activity in a series of left and right hemispheric regions, listed in Table 2, including the left angular gyrus (BA 39) and the left pre-central and postcentral area. As well known, the angular gyrus is thought to play a crucial a role in phonological processing [47] and especially in grapheme to phoneme conversion [48,49]. In this context, it is possible that the so called 'dorsal phonological area', including the suvramarginal gyrus (BA 40), might become more active during reading of hardly readable material such as illegal letter strings.
The P3 amplitude reflected a much faster identification of non-words when they were also ill-formed and illegal. The lexical effect resulted in a larger P3 component to words than non-words. The smaller and later P3 to quasi-words than to pseudo-words probably reflected the difficulty of rejecting as non-words items that induced a stronger global lexical activity than non-derived pseudo-words, this depending on the higher number of orthographic neighbours. This hypothesis is supported by behavioural data showing faster RTs to letter strings than pseudo-words and to pseudo-words than quasi-words. This pattern of results agrees with the finding that reaction times to non-words are longer when these stimuli have many word neighbours [30].
The timing of lexical processing
At posterior sites, over the left occipito/temporal area, the N3 response (345-395 ms) showed a gradient of activation with the highest response for the more familiar words and the lowest response for the less familiar word-like cluster of letters. This finding suggests an effect of visual familiarity of words as unitary visual objects. The relatively late onset of the lexical effect, compared to some recent literature [4,18,19,21,22], is very probably due to the mixed presentation of words and non-words with quasi-words that are very difficult to discriminate on the basis of visual appearance, since they were obtained by replacing just a single letter. In contrast, our data show that lexical effects may be very much delayed by the use of non-derived non-words with many orthographic neighbours [30,32,33]. In this regard, an important role in determining the onset of lexical effects is also played by the specific task modalities: for example, letter or phoneme detection (as in [20,50]) requiring focussed selective attention on the physical characteristics of the stimulus seems to expedite linguistic processing compared for example to a higher order task such as lexical decision, which was used in the present study and in others [34]. In addition, word length is a quite crucial factor in determining an earlier lexical onset for short (4-6 letters) vs. longer (7-9 letters) items [21].
Analysis of the anterior N2, LPN and P3 components suggests a dynamic analysis of word feature characteristics, which could be summarized as follows: at about 200-250 ms over the left fronto-central area, pseudo-words were discriminated from more word-like stimuli, resulting in a greater anterior negativity to pseudo-words as the earliest lexical effect. In the next latency range, at about 250-340 ms, the anterior frontal area showed a lexical gradient in the form of a lexical processing negativity that was very sensitive to word lexical properties and the number of orthographic neighbours. This effect might be conceived as a stage corresponding to the extraction (retrieval) of word semantic representations reflecting the global lexical activity of each item. At about 340-400 ms post-stimulus, the main stimulus property analyzed was word lexical representation: items lacking a sufficient level of lexical activation were therefore rejected as non-words. Indeed, P3 distinguished sharply between meaningful and meaningless stimuli, with no lexical gradient depending on wellformedness, legality or number of orthographic neighbours.
The (late) lexical effects obtained in the present study were still earlier than those reported by Braun and colleagues [34]. These authors found a graded effect of non-word neighbours at about 500 ms post-stimulus, while the pure effect of lexicality was found at about 350 ms post-stimulus. This dissociation led the authors to interpret the data as reflecting two different decision processes: a faster iden-Grand-average ERP waveforms recorded at left and right anterior and posterior centro-parietal sites in response to the various stimulus types Figure 9 Grand-average ERP waveforms recorded at left and right anterior and posterior centro-parietal sites in response to the various stimulus types. The arrows indicate the larger N400 response to derived quasi-words, probably suggesting a violation of subjective expectancy.
Grand-average ERP waveforms recorded at left and right anterior frontal (AFp3h, AFp4h) and pre-frontal (AFF1, AFF2) sites in response to the various stimulus types Figure 8 Grand-average ERP waveforms recorded at left and right anterior frontal (AFp3h, AFp4h) and pre-frontal (AFF1, AFF2) sites in response to the various stimulus types. A graded lexical effect for LPN component is notable, depending on the density of orthographic neighbours of the stimulus, and there is a later clear-cut discriminative effect between words and non-words.
tification process based on local lexical activity underlying the 'yes' response to words, and a slower temporal deadline process underlying the 'no' response to non-words based on global lexical activity. It should be considered that in their study the RTs were long, ranging from about 650 to 800 ms, whereas in the present experiment the response times did not exceed 620 ms. For this reason we found no time-delayed global lexical activity effects. On the contrary, the data suggest that orthographic, phonological and lexical word properties were processed in parallel between 200 and 400 ms post-stimulus. The first evidence that quasi-words benefited by their word-like visual form (thus leading to potentials of comparable amplitude between words and quasi-words) was observable at 200-250 ms at left front-central sites, while posteriorly, at about 350 ms, the left lateral-occipital region failed to discriminate them from words. In the same latency range, the nearby occipito/temporal area provided evidence of a marked discriminative response, with significantly enhanced amplitudes to words than quasi-words. In order to locate the possible neural source of this effect, a swLORETA source reconstruction was performed on the difference-wave obtained by subtracting ERPs to quasiwords from those elicited by words in the time window 345-395 ms (Figure 10, left). The linear inverse solution showed that the processing of real words was significantly associated with stronger activity in the left inferior temporal gyrus of the temporal lobe (X = -58.5, Y = -55.9, Z = -10.2, BA37) and in the right fusiform gyrus of the temporal lobe (X = 60.6, Y = -55, Z = -17.6, BA37). These data might be interpreted with the notion that, other things being equal (e.g. orthographic well-formedness), only real words possessing conceptual and sensory features might activate a region in the ventral stream that responds to complex objects and is crucial for recalling names of living entities (in this case, animals and vegetables) [51][52][53]. A further swLORETA aimed at assessing the possible neural locus of the visual word familiarity effect was performed on the difference-wave obtained by subtracting the ERPs to quasi-words from those elicited by pseudowords in the time window 345-395 ms (Figure 10, right).
The linear inverse solution showed that the processing of more familiar non-words (obtained by means of a single letter replacement) was significantly associated with stronger activity in the left fusiform gyrus of the temporal lobe (X = -48.5, Y = -55, Z = -17.6, BA37) and in the right fusiform gyrus of the temporal lobe (X = 50.8, Y = -55, Z = -17.6, BA37) (power RMS = 27.7 mV). This demonstrates that the occipito/temporal N350 might indicate the activity of the visual word form area (VWFA) devoted to orthographic processing, and sensitive to lexical or sublexical properties of words such as word familiarity [9,[54][55][56].
A similarly late effect of word frequency on the occipito/ temporal N2 and N3 components (240-360 ms), localized in the left fusiform gyrus of the occipital lobe, has been recently provided [46]. The data have been interpreted as an index of VWFA sub-lexical sensitivity. At this regard, it should be considered that a different degree of orthographic transparency (from the more transparent Italian to the deeper French or English orthographies) might play a role in the activation of a visual reading route.
Conclusion
Overall, the data provided evidence that: (i) the latency of the lexical effect (word/non-word discrimination) varies as a function of the number of a word's orthographic neighbours, being faster to non-derived than to derived pseudo-words; this suggests some caveats in the use in lexical decision paradigms of quasi-words obtained by transposing or replacing only 1 or 2 letters. Our findings also showed that: (ii) the left-occipitotemporal area, probably reflecting the activity of the underlying VWFA (BA37), is sensitive to word visual familiarity, thus explaining its sub-lexical or even lexical sensitivity (word-pseudo-word difference); and (iii) phonological properties, accessed in a parallel modality during orthographic and lexical analysis, strongly affect lexical decision processes, allowing more rapid rejections of items lacking a phonological form. Tailarach coordinates corresponding to the intracranial generators explaining the difference voltage Letter-strings -pseudo-words in the 300-350 ms time window, according to swLORETA (ASA) [38,39]; grid spacing = 5 mm; power = 37.5 μV). | 7,173.6 | 2008-07-04T00:00:00.000 | [
"Linguistics"
] |
Fractal structure and non extensive statistics
The role played by non extensive thermodynamics in physical systems has been under intense debate for the last decades. With many applications in several areas, the Tsallis statistics has been discussed in details in many works and triggered an interesting discussion on the most deep meaning of entropy and its role in complex systems. Some possible mechanisms that could give rise to non extensive statistics have been formulated along the last several years, in particular a fractal structure in thermodynamics functions was recently proposed as a possible origin for non extensive statistics in physical systems. In the present work we investigate the properties of such fractal thermodynamical system and propose a diagrammatic method for calculations of relevant quantities related to such system. It is shown that a system with the fractal structure described here presents temperature fluctuation following an Euler Gamma Function, in accordance with previous works that evidenced the connections between those fluctuations and Tsallis statistics. Finally, the fractal scale invariance is discussed in terms of the Callan-Symanzik Equation.
Introduction
As the formulation of new mathematical tools opens opportunities to describe systems of increasing complexity, entropy emerges as an important quantity in different areas. In recent years, our knowledge about the role played by entropy in physics as well as in other fields have increased rapidly in part, at least, due to the formulation of new entropic forms that generalize in some way the one first proposed by Boltzmann. The non additive entropy, S q , introduced by Tsallis [1] has found wide applicability, triggering interesting studies on the deepest meaning of entropy and on its importance in the description of complex systems [2][3][4][5][6][7].
The full understanding of the non-extensive statistics formulated by Tsallis, however, has not been accomplished yet. Four different connections between Boltzmann and Tsallis statistics have been proposed so far [8][9][10][11][12][13], all of them giving a clear meaning to the entropic index, q that appears in the non-extensive case and, in all connections, Boltzmann statistics are obtained as a special case. Nevertheless, it seems that the physical meaning of this parameter is not understood in the general case, and the difficulty to grasp the significance of the entropic index may be related to the fact that this quantity never appeared before in thermodynamics, while temperature, even if it appears as another parameter in statistical mechanics, had already an intuitive meaning in the description of thermodynamical systems. This fact, however, cannot diminish the importance of the index q in the formulation and description of systems where Boltzmann statistics is not suitable.
In the present work, we make a detailed analysis of the fourth of those connections, where a system featuring fractal structure in its thermodynamic properties, which was named thermofractals [12], has been shown to follow Tsallis statistics. These fractals are relatively simple systems: they are conceived as objects with an internal structure that can be considered as an ideal gas of a specific number of subsystems, which, in turn, are also fractals of the same kind. The self-similarity between fractals at different levels of the internal structure follows from its definition and reveals the typical scale invariance. It has been shown that thermodynamical systems with the structure studied in the present work show fractional dimensions [12], another feature shared with fractals in general. The fractal dimension can be related to the fact that the system energy is proportional to a power of the number of particles, this power being different from unit. This and other aspects of those systems will be discussed in the present work.
Although the motivation that prompted the formulation of thermofractals was related to applications of Tsallis distributions in high energy physics [14][15][16][17][18][19][20][21][22][23][24][25][26][27][28][29][30], Hadron physics [28,[31][32][33][34][35], astrophysics [32][33][34]36] and cosmic ray spectrum [37], the concept of thermofractals is, in fact, general and, in principle, could find applications in other fields. Being a way to relate formally Tsallis and Boltzmann statistics, the analysis of these fractals may shed some light on the open questions about the meaning of the entropic parameter and on the fundamental basis of the non-extensive statistics, and, in this way, it can also contribute to a better understanding of entropy. In this regard, it is worth mentioning that fractals was one of the starting points for the formulation of the generalized statistics [1]. In spite of our objective here being the study of the general properties of thermodynamical fractals, the results obtained in the present work offer a new perspective in the analysis of Hadron structure. This new perspective will be exploited in a future paper.
This work is organized as follows: in Section 2, the main aspects of thermodynamical fractals are reviewed; in Section 3, the fractal structure is analyzed in detail; in Section 4, a diagrammatic scheme to facilitate calculations is introduced, and some examples are given; in Section 5, it is shown that the temperature of the fractal system addressed here fluctuates according to the Euler Gamma Function, a kind of temperature fluctuation already associated with Tsallis statistics; in Section 6, we analyse the scale invariance of thermofractals in terms of the Callan-Symanzik equation, a result that may be of importance for applications in Hadron physics; in Section 7, our conclusions are presented.
Fractals and Tsallis Statistics
From a mathematical point of view, the basic difference between Boltzmann and Tsallis statistics is the probability factor, P(E), which is an exponential function of energy in the case of Boltzmann statistics, and in the non-extensive statistics proposed by Tsallis is a function called q-exponential, given by where τ is associated with the temperature, k is the Boltzmann constant, A is a normalization constant and q is the so-called entropic factor, which is a measure of the deviation of the system thermodynamical behavior from the one predicted by the extensive statistics. The emergence of the non-extensive behavior has been attributed to different causes: long-range interaction, correlations, memory effects, which would lead to a special class of Fokker-Planck equation that would lead to a non-extensive behavior [8], temperature fluctuation [9,10], and finite size of the system [11]. In this work, we analyze in detail a thermodynamical system recently proposed that presents a fractal structure in its thermodynamical functions, which leads to a natural description of its properties in terms of Tsallis statistics [12], and we show that such a system presents a fractal structure in its momentum space. Three important properties for systems with fractal structure are defined in [12,38] and will be used in the present work: 1.
It presents a complex structure with a number N of compound systems that present the same properties as the parent system. 2.
The internal energy, E, and the kinetic energy, F, of each compound system are such that the ratio E/F follows a distributionP(ε).
3.
At some level of the internal struture, the fluctuations of internal level of the compund systems are small enough to be disconsidered, and then their internl energy can be regarded as constant.
In the study of the thermodynamical properties of the fractal system of interest here, an important quantity is the partition function, defined as where ρ(U) is the density of states. The probability of finding the system at an energy between U and U + dU is, accordingly, given by For simplicity, here we will use the quantity which is, obviously, identical to unit. The main characteristic of the fractal system [12] of interest here is that Ω, which can be written in Boltzmann statistics as where A = Z −1 . ρ(U) is a particular density of states characteristic of such fractal system, results in being equivalent to the integration over all possible energies of the q-exponential function, that is, This result shows, therefore, that, for systems with a particular density of states, will be presented in the following: Tsallis statistics can substitute Boltzmann statistics while all the details of the internal structure of the system are ignored. In particular, this system presents a fractal structure in some thermodynamical quantities, and, consequently, it shows an internal structure with self-similarity, i.e., the internal components are identical to the main system after rescaling.
The importance of this result is two-fold: in one hand, it allows for understanding the emergence of non-extensivity and the applicability of Tsallis entropy becomes clear, with the entropic index, q, being given by quantities well defined in the Boltzmann statistics; on the other hand, the structure obtained resembles in many ways strongly interacting systems, where Tsallis statistics has been used, indeed, to describe experimental distributions [25,[39][40][41].
The particular fractal structure that leads to Tsallis statistics has a density of states given by where F and ε are independent quantities and A = AkT. The remaining part of the total energy, The exponent ν in Equation (5) is a constant that will be related, in the following, to the entropic index, andP(ε) to the Tsallis distribution. Notice that the phase space corresponding to a variation dU is given, in terms of the new variable, by dU = dFdε, since the two variables are independent.
Substituting Equation (5) in Equation (3), it follows that with N = N + 2/3 and α = 1 + ε/kT. Observe that now we have integrations on the independent variables F and ε. It will be clear in the next section that the integration in F is equivalent to an integration on the compound system momentum, and that the integration on ε is related to an integration over the energy of a given component of the system, namely, its subsystems. It is straightforward to verify that Ω reduces to Equation (1) ifP(ε) is itself a q-exponential. In fact, definingP substituting Equation (5) into Equation (3) and integrating the last equation in F, it will result in Equation (4) when the following substitutions are made: With these substitutions, the density distribution results in being Comparing Equations (2) and (4), one can see that the energy distribution of the system, P(U) is equal to the probability densityP(ε). Hence, the energy distribution of the system follows the same distribution of the energy distribution of the compound system internal energy, i.e., P(U) ∼P(ε) (11) This result shows that some properties of the main system are found also in its compound systems, a self-similarity property that is present in the system with a fractal structure. In fact, the system described by the density of states given by Equation (5) is a fractal [12], and below its structure is discussed in detail. Moreover, the distribution given by Equation (10) is the well-known Tsallis distribution, hence we can conclude that using Tsallis statistics all complexity of the fractal system is taken into account in a rather simple way, since, from the non-extensive entropy associated with these statistics, all thermodynamics properties can be derived by the usual thermodynamic relations [42,43].
Fractal Structure
The results obtained in the last section show that the system with the density of states given by Equation (5) presents self-similarity, allowing one to interpret it as a fractal system. In this section, such structure will be analyzed, and it will be shown that such a system is a fractal in the energymomentum space. Notice that Equation (7) can be written as The most evident aspect of a fractal structure is its scale invariance. For the system studied here, it means not only that the self-consistency relation represented by Equation (11) must be valid, but also that for the kinetic energy, F, the distributions must be the same at all levels of the fractal structure. From Equation (5), it follows that the distribution for F is which represents a Maxwellian distribution of energy. Therefore, the scale invariance of thermofractals will be accomplished with the requirement that the kinetic energy distribution and the internal energy distribution are invariant under a scale transformation, so and remains constant, hence Here, and in what follows, we use upper index (0) to refer to quantities for the initial level of the thermofractal structure, or main system, and upper index (n) to refer to quantities for the n-th level of the structure. The energy of the initial thermofractal, or main system, is E = E (0) , and the temperature of the internal structure to this level is T = T (1) .
It is interesting to express the scaling properties in terms of the fractal dimension, which is one of the distinguishing properties of fractals and expresses the fact that some quantities do not scale as one could naively expect from the topological dimension of the system. In the present case, as it was shown in Reference [12], energy and particle multiplicity do not increase in the same way, a different behavior from that found in an extensive ideal gas. In fact, in [12], the subsystem energies obey a geometric ratio given by: where is the fractal dimension. Here, R is the ratio between the internal energy of a subsystem and that of its parent system, and is given in terms of the parameters q and N by The internal energy distribution scales by a factor defining the quantity λ = 1/N 1 1−D , and Therefore, fractals with different internal energies present energy distributions that are similar and scales with the internal energy of the subsystems, that is, Remarkably, as all energies are rescaled, it also happens ε to be rescaled; therefore, one has with τ (n) determined by Equations (9) and (21). Thus, the argument of the q-exponential function in the probability distribution P(ε) = Ae q (−ε/(kτ)) = AP(ε) does not change when we move from one level of the system to its next level. This is, in fact, the essence of self-similarity, and P(ε) is the self-similar distribution. Another interesting feature is that In what follows, the structure of the system and its subsystem just described will be investigated in detail. For the sake of clarity, the symbols will be used. Note thatP(ε)Adε is dimensionless. Due to property 2 of thermofractals, one has at the level n − 1 of the fractal structure where F (n) is the total kinetic energy of the compound fractals and E (n) = F (n) ε (n) /kT (n) is their total internal energy. The following normalized energies will be adopted: and with n = 1, 2, . . . corresponding to the level of the fractal structure. Note that the normalized energies are dimensionless and scale invariant. Given a fractal with non-extensive temperature τ, the subsystem energy, ε (n) , fluctuates according to the distribution and, generalizing Equation (7) to any subsystem level n − 1, one can write (see Equation (A10)) in the Appendix A): Ω n represents the energy distribution of a constituent fractal at the n-th subsystem level of the main system.
correspond to the kinetic energy of the i-th constituent fractal at the n-th level of the fractal subsystem structure, each one having an internal energy determined by (n) = ε f (n) i . Equation (30) can be written in terms of the kinetic and internal energy of each constituent subsystem fractal, since Note that Therefore, the constant A also scales as with A 0 = A being the constant for the main system. This result is consistent with the temperature scale in Equation (21) and with the energy scaling relation in Equation (22). It results that with (n−1) the normalized total internal energy of the thermofractals at the level n − 1. Of course, The term d (n−1) can be written in terms of d (n) as since it is related to the number of possible states { (n) i } that would sum up the total energy (n−1) . The delta function here indicates that (n−1) is equal to the sum of the energies (n) j , which is to be found in the interval between (n−1) and (n−1) + d (n−1) .
With these definitions, one has where and Observe that the integrations inside brakets are performed on the variables corresponding to the subsystem level n.
In Equation (37), relation (32) was used for writing d i in place of dε since with m (n) i being the mass of the i-th constituent fractal. One can identify the mass with the internal energy of the fractal subsystem, so that m Then, Equation (37) results in (see Appendix A, Equations (A1) and (A10)) where u i . In Equation (43), the potential Ω is described entirely in terms of the characteristics of the N compound thermofractals at the n-th level of the subsystem fractal structure, with f (n) i and (n) i being related to their kinetic and internal energies, respectively. However, (n) i = i and π (n) i = π i are independent of n, so it results The self-similar relation present in the subsystem fractal structure can be more apparent if Equation (45) is written as where it is possible to recognize in the term A (n) dE (n) i the same expression as in Equation (26), which allows the extension of calculations to include quantities of the next subsystem level in the fractal structure, i.e., level n + 1, since and In addition, due to Equation (7), which allows the passage to the next subsystem level by following all the steps described above. Before going into further calculations, however, a diagrammatic description will be introduced.
Diagrammatic Representation
It is possible to have a diagrammatic representation of the probability densities that can facilitate calculations of Ω and other relevant quantities. In Figure 1, the basic diagram symbols are presented, adopting N = 2 for simplicity. Each of the basic diagrams correspond to a mathematical expression, and the correspondence can be established as follows:
1.
A line corresponds to a term with f = π 2 /(2 ) and = (u − f ), where u is the total energy of the fractal represented by the line.
2.
A vertex corresponds to the term 3.
To each final line, i.e., those lines that do not finish in a vertex, the associated term reads The simplest diagram of interest is a line with a vertex where each branch is a final line. In this case, the diagram scheme results in Delta functions can be included to fix energy and momentum of some of the fractals at any level. As an example, consider the graph shown in Figure 2. Observe that there are two levels of the subsystem structure: the initial fractal has well defined momentum (it is indicated by i), and, in the second level, one of the subsystems has well defined energy and momentum.
Such a diagram gives the probability to find a constituent subsystem fractal f at the third level of the initial fractal i. According to the diagrammatic rules, one has where δ f 1,2 , f f determines the kinetic part of the fractal indicated by f at the second level. It is also possible to consider the subsystem fractal structure in the opposite way: given N fractals with energies { f 1 , 1 , . . . , f N , N } varying in the range d f 1 , f 1 , . . . , d f N , d N , the probability that they form a single fractal with energies f = f 1 +, . . . , + f N and = 1 +, . . . , + N is given by with E/kT = f + and ε = ( / f )kT. This result is a direct consequence of the fact that thermofractals are in thermal equilibrium. After integrating on f , one obtains showing the consistency of the fractal description introduced in the present work. The process described in Equation (55) corresponds to N fractal subsystems merging into a single one. In the example given above and described in Figure 2, the final system generated from the lower branch at the first level can be merged into a single fractal. The tree diagram can then be reduced to a linear diagram, as shown in Figure 3, resulting in a simpler expression for the probability calculated in that example. In this case, the result is Figure 3. The same diagram of Figure 2 represented as a linear graph. This is possible by rearranging terms in the summation of different contributions and using the merging property of thermofractals.
Temperature Fluctuation in Thermofractals
On the right-hand side of the last equality in Equation (37), the distribution of the kinetic energy of the thermofractals at the nth level is given by where T (n) is the scaled temperature at the nth level of the thermofractal. However, at the n − 1 subsystem level, there are N thermofractals, and each of them present different internal energies.
One could, therefore, write the temperature T (n) j associated with the thermofractal j at the previous level. Then, Equation (58) can be written as for each thermofractal i found inside a thermofractal j at level n − 1, with Suppose now that, at the nth level, the internal energy fluctuations are already small enough to be disregarded and the internal energy is a constant m i . Then, according to the diagrammatic rule 3 of thermofractals, the energy fluctuation of the jth thermofractal at the n − 1 subsystem level is proportional to the kinetic energy fluctuation, that is, where µ = m/kT. However, the product of Gamma functions above is itself Gamma function, as described in the Appendix, resulting with M = ∑ m i . Since the thermofractals at the nth subsystem level are being considered as structureless particles, the subsystem at level n − 1 can be considered as an ideal gas of particles with masses m i . The parent thermofractal at level n − 2 is therefore formed by N thermofractals, each one considered as an ideal gas of N particles but at different temperatures T j and with total energy M j . The probability density to find a set with total internal energy energy M is then If, at this stage, one still disregards the thermofractal subsystem structure, the kinetic energy F can only be interpreted as a parameter, while the system energy M is the only quantity that keeps some physical meaning, besides the temperature that now fluctuates inside the system. When this step is performed, the equation above is interpreted as a Gamma distribution of the inverse temperature β = 1/(kT), that is, The distribution of temperatures as described by Equation (65) was already considered in connection to Tsallis distribution in a different context [9,10,44]. On the other hand, the possibility of an equilibrated system with temperature fluctuation is rather controversial [45][46][47][48]. In the present work, such fluctuations are well defined in association with the fractal structure of the thermodynamics functions of the system analyzed. Temperature fluctuations arising from a multi scale system were already analyzed in Reference [49].
Callan-Symanzik Equation for Thermofractals
Due to the evident similarities between Hadron structure and thermofractal structure [13,30,33,36,38] and, due to the possible applications of thermofractals or their consequences in Hadron physics [28,31,33,36], astrophysics [28,32] and high energy physics [16,23,24,30], it is possible to show that the thermofractal description has close connections to quantum field theory as far as scaling properties are concerned. This will be done in a future work [50], but it is convenient to advance some aspects as follows.
The simplest diagrammatic representation of the thermofractal evolution form one level to the next level corresponds to a vertex with an initial system characterized by energy and momentum ( 0 , π 0 ), as described by diagram in Figure 1b, at an arbitrary level n generating N subsystems with ( i , π i ) such that 0 = ∑ i and π 0 = ∑ π i . Such diagram leads to Here, the passage from one level to the next subsystem one represents only an alternative description of the same system. However, one can consider that the initial thermofractal can break into N pieces, each one being a thermofractal. Let g be a coupling constant that gives weight to a transition from one subsystem level to another one, and then one can write and the termḡ can be considered as an effective coupling constant. Γ i,j is, then, understood as a vertex function that is clearly scale free. Vertex functions that are invariant under scale transformation can be described by the Callan-Symanzik equation, which played a fundamental role in the determination of the asymptotic freedom in Yang-Mills theory. A thermofractal version of such equation was already derived in Reference [51], and it will be derived here in a different way.
The thermofractal temperature T = T (n) works, as seen above, as a scale parameter that determines the fractal structure of the subsystem at a certain level, so one can write the factor N n in terms of the subsystem temperature by using Equation (20), i.e., Since N = N + 2/3, for the sake of scaling, it will be assumed N ∼ N , which is a good approximation for n sufficiently high. It results that the vertex function is Notice that when the scale transformation on energy and momentum is performed, so that π → λ π and → λ , the distribution P(ε) remains unchanged, since E/F is invariant. Therefore, it can be left out of the scale invariance analysis of the vertex function studied here. Taking this aspect into account and introducing M = kT for the sake of simplicity, the scale invariance of the vertex function Γ is expressed by where it made use of the scaling property of thermofractals. From the above expression, it is straightforward to conclude that and, with these results, one can write where d = 1 − D is the anomalous dimension for thermofractals, a result equivalent to the one obtained in Reference [51]. The fact that thermofractals satisfy the Callan-Symanzik equation indicates that, if it is possible to describe such systems through a field theoretical approach, the Yang-Mills theory is the appropriate framework for it. These results, therefore, sets the grounds for a more fundamental description of thermofractals in terms of gauge field theory, but it will be developed in a future work [50].
Conclusions
In the present work, the structure of a thermodynamical system presenting fractal structure, recently introduced in [12], is investigated in detail. The fractal structure in thermodynamics has been shown to lead to non-extensive statistics in the form of Tsallis statistics; therefore, this system can shed some light on relevant aspects of the generalized statistics.
The study presented here provides evidence of the consistency of the proposed fractal structure of thermodynamical functions that leads to Tsallis statistics [12]. The diagrammatic representation is a good auxiliary tool for calculations. In the present investigation, the scaling features of thermofractals are made clear, and it is concluded that temperature fluctuates from one subsystem level of the thermofractal structure to the other. It is interesting that temperature fluctuations are pointed out as a possible origin of non-extensive statistics [10]; therefore, one can conjecture that thermofractals will present the same temperature fluctuations necessary to obtain Tsallis statistics, as given in Equation (21).
One of the main results obtained in the present work is given by Equation (33), showing that the normalizing quantity increases as the system is described by means of structures at deeper levels, n. This is a consequence of the fact that the systems at deeper levels contribute less to the energy fluctuation of the system. It follows, on the other hand, from the fact that, at deeper levels, thermofractals are less massive, and since the energy fluctuation of thermofractals presents self-similarity, energy fluctuation tends to vanish as n increases, as described through Equation (22).
Another interesting result within the context of Hadron production in high energy collisions is the scale parameter λ n , which appears in Equation (20). If E (n) = Λ and E (0) = E, it is obtained that Since N n = (N + 3/2) n ∼ M, for n sufficiently large, with M being the particle multiplicity, it follows that log M = (1 − D) log(E/Λ) Thus, the measurement of particle multiplicity in high energy nuclear collisions gives an easy way to access the associated fractal dimension D in practice.
In addition, it is shown that thermofractals, when the internal structure is not considered, can be interpreted as an ideal gas with inverse temperature that fluctuates according to the Euler's Gamma function. Such temperature distribution was already connected to Tsallis distribution, but here it is obtained as a consequence of the fractal structure of thermodynamics functions.
In summary, a diagrammatic prescription for calculations with the fractal structure is introduced, which can certainly help in calculations involving several subsystem levels of the fractal structure, and some examples are presented. In particular, it is shown that the equivalence between tree diagrams and linear diagrams, a result that simplifies the calculations of the relevant quantities. Temperature fluctuations inside the thermofractal is analyzed, reproducing a well-known distribution already connected to Tsallis distribution. The Callan-Symanzik equation for thermofractal structure was obtained, opening the opportunity to develop a field theoretical approach for thermofractals. where S n = 2π n/2 Γ(n/2) (A6) is a surface factor for the n-dimensional hypersphere, with Γ being the Euler Gamma Function. However, F = p 2 /(2m), then dF = p m dp (A7) hence dF 2F = dp p (A8) Therefore, from Equation (A5), one has | 6,878.2 | 2017-12-01T00:00:00.000 | [
"Physics"
] |
A rotating annulus driven by localized convective forcing: a new atmosphere-like experiment
We present an experimental study of flows in a cylindrical rotating annulus convectively forced by local heating in an annular ring at the bottom near the external wall and via a cooled circular disk near the axis at the top surface of the annulus. This new configuration is distinct from the classical thermally driven annulus analogue of the atmosphere circulation, in which thermal forcing is applied uniformly on the sidewalls, but with a similar aim to investigate the baroclinic instability of a rotating, stratified flow subjected to zonally symmetric forcing. Two vertically and horizontally displaced heat sources/sinks are arranged, so that in the absence of background rotation, statically unstable Rayleigh–Bénard convection would be induced above the source and beneath the sink, thereby relaxing strong constraints placed on background temperature gradients in previous experimental configurations based on the conventional rotating annulus. This better emulates local vigorous convection in the tropics and polar regions of the atmosphere while also allowing stably-stratified baroclinic motion in the central zone of the annulus, as in mid-latitude regions in the Earth’s atmosphere. Regimes of flow are identified, depending mainly upon control parameters that in turn depend on rotation rate and the strength of differential heating. Several regimes exhibit baroclinically unstable flows which are qualitatively similar to those previously observed in the classical thermally driven annulus. However, in contrast to the classical configuration, they typically exhibit more spatio-temporal complexity. Thus, several regimes of flow demonstrate the equilibrated co-existence of, and interaction between, free convection and baroclinic wave modes. These new features were not previously observed in the classical annulus and validate the new setup as a tool for exploring fundamental atmosphere-like dynamics in a more realistic framework. Thermal structure in the fluid is investigated and found to be qualitatively consistent with previous numerical results, with nearly isothermal conditions, respectively, above and below the heat source and sink, and stably-stratified, sloping isotherms in the near-adiabatic interior.
Introduction
The circulation of the atmosphere is the result of several physical forcing processes: gravity, rotation, radiative exchanges (incoming sunlight, outgoing thermal infrared radiation), clouds and moisture processes, chemical reactions, interaction with surface topography, boundary processes, etc. Although numerical global circulation models solving equations of dynamics, thermodynamics, and continuity in 3D on a sphere are capable of reproducing and even predicting many features of the observed atmosphere, their complexity and the need to represent unresolved processes by semi-empirical parameterizations mean that insights and understanding of how the real atmosphere works may still be elusive.
In this context, laboratory analogues of the dynamics of the atmosphere continue to help scientists understand fundamental dynamical processes in the atmosphere by Abstract We present an experimental study of flows in a cylindrical rotating annulus convectively forced by local heating in an annular ring at the bottom near the external wall and via a cooled circular disk near the axis at the top surface of the annulus. This new configuration is distinct from the classical thermally driven annulus analogue of the atmosphere circulation, in which thermal forcing is applied uniformly on the sidewalls, but with a similar aim to investigate the baroclinic instability of a rotating, stratified flow subjected to zonally symmetric forcing. Two vertically and horizontally displaced heat sources/sinks are arranged, so that in the absence of background rotation, statically unstable Rayleigh-Bénard convection would be induced above the source and beneath the sink, thereby relaxing strong constraints placed on background temperature gradients in previous experimental configurations based on the conventional rotating annulus. This better emulates local vigorous convection in the tropics and polar regions of the atmosphere while also allowing stably-stratified baroclinic motion in the central zone of the annulus, as in mid-latitude regions in the Earth's atmosphere. Regimes of flow are identified, depending mainly upon control parameters that in turn depend on rotation rate and the strength of differential heating. Several regimes exhibit baroclinically providing the possibility of precise, reproducible experiments to test ideas and hypotheses.
In the spirit of this approach, the idealization of the midlatitude atmosphere in the laboratory as a differentiallyheated, rotating, cylindrical annulus (see Fig. 1a) originally proposed by Hide, has proved to be a prolific and productive analogue of many features of the atmospheric circulation (Hide 1958;Fowlis and Hide 1965;Hide et al. 1977;Read et al. 2014). By combining only the three essential physical ingredients forcing the atmosphere, i.e., gravity, rotation, and differential heating, the 'classical' thermally driven rotating annulus setup provides an experimental configuration that enables both large-scale overturning circulations and baroclinic instabilities to be studied and explored. It consists of an assembly on a rotating platform with a fluid contained between two upright, coaxial cylinders that are maintained at different temperatures, with thermally insulating, horizontal endwalls (see Fig. 1b) and reproduces fluid motions and baroclinic instabilities that transport heat from the equator, where solar heating of the surface is most intense, towards the cooler polar regions in the atmosphere. In addition, the latitudinal dependence of the Coriolis force (the so-called 'beta-effect') can also be reproduced by simply adding a conical topography at the bottom inside the annulus (e.g., Bastin and Read 1997).
Thus, this setup has proven to be a simple enough system to capture the fundamental physics and with enough complexity to represent nonlinear features and transition sequences to chaos similar to those of atmospheric dynamics (Hignett et al. 1985;Früh and Read 1997;Bastin and Read 1998;Read 2003;Von Larcher and Egbers 2005;Wordsworth et al. 2008;Vincze et al. 2014). It has also proved valuable as a tractable 'test bed' within which to test numerical codes and methods (Harlander et al. 2011;Vincze et al. 2015) as well as to benchmark statistical-dynamical analysis methods in widespread use in meteorology, such as data assimilation (Young and Read 2013). Moreover, it is even inspiring new experiments for studying non-terrestrial planetary atmosphere dynamics in the laboratory (Read et al. 2015;Yadav et al. 2016).
The classical annulus configuration, however, has some important limitations as an analogue of the mid-latitude atmosphere, an important aspect of which derives from its use of isothermal vertical boundaries to provide heating and cooling. This leads to an intense, boundary layer dominated, overturning circulation which imposes a strong constraint on the background stratification in the working fluid, and which, therefore, limits the possibility for internal instabilities to change this, even at large equilibrated amplitudes. In contrast, in the real atmosphere (see Fig. 1a, c), the effective heat source in the tropics is located mostly near the ground (through absorption of infrared radiation re-radiated from the solar-heated surface), while the heat sink is In each configuration sketched in 2b or 2c, the unwrapped rectangular shape obtained corresponds to one half r-z section of the experimental annular configuration. Axisymmetry with vertical axis is implicit and assumed near the inner vertical boundary near the heat sink region, as illustrated in a predominantly located in the upper troposphere in the mid-latitudes and polar regions 1 (e.g., see Chan and Nigam 2009). The resulting circulation spontaneously partitions itself into a convectively unstable/neutral region in the tropics that interacts with a statically stable, baroclinic region at mid-latitudes that, despite being cooled from above, is statically stable. Precisely how is heat passed from the convectively turbulent region in the tropics into the stably-stratified sub-tropics and mid-latitudes is still not well understood quantitatively. In particular, it is not clear what determines the characteristics of the observed vertical stratification in the mid-latitude atmosphere (its static stability, surface thermal contrast, and tropopause height) and more generally what are the dominant mechanisms for nonlinear equilibration of baroclinic and convective instabilities in the atmosphere (e.g., see Schneider 2006; Stone 2008 for reviews).
Diverse theoretical and numerical studies of stratified macroturbulence and equilibration processes have recently explored the possible role of baroclinic instabilities in acting to stabilise their own thermal environment (Zurita-Gotor and Lindzen 2007; Zurita-Gotor and Vallis 2009Vallis , 2010Schneider and Walker 2006;Schneider 2006). The idea of a feedback between the baroclinic instability and the thermal stratification was originally formulated by Stone (1978). In his baroclinic adjustment theory, based on the two-layer quasi-geostrophic model (Phillips 1951), a rotating, stratified fluid may adjust the mean thermal structure to maintain a stably-stratified environment that is close to a marginally critical state for baroclinic instability (Stone 1978) with a criticality parameter,ξ, that equilibrates to ξ ≈ 1. ξ is a measure of the isentropic slope and defined as (Zurita-Gotor and Vallis 2010), with θ the mean potential temperature, f ≡ 2� sin φ (where φ is latitude) and β quantifies the (linearised) dependence of the coriolis force on latitude so that f ≃ f 0 + βy with f 0 a constant. Recent studies, however, have disputed this analysis, since numerical simulations have shown that the two-layer model does not necessarily equilibrate to a value of ξ ≈ 1 (e.g., Salmon 1978Salmon , 1980Vallis 1988).
Nonlinear wave-wave interactions may play a significant role in determining whether the flow equilibrates to a marginally critical (ξ 1) or supercritical (ξ > 1) state (Vallis 1988;Schneider and Walker 2006;Schneider 2006). Some evidence further suggests that wavewave interactions may become spontaneously suppressed under some conditions (Schneider and Walker 2006;Schneider 2006), leading to marginally critical conditions and the absence of a large-scale turbulent energy cascade, although the detailed structure of the heat sources and sinks may also be important in determining ξ (Zurita-Gotor and Vallis 2009Vallis , 2010. Schneider (2004) further highlights that the inhibition of nonlinear eddy-eddy interactions offers an explanation for the historic success of linear and weakly nonlinear models of large-scale extratropical dynamics, but also rests on the postulate that the kinematic mixing properties of baroclinic eddies exhibit no essential vertical structure. The key point here is that this problem remains highly active and controversial, and laboratory experiments of the kind described here may provide some valuable new insights.
In this study, to explore the impact of changes to the forcing on such baroclinic equilibration processes and the criticality parameter, we have constructed a new laboratory analogue of atmospheric circulation with a rotating annulus that is convectively forced by local differential heating on horizontal boundaries. We aim, thereby, to relax some of the thermal constraints on background temperature gradients in the 'classical' annulus experiments to better represent qualitatively how the thermal structure of the atmosphere itself is maintained, with particular reference to the equilibration mechanisms of convective and baroclinic instabilities. Section 2 describes the characteristics of the new experimental apparatus. Validation of the setup and some preliminary results are presented in Sect. 3 and briefly discussed in Sect. 4.
Experimental setup
The new experimental configuration is inspired by the schematic model, as illustrated in Figs. 1c and 2. By unwrapping the spherical domain, this leads to the usual atmosphere-like cylindrical configuration, but where local differential heating at the horizontal boundaries is used to emulate the radiative heat sink in the upper atmosphere in the polar regions and the heat source near the ground in tropical regions. This allows the possibility of the formation of a statically stable (though baroclinically unstable) zone, sandwiched between convectively unstable regions over/underlying the heated or cooled boundaries (Wright et al. 2017) and permits a possible feedback of the resulting baroclinic instability onto the background stratification. Figure 2 illustrates the experimental configuration, with the fluid contained within an annular channel between two upright, rigid, coaxial, thermally insulating cylinders and a flat (or conical), horizontal base, within which the outermost δ hot = 10 cm in radius is maintained with a constant heat flux and equilibrates at a corresponding temperature T b , while for r < b − 10 cm, the lower boundary is thermally insulating (perspex material). The upper surface is free-slip except for the innermost r ≤ δ cold = 9.25 cm, for which the temperature T a is fixed, thanks to circulating water within the cold central plate and such that T b > T a . The whole system rotates uniformly about the axis of symmetry in an anticlockwise direction at angular velocity up to 3.5 rad s −1 .
The global design and a view of the whole assembly of the new experimental setup are presented in Fig. 3. This shows the perspex tank of internal radius b = 48.8 cm, filled with water and including the aluminium circular ring of radial width 9.25 cm attached to a central 5 cm diameter cylindrical post (i.e., with inner annulus radius a = 2.5 cm) cooled with water circulated from an external water bath in the lab frame (see Fig. 3a) via a DSTI TM rotary union. Topview cameras can be seen attached to the embedded rotating aluminium framework. The 10 cm wide, electrically heated, annular ring is composed of two aluminium plates, onto the lower face of each of which two flexible, plasticcoated Clarian TM heaters (each resistance ≈ 18 ± 1 ) are attached. All four heaters are connected in a parallel configuration to a 10A-35V TTI Thurlby Thandar TSX3510P power supply. Both the cold plate and the electrically heated annular plate are equipped with Captec TM heat fluxmeters and thermal probes, platinum resistance sensors and type T thermocouples, with watertight wire connections at the bottom of the tank (see Fig. 3c).
To monitor the temperature within the fluid and measure the thermal stratification, 3 mm diameter rods instrumented with ten type T microthermocouples have been introduced in a flexible way above the central (baroclinic) zone at midradius and above the heated external (convective) zone, as sketched in Fig. 2 and visible in Fig. 3a inside the tank. Calibration of the thermocouples was carried out independently by immersing the rods in the temperature-controlled reservoir of the chiller and using its built-in platinum resistance probe as a reference. As visible in Fig. 3a-c, an annular silver-coloured ring is located around the tank and consists of a series of white light LED arrays, collimated between two thin, aluminium plates to generate a 5 to 10 mm-thick horizontal white light sheet. On the shelf underneath the main turntable are located a remotely controlled computer, connected by a GPIB connection to the acquisition system for the thermal probes (a computer-controlled Agilent 34970A data logger unit with a 34901A card for measurements using 4-wire platinum resistance sensors and acquisition of voltages from fluxmeters, and a 34908A card for type T thermocouple measurements), and power supplies for the lights and annular heaters (see Fig. 3a). The data logger is set to acquire temperature measurements every 1 or 2 min in the general monitoring configuration and, depending on acquisitions, every 5 or 10 s in equilibrium phases. The principal dimensionless parameters expected to govern the behaviour of the system are in common with the conventional rotating annulus experiment. Thus, the thermal Rossby number is a stability parameter comparing the characteristic velocity U T of the thermally driven flow, consistent with the geostrophic thermal wind relation with the velocity scale L based on the rotation rate and the typical horizontal size of the tank L = b − a: with α the thermal expansion coefficient and T the temperature difference between the plates (see Table 1).
The thermal Rossby number can be related to the Burger number: where N is the buoyancy frequency N 2 = −(g/ρ)∂ρ/∂z , via a factor proportional to the mean slope of the isotherms (and isopycnals) ∼ �T horiz /�T vert . Linked to the vertical static stability, Bu represent the ratio of , the Rossby deformation radius (a typical length scale of baroclinic motion) to the scale of the domain L, and is a key parameter determining the onset of baroclinic instability (Hide and Mason 1975). The importance of viscous forces compared to Coriolis effects is described by the Taylor number in the form deduced by Fowlis and Hide (1965): where symbols are as defined in Table 1.
Following Hignett et al. (1981), Read (1986), King et al. (2009) and Read et al. (2014), we anticipate that a significant parameter associated with the thermal structure of the system will be the squared ratio of the characteristic length scales of the buoyancy-driven thermal boundary layer (without rotation) and the Ekman layers. Assuming that the thermal boundary layer length scale ℓ T = H/(2Nu) ∼ HRa −γ , where Nu is the Nusselt number characterising the efficiency of the heat transfer in the fluid in comparison with pure conduction Ra is the Rayleigh number (3)) 2 × 10 −2 -100 Ekman number (Eq. (9)) E 5 × 10 −6 -3 × 10 −4 Taylor number (Eq. (5)) Ta 10 8 -10 12 Rayleigh number (Eq. (7)) Ra 3.4 × 10 −9 Boundary layer thicknesses ratio (8)) Prandtl number (Eq. (11)) σ ≈ 7 (water only) or 13.4 (water-glycerol solution) Aspect ratio (Eq. (10)) Ŵ 1.54 and γ is an exponent that depends on the convective regime (e.g., see Chillà and Schumacher 2012), the boundary layer ratio parameter may be defined as where ℓ E = HE 1/2 is the Ekman layer thickness with E the Ekman number: P is, therefore, directly proportional to . Other intrinsic parameters are determined once geometry and working fluid are chosen: the aspect ratio: the Prandtl number: which compares viscous and thermal diffusivities. Experiments presented in this paper have been carried out in a flat-bottom configuration and with a 30 cm depth of either pure water or, to increase the density, a 17% solution by volume of glycerol and water (which also changed the viscosity and Prandtl number).
The two heaters were typically energised with an electrical power of 206W for the annulus, while the command temperature of the bath for the water circulation was set to 14 °C and air-conditioning was set to 21 °C in the room. This led to a maximum effective T ≈ 8 to 10 °C between the cold plate and the aluminium annular ring at equilibrium, as illustrated in Fig. 4. This was associated with a measured total power from the heated ring and into the cold plate of around 150 and 50 W, respectively (Fig. 4b). The 100 W loss is consistent with the average T bulk ≈ 23 − 25 °C being larger than the temperature of the air-conditioned room (21 °C). Although the total power from the cold plate was smaller than the one from the heated annulus, the flux (in W/m 2 ) through the cold plate is found, as expected, to be larger than through the annulus, since the whole annulus surface area (S ≈ 0.27 m 2 ) is larger than that of the cold plate (S ≈ 0.041 m 2 ).
The whole system rotated uniformly about the axis of symmetry at angular velocity = 0.03-1.1 rad s −1 in the current series of experiments. A summary of the values and ranges of the main parameters for the current experiments is listed in Table 1. After the first day or two of equilibration, changes in rotation rate were carried out and the system was allowed to relax to a new equilibrated dynamical regime for 1 h or more and thermally monitored before image acquisition.
In experiments using pure water, an exploration of the different regimes was performed by injecting a concentrated solution of fluoresceine dye to highlight vortices and other flow features, while in the water-glycerol experiments, 355-500 μm Pliolite particles matching the density of the solution (≈ 1.043 × 10 3 kg m −3 ) were suspended in the fluid and illuminated by the LED light sheet to trace the flow. Images were acquired with a DFK 31BF03 Imaging Source firewire camera (resolution 1024 × 768) and an ethernet AVT Manta 609B camera (resolution 2752 × 2206) within the rotating frame of reference at a frame rate of 1 fps. In experiments with particles, streaklines were produced by superposing 20 or 50 images together in a given equilibrated regime. In addition, an FLIR i50 thermal imaging camera (resolution 240 × 320) was used to image the temperature variations across the surface of the water.
Thermal structure
As explained in introduction and sketched in Fig. 2a and as shown in numerical simulations of axisymmetric flow 2D regimes of the same system (see Fig. 5 and Wright et al. 2017), the formation of a statically stable though baroclinically unstable zone in the central area is expected at equilibrium, sandwiched between convectively unstable regions over/underlying the heated or cooled boundaries. Shraiman and Siggia (1990) and review by Chillà and Schumacher (2012)]. Errors bars shown are illustrating standard deviation on the averaging period at a given rotating rate When background rotation is significant, two regimes were identified as a function of the rotation rate and classified as a function of P, the ratio of (non-rotating) thermal boundary layer to Ekman boundary layer thicknesses (Read 1986;Read et al. 2014;Wright et al. 2017). An (r, z) cross section of the temperature field from axisymmetric numerical simulations by Wright et al. (2017) of the similar configuration as the current experiment is illustrated in Fig. 5 for the range P [0.1-1]. A secondary density current that flows beneath the heat sink, down the side of the inner cylinder and along the base towards the heat source, was found under very weak rotation conditions (Wright et al. 2017). For weak rotation (defined as 0.05 P 0.2), this is replaced by a more uniform thermal gradient, with almost vertical isotherms close to the inner and outer cylinders, confined to Stewartson layers on the vertical boundaries, in association with a developing vertical shear in the azimuthal velocity (see Fig. 5 and Wright et al. 2017).
For moderate rotation rates with P of order or above unity (0.2 P 2), the thicknesses of the thermal boundary layer and the Ekman layer are comparable. Free convection then results in well-mixed, approximately isothermal, regions above and below the heat source and sink, respectively. Sandwiched between these two convective zones there emerges a stably-stratified baroclinic region with approximately uniformly sloping isotherms. In association with the horizontal temperature gradient obtained within the tank, a strong vertical shear of the azimuthal velocity is now seen to have developed, as consistent with the geostrophic thermal wind balance.
This thermal structure found in the 2D simulations (Wright et al. 2017) was approximately validated in experiments using a water-glycerol solution, as shown in the time-averaged temperature profiles in Fig. 6, which were taken for different rotation rates at equilibrium (indicated by the shaded zones in the time series shown in Fig. 11): an overall stable stratification tendency is found in the central region, while an almost isothermal profile is indeed measured above the heated annular ring.
During the transition towards thermal equilibrium (before 15 h, not shown), the temperature measured by thermocouples at the bottom of probe R2 (very close to the heated ring near the external walls) rapidly rises as expected, while thermocouples at the top first of R2 become colder and then equilibrate to an almost isothermal configuration after 15 h. In the mean time, thermocouples on probe R1 in the central zone at mid-radius reveal an equilibration towards a final statically stable configuration, as confirmed by time-averaged temperature profiles in Fig. 6. As the squared ratio of thermal boundary layer to Ekman boundary layer thicknesses, P, is estimated to be below or around unity for the current experiments (see also values in Fig. 4c, inset), these thermometric results are consistent with the global picture of two almost well-mixed near-isothermal convection zones separated by a statically stable baroclinic zone as seen in simulations over the same range of P ratio (see Fig. 5).
The thermal structure differs somewhat in the case of experiments using a water-only solution and dye visualisation (see Figs. 7,8), but a statically stable interval is still present in the lower part of the profiles, merging into a more isothermal section in the upper part of the profile. This may be due to convection processes still at play as the flow gradually evolves towards equilibrium (given a shorter dwell time at each rotating rate) or alternatively may have been the result of evaporation and wind stress processes at the surface due to there having been no enclosure around the tank at that time (in contrast to the water-glycerol experiments conducted afterwards). We estimated the power lost at the free surface due to evaporation (P evapo = L h ρdV /dt, where L h is the water latent heat and dV /dt the volumetric debit) by performing a rough monitoring of the decrease in time of the water level along the depth of the cold plate in water-only experiments. The power lost via evaporation process was found to be in the range 50-80 W, so not negligible compared to the power through the cold plate. This effect is likely to be still present in the glycerol case and temperature profiles in Fig. 6 in the mid-radius central zone also suggest the formation of a mixed layer in the uppermost few centimeters beneath the surface, in contrast to what is found in numerical simulations, where a rigid-lid boundary was used. However, this effect seems somewhat reduced, partly due to the physical properties of the mixture, or more likely mainly due to the enclosure of the whole setup with thick black fabric. Fluctuations can be seen in the interior of the same profiles within this central zone, however, which should be investigated in further experiments. Thus, at least some of these observed differences between pure water and glycerol mixture experiments in temperature profiles are likely due to evaporation effects. This illustrates the well-known imperative in rotating experiments with a free surface to reduce air exchange between air above the tank with the air in the room using thick curtains or panels or the use of rigid-lid experimental configurations, to prevent potentially non-negligible evaporation and wind stress effects. In the present case, the flux (in W/m 2 ) through the cold plate is much larger than through the free surface. Since the free surface is 17 times larger than the cold plate area, however, convective plumes can be expected to be more intense underneath the cold plate than the free surface.
Heat transport
The use of fluxmeters on the cold plate and on the heater ring allows us to estimate the Nusselt number, as illustrated in Fig. 4c, for the glycerol experiments. For estimating the conductive power Q cond in the denominator of the measured Nusselt number in in Fig. 4c, the coefficient of proportionality of Q cond with T was retrieved from numerical calculation solving the steady conduction equation for the same control and geometrical parameters. Then, the heat transport efficiency is written in the form of normalized Péclet value (Nu − 1)/Nu 0 . The Péclet number associated with the annular heated ring is found fixed, while on the cold plate, it is found that it rises with the P ratio, i.e.,with (Fig. 4c). Such two Nusselt number measurements associated with fluxmeters signals on each plate normally give two estimates of the same quantity. However, in reality, the power budget is distributed as P hot plate = P cold plate + P free surface + P walls , with lateral conduction through perspex walls P walls likely to be negligible compared to the others, given the small difference between the bulk temperature and the room temperature. P free surface is likely dominated by evaporation and whose value estimated above is consistent with this global budget given the values of power measured through the heated Fig. 7 Measurements of vertical temperature profiles for the midradius zone and the zone above the heated ring averaged for 1 h in the water-only experiment with T command cold water bath = 14 °C and input power in heated ring = 206 W. Gray area (shown only on one profile in the central zone for clarity) is the estimated ±0.1 °C incertainty Fig. 8 Measurements of vertical temperature profiles for the midradius zone and the zone above the heated ring averaged for 20 min in the water-only experiment with T command cold water bath = 18 °C and input power in heated ring = 97.5 W. Gray area (shown only on one profile in the central zone for clarity) is the estimated ±0.1 °C incertainty annulus and cold plate. Therefore, the Nusselt number retrieved from the cold plate seems the more relevant estimate to better reflect the transverse thermal efficiency and the different flow dynamical regimes depending on rotating rate than the Nusselt based on the heated annulus measurements. Even if the configuration is fully 3D here and with the presence of baroclinic instabilities, in contrast to numerical axisymmetric 2D simulations, it turns out that a similar increase of normalized Péclet number from the cold plate as a function of P was found by Wright et al. (2017) in the [0.04-0.2] range.
Horizontal flow
The flow configuration anticipated from the numerical simulations of Wright et al. (2017), of a baroclinic zone sandwiched between two convection zones, is further validated in horizontal streakline features, as illustrated in Figs. 9 and 10 (and the Online Resource video, for example, in the 0.08 and 0.12 rad s −1 cases; Movies 3 and 4). A clear separation can be noticed between the mid-radius zone, with the large amplitude baroclinic wave ( Fig. 9 reveals a nonaxisymmetric pattern consistent with baroclinic mode 1 or even an elliptic pattern corresponding to baroclinic mode 2), and a zone in an outer annular strip near the external wall presenting small-scale features likely to be remnants of the Rayleigh-Bénard convection plumes in that zone. Even in snapshots exhibiting a larger amplitude of baroclinic wave at = 0.12 rad s −1 , the meandering jet does not seem to reach the outer cylinder. This new experimental setup, therefore, offers the capability of exploring the properties of fully developed baroclinic instabilities with minimal interference from sidewall boundaries.
At low rotation rates, the experimental flows observed are consistent with the structure of the azimuthal velocity predicted by 2D axisymmetric simulations (see cases P = 0.03 and P = 0.15 in Fig. 5 and Wright et al. 2017). The occurrence of higher values of azimuthal velocity underneath and near the inner cold plate than at larger radii is confirmed for the = 0.03 and 0.08 rad s −1 cases, as illustrated in the video of a close-up view of dye experiments (e.g., Movie 1 in the supplementary Online Resource) and in streakline movies (Movies 2-3 in supplementary Online Resource). This occurrence of an intense 'polar vortex' is particularly interesting as this characteristic has been suggested as a significant dynamical feature in slowly rotating planetary atmospheres such as on Titan (Lebonnois et al. 2012) and Venus.
Baroclinic instability
In the range of rotation rates and temperature differences used, it is expected that isopycnals in the statically stable region near mid-radius would be quite strongly sloped, allowing baroclinic instabilities to be able to grow. Figure 10 shows examples of typical observations of flow patterns found in the fluid, illustrated with passive dye tracer images (after substraction and/or inversion of the image), particle streaklines and infrared thermographic camera images. Indeed, several modes of baroclinic instability were observed, from low-order, regular, azimuthal wave modes at small angular velocities to irregular and turbulent motions with small, chaotic vortices at higher rotation rates, as illustrated in Fig. 10 and in movies 2-5 in the supplementary Online Resource. For = 0.12 rad s −1 , the baroclinic wave is particularly highly developed and presents an almost steady mode 3 associated with a measured Burger number Bu = 0.15. Thus, the onset of strong baroclinic waves is found just above 0.08 rad s −1 which corresponds to Bu = 0.65 (see Figs. 6, 10), which is approximately consistent with the Eady criterion (see Hide and Mason (1975), with an instability threshold at Bu = 0.581). The evidence of weak instability for 0.03 rad s −1 up to 0.08 rad s −1 (Bu = 3.6 and 0.65, respectively) suggests the presence of instability beyond the Eady cutoff. This is also consistent with the zone of weak waves found by Hignett et al. (1985) (see symbol w in Fig. 10) suggesting a possible vertical confinement or surface intensification of baroclinic waves (Hide and Mason 1978;Jonas 1980). This will be investigated in future work. Nonlinear amplitude vacillation features, similar to those found in classical rotating annulus experiments (e.g., Früh and Read 1997), were also observed. Thus, it was observed, e.g., at rotation rate 0.12 rad s −1 , that a baroclinic mode with the initial wavenumber 3 was found to evolve to a mode 4 (i.e., suggesting a form of nonlinear wavenumber vacillation). Other realizations at a similar rotation rate revealed an evolution of a mode 3 with varying amplitude (i.e., amplitude vacillation), as illustrated in supplementary material Movie 4.
In the same way, with a zoomed-in view of temperature time series from the thermocouple probes in the central zone at mid-radius, Fig. 11 also shows oscillatory patterns that reveal a slow drift of the lowest wavenumber components around the tank (τ drift ≈ 55min) for = 0.03 rad s −1 (see also Fig. 12 for a closer view and Fourier analysis of it) and more complex, nonlinear baroclinic oscillations at higher rotation rates. A deeper analysis of these features is beyond the scope of this paper and will necessitate longer duration of data acquisition at a given equilibrium state for each rotation rate in future experiments; this will be presented elsewhere.
Finally, we compare our results with previous observations of baroclinic modes with the classical rotating annulus configuration. Figure 10 shows the location of the different observed modes in a Ta-regime diagram on which the locations of various dynamical regimes Mason (1975) and Hignett et al. (1985) in red and blue lines have been plotted for comparison.
Numbers represent mode wavenumber and added symbols that follow original annotations in Hignett et al. (1985) with a, axisymmetric; w weak waves, S steady waves, AV amplitude vacillation, SV shape vacillation, I irregular. See also movies in supplementary material from previous thermally driven annulus experiments using the classical configuration (Mason 1975;Hignett et al. 1985) are superimposed. This indicates a reasonably good agreement with our experiments in the regions of different wave regimes, between the new locally forced atmosphere-like configuration compared to the classic rotating annulus case.
Conclusion
A new atmosphere-like experiment has been designed in the form of a local, convectively forced, thermally driven, wide rotating annulus to emulate more effectively than in previous experimental studies the distribution of local heat sinks in the upper troposphere of the Earth in the polar regions and a heat source near the ground associated with strong convection in tropical regions. The first experiments with this new "baroclinic sandwich" setup exhibit the thermal structure in the (r, z) cross plane anticipated from the 2D numerical simulations of Wright et al. (2017), as well as a fast 'polar vortex' and the confirmation of coexisting baroclinic and convection zones in the flow structure. This validates this new experimental setup as a tool to study equilibration processes with coexisting baroclinic and convective instabilities in a form highly relevant to what occurs in a planetary atmosphere such as the Earth's.
Baroclinic instability wave modes are found to grow and equilibrate, in broad agreement with previous results in the more classical annular configuration, with similar zonal wavenumbers and vacillation behaviour. Future work will include the addition of a sloping bottom to take into account in the laboratory effects equivalent to lateral variations of the Coriolis parameter. Particle image velocimetry experiments will also then allow us to gain quantitative insights into the nonlinear baroclinic equilibration processes on the beta plane, as well as into the geostrophic turbulence regime, to help understand the peculiar characteristics and dynamics of a mid-latitude Earth-like atmosphere. | 8,330 | 2017-05-23T00:00:00.000 | [
"Physics",
"Environmental Science"
] |
Exploring the Interplay of Antimicrobial Properties and Cellular Response in Physically Crosslinked Hyaluronic Acid/ε-Polylysine Hydrogels
The reduction of tissue cytotoxicity and the improvement of cell viability are of utmost significance, particularly in the realm of green chemistry. Despite substantial progress, the threat of local infections remains a concern. Therefore, hydrogel systems that provide mechanical support and a harmonious balance between antimicrobial efficacy and cell viability are greatly needed. Our study explores the preparation of physically crosslinked, injectable, and antimicrobial hydrogels using biocompatible hyaluronic acid (HA) and antimicrobial ε-polylysine (ε-PL) in different weight ratios (10 wt% to 90 wt%). The crosslinking was achieved by forming a polyelectrolyte complex between HA and ε-PL. The influence of HA content on the resulting HA/ε-PL hydrogel physicochemical, mechanical, morphological, rheological, and antimicrobial properties was evaluated, followed by an inspection of their in vitro cytotoxicity and hemocompatibility. Within the study, injectable, self-healing HA/ε-PL hydrogels were developed. All hydrogels showed antimicrobial properties against S. aureus, P. aeruginosa, E. coli, and C. albicans, where HA/ε-PL 30:70 (wt%) composition reached nearly 100% killing efficiency. The antimicrobial activity was directly proportional to ε-PL content in the HA/ε-PL hydrogels. A decrease in ε-PL content led to a reduction of antimicrobial efficacy against S. aureus and C. albicans. Conversely, this decrease in ε-PL content in HA/ε-PL hydrogels was favourable for Balb/c 3T3 cells, leading to the cell viability of 152.57% for HA/ε-PL 70:30 and 142.67% for HA/ε-PL 80:20. The obtained results provide essential insights into the composition of the appropriate hydrogel systems able to provide not only mechanical support but also the antibacterial effect, which can offer opportunities for developing new, patient-safe, and environmentally friendly biomaterials.
Introduction
Regenerative medicine is an interdisciplinary field that aims to repair damaged tissues. Although many efforts have been devoted to new biomaterial development, most still require open surgeries for their application. Injectable biomaterials (for example, hydrogels) can replace traditional clinical practices while being applied by minimally invasive procedures [1,2]. To be injectable, the material must meet several essential requirements; primarily, it must be biocompatible. In other words, it must not cause negative cell response [1][2][3][4]. Furthermore, it should have appropriate mechanical properties, depending on the site of application of the material in the body [1,5], as well as the high capacity for binding biological fluids, while its degradation products must not cause inflammatory processes [1,4]. For the development of injectable hydrogels, it is possible to use different polymers such as polysaccharides (hyaluronic acid [6][7][8][9], chitosan [10][11][12], alginate [13,14]), polyaminoacids (ε-polylysine [15,16], polyaspartic acid [17]), as well as proteins (silk fibroin [18,19]). The advantages of using them are related not only to the approach of their implantation into the defect (deep and hidden anatomical sites can be filled with the material, thus irregular shape defects can be repaired) but also to the fact that they strongly influence the patient's recovery time (more minor scars, reduced risk of infection) and physiological level of discomfort during the post-operative period.
Hyaluronic acid (HA) is a natural polysaccharide that is a major component of the extracellular matrix and plays a crucial role in maintaining tissue viscoelasticity [20]. Viscous solutions, with a high molecular weight of HA (>500 kDa), have been used in an array of different wound treatments and for various tissues (e.g., treatment of skin burns). However, due to the undesirable material loss stemming from the minimal control over parameters (swelling, degradation, and mechanics), extensive use of HA solutions is limited in tissue engineering applications. To overcome these disadvantages, significant focus has been placed on developing HA hydrogels for surgical implantation using minimally invasive methods (MIM) [21]. Numerous chemical and physical crosslinking mechanisms have been studied to manufacture injectable HA hydrogels. Recently, Shangzhi Li et al. modified HA with dialdehyde and crosslinked it with cystamine dihydrochloride via the Schiff base crosslinking mechanism. Synthesized hydrogels could be used as drug carriers and scaffolds for bone tissue regeneration [22]. Another strategy for preparing HA hydrogels is to use different crosslinkers such as 1-ethyl-3- [3-dimethylaminopropyl] carbodiimide hydrochloride/N-Hydroxysuccinimide (EDC/NHS) [6,23], glutaraldehyde and 1,4-butanediol diglycidyl ether (BDDE) [24]. These crosslinking agents allow the forming of different covalent bonds (i.e., ether or amide bond) by crosslinking the two polymer chains, with the end goal of creating a hydrogel prompting in better stability in the physiological environment, more controllable biodegradation dynamics than the physically crosslinked hydrogels, and better mechanical properties. Chemically crosslinked hydrogels are also formed by Michael type-addition, Schiff base formation, Diels-Alder "click" reaction, enzyme-induced crosslink, photo-polymerization, etc. On the other hand, crosslinking mechanism of physically crosslinked hydrogels is based on the intramolecular forces capable of forming non-covalent crosslinks, namely electrostatic/ionic interactions, hydrogen bonds, hydrophobic interactions, π-π interactions, etc. The key advantage of the physically crosslinked hydrogels is their biomedical safety, simplicity of fabrication, and prevailing biocompatibility. However, they present low mechanical properties and limited controllability of biodegradation [25].
ε-Polylysine (ε-PL) is a natural polyaminoacid biopolymer synthesized from the Streptomyces albulus. Because of its antibacterial nature, in recent years, ε-PL has become more interesting for scientists in biomedicine. Moreover, it is non-toxic [26,27], watersoluble [26,27], biocompatible [27], and stable in high temperatures [26,27], which is an essential aspect for choosing the sterilization method. Antimicrobial properties of ε-PL have been explained as electrostatic interactions between positively charged amino (-NH 3 + ) groups in polylysine structure and microorganism cell surface [27,28], thus causing damage to the microbial cell membrane [29]. The main factor of polylysine antibacterial activity is the count of protonated NH 3 + groups or the number of L-lysine residues in polylysine structure [30]. D. Alkekhia and A. Shukla [30] showed the molecular weight's influence on polylysine's antibacterial properties. By increasing the molecular weight, polylysine's antibacterial properties increased, and the minimum inhibitory concentration of S.aureus decreased [30]. Presently, ε-PL and its derivatives are being used in cancer therapy due to their high stability and drug encapsulation efficiency [31], mainly to make tissue adhesives [32] and bacteria-infected wound treatments [33,34].
As mentioned, hyaluronic acid in the MIM can ensure high water binding capacity while providing the material with the needed mechanical strength [35]. However, fur-ther addition of ε-polylysine could provide a highly desired local antibacterial effect [28]. Therefore, HA/ε-PL composites could be used as carriers for local drug delivery and as matrices for bone and cartilage regeneration. Due to the importance of the overall principles of green chemistry (i.e., less hazardous chemical synthesis and designing safer chemicals) producing solvents and auxiliaries is of paramount importance in tissue engineering. [36,37]. Recent studies have shown the high biocompatibility of the biomaterials made of both ε-PL and HA [20,33]. For example, the combination of HA and ε-PL into polyelectrolyte multilayer film has been demonstrated to be a promising approach for antibacterial coatings and local drug delivery [36,38]. Linkages between one carboxylic acid group per disaccharide unit of HA, and one amino group per monomer repeat unit of ε-PL, are being formed via electrostatic interactions, and nano-, micro-, or macro hydrogels can be obtained [37,39]. Physically crosslinked HA/ε-PL nanohydrogels also have potential in drug delivery. Recently, Amato et al. [37] designed physically crosslinked nanogels made of HA/ε-PL and berberine for wound treatments. With the simple mixing technology of HA and ε-PL solutions, it is possible to form drug-loaded nanocarriers with 200 nm dimensions and fast drug release capability (50% within the first 45 min, followed by the sustained release over 24 h). Lawanprasert et al. [40] investigated the encapsulation of small drug molecules and proteins in the HA/ε-PL nanogels for local drug delivery. These novel nanogels have demonstrated high biocompatibility with low systemic toxicity risk. Furthermore, Liu et al. [20] prepared dual-crosslinked hydrogels formed via Schiff base and enzymatic catalytic reaction, using oxidized HA and ε-PL, for the wound treatments. Histological studies indicated that advanced hydrogels are antibacterial-they effectively kill bacteria in the wound and promote wound healing (high cell viability >90% on the third day) [20]. Salma-Ancane et al. [41] compared three different HA and ε-PL ratios (HA to ε-PL mass ratio of 40:60 wt%, 50:50 wt%, 60:40 wt%) of physically and chemically cross-linked hydrogels to show the link between the different cross-linking mechanisms and the hydrogel chemical, physical, and biological properties. In this study, physically crosslinked hydrogels confirmed an excellent antimicrobial effect against the gram-negative bacteria Escherichia coli (E. coli), while cell-viability studies indicated pronounced cytotoxic effects of all three physically crosslinked hydrogel compositions [41].
In the current research, the main goal was to prepare physically crosslinked hydrogels with a decreased amount of ε-PL and to answer the most important question: does the newly formed range of compositions still provide the antimicrobial effect while preserving favourable conditions for in vitro cell viability? Using the novel green chemistry approach, HA/ε-PL hydrogels have been physically crosslinked with ε-PL content of 30 wt%, 20 wt% and 10 wt% and subsequently analyzed. The use of safe methods and materials complying the aforementioned principles of green chemistry approach, to eliminate the high risk of tissue cytotoxicity and increase cell viability were one of the priorities. Moreover, we have gone one step further from the current literature by analysing an abundance of different ratios of HA and ε-PL (HA to ε-PL mass ratio from 10:90 wt% to 90:10 wt%) and by evaluating the antimicrobial activity of all the compositions against E. coli, S. aureus, P. aeruginosa, and C. albicans. Furthermore, the influence of ε-PL content in hydrogels on their in vitro biocompatibility with Balb/c 3T3 cells was evaluated. Within the study, we have successfully shown the indispensable information of the effect of HA content and its interaction with ε-PL, on the set of presented properties. The results authenticated specifically designed hydrogels that are biocompatible, antimicrobial and have good mechanical properties while being synthesised without chemical cross-linking agents, special conditions or modifications. Moreover, a set of these new hydrogels have a great potential in minimally invasive surgical approach that could minimize local infection risks. supplied cell viability dyes, SYTO 9 and propidium iodide (S34854 and R37108). Deionised water (DI) was used throughout the study.
Preparation of HA/ε-PL Hydrogels
HA/ε-PL hydrogels, with different HA to ε-PL weight ratios (wt%) (from 90:10 to 10:90), were prepared by physically crosslinking two biopolymers in an aqueous medium. The solid to liquid phase ratio remained constant at 1:2.5 for all compositions. Immediately after preparation, hydrogels were used for injectability and rheological measurements. For all other experiments, hydrogels were formed in the stainless-steel moulds (h = 5 mm, d = 10 mm for assessment of biological properties and h = 10 mm, d = 10 mm for characterisation of chemical and physical properties) and left to crosslink for one hour at room temperature.
Sterilization of HA/ε-PL Hydrogels
Sterilisation was performed in an autoclave (ELARA11, Netherlands) at 105 • C for 4 min under 179.5 kPa. Sterilised samples were kept at 4 • C until further characterisation.
Swelling Behaviour
The swelling behaviour of lyophilised HA/ε-PL composites was evaluated in 20 mL of deionised water (DI) at 37 • C and 100 rpm. Freeze-dried samples were weighed and soaked in the DI. The swollen hydrogels were periodically taken out of DI and weighed after removing the excess water. Measurements were performed every hour for the first four hours and then every week for four weeks. The swelling ratio (SR) of hydrogels at different time intervals was calculated according to the following equation: where w d is the weight of the freeze-dried hydrogel sample, and w s is the weight of the swollen hydrogel sample. The results are shown as an average value ± standard deviation from three replicates of each prepared hydrogel composition.
Gel Fraction
The gel fraction (GF) of lyophilised samples was determined by measuring their insoluble part. Samples were immersed in 200 mL of deionised water (DI) at 37 • C and 100 rpm for 48 h. Afterwards, samples were removed from DI, lyophilised, and weighed again. The gel fraction was calculated as follows: where w d is the weight of the freeze-dried hydrogel sample and w i is the weight of freeze-dried sample after swelling in DI water. The results are shown as an average value ± standard deviation from three replicates of each prepared hydrogel composition.
Injection Force
The injectability of prepared hydrogels was determined using self-designed injectability equipment (Supplementary Material Figure S1). Tinius Olsen 25 ST (Horsham, PA, USA), a multifunctional, mechanical materials testing machine was used in a compression mode with a load cell of 5 kN. 1.5 g of hydrogel was placed in a 5 mL Luer-lock syringe. Injection force (IF) was measured at a rate of 1 mm/s through the syringe needle of 14 G (1.6 mm inner diameter and 70 mm in length). The results were shown as an average value ± standard deviation from 5 replicates.
Molecular Structure
The functional groups of HA, ε-PL and lyophilised HA/ε-PL hydrogel samples were identified by Fourier transform infrared spectroscopy (FT-IR). FT-IR spectra were obtained using the Varian 800 FT-IR spectrometer. Before measurements, HA/ε-PL lyophilized composites were ground to a powder composition. Three mg of sample and 300 mg of spectroscopic grade KBr were mixed using a ball mill (Mini-Mill PULVERISETTE 23 (FRITCH, Idar-Oberstein, Germany)). KBr and hydrogel pellets with a diameter of 13 mm were prepared using an isostatic press (P/O/Weber Laborpresstechnik, Remshalden, Germany) and analysed. FT-IR spectra were recorded at 30 scans/min, resolution of 4 cm −1 and a wavenumber range from 4000 cm −1 to 400 cm −1 . Before each measurement, the background air spectrum was obtained. The acquired spectrum was normalized using Origin 2018. Normalisation was optimised with a default setting of the software algorithm (range from 0 to 1 corresponding to minimum and maximum values of raw absorbance intensities, respectively). Normalised data were used to determine the ratio of NH 3 + /NH 2 , indicatively evaluating the overall charge status of prepared HA/ε-PL composites [41].
Morphology
Tescan Mira\LMU (Czech Republic) scanning electron microscope was used to examine the surface morphology and the inner structure of prepared HA/ε-PL hydrogels at an acceleration voltage of 10 kV. The hydrogel samples were fixed on aluminium pin stubs by using double-sided adhesive carbon tape. Each sample was sputter coated with a 15 nm thin gold layer before imaging using Emitech K550X (Quorum Technologies, Ashford, Kent, UK) sputter coater. The size of pores was computed manually from 25 pores per sample on several randomly chosen collected SEM images. The shape of the pores was approximated with ellipses by using two radii (Y as the longest and representing the meridian; X as the shortest and representing the equatorial axis) being determined for each pore using the imageJ© program.
Scanco Medical µCT50 (Bassersdorf, Switzerland), micro-computer tomography was used to examine the surface morphology and the inner structure of prepared HA/ε-PL hydrogels. Computed tomography images were acquired at 70 kVp and 114 µA. The aluminium filter of 0.5 mm was used for all hydrogel compositions. Hydrogel's porosity in the sample was calculated based on the scanned cross-sections at the lower, middle and upper area by using the ratio of the object to the total volume of the sample. The results were shown as an average value ± standard deviation from 3 replicates.
Oscillatory Rheology
Rheological properties of HA/ε-PL hydrogels were determined in oscillation mode on the HR20 Discovery Hybrid rheometer from TA Instruments (New Castle, USA). Rheometer was equipped with a parallel plate geometry of 20 mm. 1 mm of measuring gap was used for all HA/ε-PL ratios. Samples were prepared as described above (see Section 2.2) and immediately placed on the bottom plate with a temperature of 25 • C. A layer of silicone oil was applied to the edges of the sample to reduce evaporation. Before each measurement, hydrogel samples were equilibrated for 120 s. A frequency sweep test was carried out by subjecting the sample to 1% strain within the frequency range from 1 to 100 Hz. The storage modulus at 1 Hz was chosen to determine the stiffness of prepared hydrogel samples. An amplitude sweep test was conducted by exposing the hydrogels to a frequency of 1 Hz within the strain range of 0.01% to 100%. A time sweep test was used to determine the point at which hydrogels crosslink. The appropriate parameters from the frequency (linear viscoelastic region) of this measurement were selected from frequency (1 Hz) and amplitude sweep (1%) tests. Time sweeps were performed by subjecting the sample to the frequency of 1 Hz and strain of 1% for 300 s. The self-healing potential of prepared HA/ε-PL hydrogels was determined by cutting the sample in two pieces and leaving it to crosslink for one hour at room temperature. After one hour, the time sweep test was performed, and the obtained results were compared with those previously measured for non-cut samples. To visualise the self-healing process, one hydrogel sample was coloured with food colour, and one was uncoloured. Both samples were cut into two pieces and placed together. The self-healing process was observed at different time points (20 min, 1 h, 3 h, 14 h). The recovery properties of the prepared hydrogel samples were determined by changing the strain from 1% to 100% for three cycles at a constant frequency of 1 Hz. All experiments were performed in triplicate, and the results were presented as an average ± standard deviation.
In Vitro Antimicrobial Characterisation
Antimicrobial properties of prepared HA/ε-PL hydrogels against diverse microorganisms were investigated by the antimicrobial activity evaluation in microbial suspension and zone-of-inhibition test according to a modified ASTM E2149-10 method ("Standard Test Method for Determining the Antimicrobial Activity of Immobilized Antimicrobial Agents Under Dynamic Contact Conditions"). In this study, we used gram-positive bacteria Staphylococcus aureus (S. aureus), gram-negative bacteria Pseudomonas aeruginosa (P. aeruginosa) and Escherichia coli (E. coli) and yeast Candida albicans (C. albicans). Fresh 24 h shaken cultures of E. coli MSCL 332 (ATCC 25922), S. aureus MSCL 334 (ATCC 6538P) and P. aeruginosa MSCL 331 (ATCC 9027), grown in sterile Tryptic soy broth at 36 ± 1 • C, were used in the experiments. Yeast-like fungus C. albicans MSCL 378 (ATCC 10261) was grown in sterile Malt extract broth at 36 ± 1 • C for 48 h. Cultures were diluted with sterile water until they reached the absorbance of A 540 = 0.16 ± 0.02, which corresponded to approximately 10 6 CFU mL −1 .
Agar Well Diffusion Method
Plate count agar (Bio-Rad, Marnes-la-Coquette, France) plates for bacteria and Malt extract agar (Merck Millipore, Darmstadt, Germany) plates for fungi were inoculated with a confluent lawn of microorganisms. 8 mm diameter wells were bored in the agar medium, and HA/ε-PL samples (with a diameter of 7 mm) were placed in triplicate in agar wells. Gentamicin (KRKA, Slovenia; 10 mg/mL, 70 µL) was used as a positive control. Bacteria were cultivated on an agar medium at 36 ± 1 • C for 24 h, and C. albicans was cultivated at 36 ± 1 • C for 48 h. Sterile zones of inhibition were measured using a millimetre-scale ruler.
Dynamic Contact Test
HA/ε-PL hydrogel samples were placed in a 50 mL tube filled with 5 mL of microbial working suspension in potassium phosphate buffer. A series of dilutions were subsequently prepared from the inoculum tube that did not contain the HA/ε-PL hydrogel sample and inoculated into Petri dishes on agar media. The experiment was carried out in triplicate to confirm the initial microbial concentration in colony-forming units (CFU) mL −1 . All prepared samples were incubated at 35 ± 2 • C, shaking (200 rpm) for 1 h ± 5 min. Following the same procedure that was carried out for the sample without HA/ε-PL hydrogel, samples with the hydrogels were diluted in triplicate and inoculated into Petri dishes, with Plate count agar for bacteria and Malt extract agar for C. albicans to estimate the number of CFU mL −1 . Prepared Petri dishes were incubated at 35 ± 2 • C for 24 h. Antimicrobial properties were investigated according to a modified ASTM E2149-10 method. The mean CFU mL −1 and microbial reduction were calculated in the following manner: where A = CFU mL −1 for the tube containing HA/ε-PL hydrogel samples after 1 h contact time, and B = CFU mL −1 for not containing HA/ε-PL hydrogel samples after 1 h contact time. The results were shown as an average value ± standard deviation from 3 replicates for each hydrogel composition.
In Vitro Cytotoxicity Assay
BALB/c 3T3 cell line (ATCC) was used for cytotoxicity assays. Cells were grown in Dulbecco's modified Eagle's medium (DMEM, Sigma, Saint Louis, MO, USA), supplemented with 10% (v/v) calf serum (Sigma, Saint Louis, MO, USA) and 100 µg mL −1 of streptomycin, and 100 µg mL −1 of penicillin, at 37 • C within a humidified 5% CO 2 atmosphere. Cells were detached and passaged using 0.25% (w/v) of trypsin/EDTA (Sigma, Saint Louis, MO, USA). For all experiments, cell seeding density was 3 × 10 4 cells cm −2 . Throughout the cytotoxicity testing, previously reported data [41] were complemented with additional replicates and compared with new compositions of HA/-PL hydrogels.
Extract Test
An extract test was performed to assess the potentially toxic effects of hydrogel components. BALB/c 3T3 cells were seeded in 96-well plates and incubated for 24 h to allow them to attach and start proliferating. Hydrogel samples were washed with phosphate buffered saline (PBS, pH 7.4) and extracted with cell cultivation media for 24 h at 37 • C. The biomaterial to media ratio used for extraction was 1:5 (0.2 g per 1 mL media), except for HA/ε-PL 80:20 and HA/ε-PL 90:10 ratios where they were 1:20 and 1:50, due to swelling of hyaluronic acid during extraction period. Extracts were then collected and diluted with cultivation media to 12.5%, 25% and 50% (v/v). 0.5 mL of diluted extract was added to the corresponding cell culture wells of a 24-well plate and incubated for 24 h. Cells incubated without hydrogel samples were used as untreated controls, while sodium dodecyl sulphate was used as the positive (cytotoxic) control. Phase contrast microscopy was used to monitor cell cultures and changes in the cell confluence and morphology. After incubation, cultivation media was removed, and cells were washed with PBS. Neutral red (Sigma, Saint Louis, MO, USA) working solution (25 µg mL −1 ) in 5% serum containing cultivation media was added, and cell cultures were incubated for 3 h at 37 • C and 5% CO 2 . Neutral red media was removed, and 1% glacial acetic acid/50% ethanol solution was added to extract the accumulated dye in the viable cells. After 20 min of incubation at room temperature, absorption at 540 nm was measured. Changes in the cell viability were calculated using the following equation: Five replicate samples of extracts from every physically crosslinked hydrogel composition and four replicates of extracts from each chemically crosslinked hydrogel composition were analysed. The results were represented as an average value ± standard deviation. Balb/c 3T3 cells were seeded in 6-well plates and incubated for 24 h. Hydrogel samples (100 mg per well) were washed three times with PBS and preincubated for 1 h in 5 mL of cell cultivation media. Subsequently, hydrogels were transferred to cell cultures and incubated for 24 h. After incubation, media and hydrogel samples were removed, and cells were washed with PBS. A neutral red solution was added, and its uptake was measured as described before (see Section 2.7.1). Cells incubated without samples were used as untreated controls, while sodium dodecyl sulphate was used as the positive (cytotoxic) control. Phase contrast microscopy was used to monitor the cell cultures and changes in cell confluence and morphology. Five replicate samples from each prepared hydrogel composition were analysed, and results were represented as an average value ± standard deviation.
Cell Viability Staining
Balb/c 3T3 cells were seeded on the 4-well chamber slides and incubated for 24 h. Hydrogel samples (30 mg per chamber) were prepared as described before (see Section 2.7.2) and added to the cell cultures. After 24 h of incubation, the media was discarded, and cells were washed with PBS. A mixture of fluorescent cell viability dyes (5 µM SYTO 9 dye and 30 µM propidium iodide) was added to the cells and incubated for 15 min at 37 • C, 5% CO 2 in the dark. After incubation, cells were washed with PBS and imaged using Leica DMI400B inverted fluorescence microscope.
Hemocompatibility of Hydrogels
A hemolysis test was performed to assess the hemocompatibility of hydrogels in a direct contact test. Blood from healthy donors was collected in Monovette vacutainers containing EDTA. Blood was diluted with 0.9% sodium chloride solution (4:5 ratio by volume). Hydrogel samples were washed three times with PBS (pH 7.4) and added to 15 mL tubes containing fresh 9.8 mL PBS, which were then incubated at 37 • C and 5% CO 2 for 30 min. In the case of hydrogel extracts, 9.8 mL of 20% extracts were used. Extracts were prepared as described in Section 2.7.1. 0.2 mL of diluted blood was added to each tube and incubated at 37 • C and 5% CO 2 for 1 h and 6 h. PBS was used as a negative control and deionised water as a positive control. After incubation, tubes were centrifuged at 2000 rpm for 5 min, the supernatants were collected, and the absorbance was measured at a wavelength of 545 nm in a Tecan Infinite Pro 200 multimode plate reader.
The hemolytic ratio (HR) was calculated using the following equation: Hemocompatibility studies were performed in accordance with the approval of the Committee of research ethics of the Institute of Cardiology and Regenerative Medicine, University of Latvia, No 3/2021.
Statistical Analysis
Obtained results were presented as an average value ± standard deviation of at least three replicates. Statistical significance between all ratios of prepared composition was determined using an unpaired Student's t-test with a significant level at 95% (p < 0.05). One-way ANOVA with Tukey HSD (Honestly Significant Difference) post-hoc test was used to evaluate the statistical significance of in vitro tests. Significance was found if the p-value was less than 0.05 (p < 0.05).
Influence of HA/ε-PL Ratio on Hydrogel Swelling Behaviour and Gel Fraction
The most likely binding mechanism of the prepared hydrogels is the electrostatic interaction between the carboxylate group in the HA structure and the amino group in the ε-PL structure, as well as inter-molecular hydrogen bonds in the HA structure [6,41]. Since these are physical linkages and hydrogels are non-covalently crosslinked, the interaction could easily be disturbed by increasing the temperature, changing the pH of a surrounding medium, or mechanically affecting the hydrogels [42]. In order to evaluate the stability of prepared hydrogels their swelling behaviour, as well as the gel fraction were examined. HA/ε-PL hydrogel series with HA content from 10 wt% up to 90 wt% were prepared and compared ( Figure 1). By increasing the HA content, it was observed that hydrogels became stiffer and less transparent. Compositions with HA content of 10 wt% and 20 wt%, after one hour of crosslinking, did not retain the cylindrical form and could be easily deformed ( Figure 1A). Gel formation occurs also for HA/PL = 10/90 according to Figure 1B. However, for this wt% ratio, either the gel formation point is at a time of less than 0.1-1 min, or the system initially has G' > G" due to the macromolecular entanglements of hyaluronic acid ( Figure 1B). Most probably, the HA/PL = 10/90 does not keep its shape because it either has no yield stress or has low yield stress. The cross-over point is usually considered as the gel point or hydrogel formation point [43,44]. It was also found that the cross-over point of G' and G" curves for hydrogels containing 20 wt% of HA could be observed, indicating that HA/ε-PL hydrogels require at least 20 wt% of HA to form physical crosslinks. Furthermore, swelling experiments showed that hydrogels containing 10 wt% of HA, decomposed in the aqueous medium already after two hours, while hydrogels containing 20 wt% of HA lost their cohesion and completely decomposed after 72 h ( Figure 1C). Thus, hydrogels with HA content from 30 wt% up to 90 wt% were selected for further characterization.
Water and different medium absorption is one of the main properties to describe the behaviour of hydrogels in biological conditions. It includes nutrient supply, oxygen transmission, as well as the removal of cellular waste products from the hydrogels [45]. The water absorption kinetics of lyophilized hydrogels were determined for 672 h. Obtained results revealed that even though the samples with HA content of 90 wt% were selected for further tests, they decomposed after three hours, which could be explained by the fact that the hydrogels did not contain enough ε-PL amino groups to form sufficient physical crosslinks that are necessary for stable composite preparation. Hydrogels containing 30 wt% of HA reached the maximum swelling degree of 201.9 ± 15.0% in the first hour and retained it for up to four hours. Between 4 h and 72 h partial dissolution occurred (most probably some part of ε-PL dissolved) and the swelling degree declined. However, after 72 h samples reached again the swelling equilibrium, with the swelling degree equal to 135.2 ± 7.6%. Hydrogels containing 40 wt% of HA slowly swelled and reached the equilibrium in 504 h where the swelling degree was equal to 207.3 ± 5.9%. The equilibrium of swollen hydrogels, with HA content from 50 to 80 wt%, was reached after 168 h, where the swelling degree value for HA/ε-PL 50:50 composite was 219.2 ± 2.9%, for HA/ε-PL 60:40-241.9 ± 2.3%, for HA/ε-PL 70:30-262.6 ± 2.6%, and for HA/ε-PL 80:20-267.9 ± 11.3%. Since the water soaking capacity is over 200% these hydrogels may be described as superabsorbent hydrogels [46]. Hydrogel compositions in which HA content is at least 30 wt% retained water in its structure for more than one month, providing appropriate conditions for acute and chronic wound treatment [47]. The obtained gel fraction results showed that samples with HA content of 70 wt% exhibited the highest value of gel fraction (87.76 ± 1.09%) ( Figure 1D), indicating the highest crosslinking degree between HA and ε-PL. It was also observed that if HA content in hydrogels is <40 wt%, already 70% of prepared compositions dissolved after 48 h incubation in DI.
Hence, it can be concluded that the lowest amount of HA needed to form a stable HA/ε-PL hydrogel should be 40 wt%. Moreover, obtained results also revealed that, if HA content in the hydrogels exceeded 80 wt%, physical crosslinks in DI were disrupted within 3 h, indicating that an insufficient amount of ε-PL amino groups were introduced in the composite to form a stable hydrogel. Water and different medium absorption is one of the main properties to describe the behaviour of hydrogels in biological conditions. It includes nutrient supply, oxygen transmission, as well as the removal of cellular waste products from the hydrogels [45]. The water absorption kinetics of lyophilized hydrogels were determined for 672 h. Obtained results revealed that even though the samples with HA content of 90 wt% were selected for further tests, they decomposed after three hours, which could be explained by the fact that the hydrogels did not contain enough ε-PL amino groups to form sufficient physical crosslinks that are necessary for stable composite preparation. Hydrogels containing 30 wt% of HA reached the maximum swelling degree of 201.9 ± 15.0% in the first hour and retained it for up to four hours. Between 4 h and 72 h partial dissolution occurred (most probably some part of ε-PL dissolved) and the swelling degree declined. However, after 72 h samples reached again the swelling equilibrium, with the swelling degree equal to 135.2 ± 7.6%. Hydrogels containing 40 wt% of HA slowly swelled and reached the equilibrium in 504 h where the swelling degree was equal to 207.3 ± 5.9%. The equilibrium of swollen hydrogels, with HA content from 50 to 80 wt%, was reached after 168 h, where the swelling degree value for HA/ε-PL 50:50 composite was 219.2 ± 2.9%, for HA/ε-PL 60:40-241.9 ± 2.3%, for HA/ε-PL 70:30-262.6 ± 2.6%, and for HA/ε-PL 80:20-267.9 ± 11.3%. Since the water soaking capacity is over 200% these hydrogels may be described as superabsorbent hydrogels [46]. Hydrogel compositions in which HA content is at least 30 wt% retained water in its structure for more than one month, providing
Influence of the HA/ε-PL Ratio on Hydrogel Crosslinking Ability, Mechanical and Rheological Properties
To track the crosslinking process of HA/ε-PL compositions a time sweep test was performed. The obtained results (Figure 2A) showed that by increasing the HA content in the hydrogel composition, the value of storage modulus (G') increased, from which it can be concluded that the prepared hydrogels became stiffer, leading to higher mechanical properties [24,48,49]. Since the G', for compositions where HA ranged from 30 up to 90 wt%, was always greater than the value of the loss modulus (G"), it can be concluded that physical crosslinks were formed immediately after the liquid phase addition to the solid phase. Moreover, the time sweep test allows for estimating the potential self-healing ability. Physically crosslinked hydrogels could be characterized as self-healable or reversible hydrogel systems [42,50]. This aspect is very important for biomaterials due to the constant dynamic environment in the body. In the treatment of the wound, it is necessary that the material recovers itself and regains its function after damage or mechanical disruption [47]. To illustrate the self-healing ability of prepared samples, hydrogels were cut into two pieces, one hydrogel part was coloured in blue, while the other part was left uncoloured. Both parts were placed together ( Figure 2G) and left to crosslink. After three hours the cut was less visible, but after 14 h there was no sign of the cut and the hydrogel was completely recovered, indicating that such a hydrogel system could be able to heal and maintain its functions in a dynamic environment of the body. The obtained results of the modulus G' had the same value as it had before the cut for all HA/ε-PL hydrogels compositions, which proved the high self-healing potential of samples (Figure 2A,B). The self-healing ability of hydrogels could be explained by non-covalent interactions in hydrogel structure. Non-covalent links, in this case, electrostatic interaction and hydrogen bonds, are highly flexible and can be easily destroyed and reconstructed [51]. In order to determine the recovery properties of prepared HA/ε-PL hydrogel samples, the continuous step-strain method was used ( Figure 2F). As the strain value became equal to 100%, G' rapidly decreased approaching the value of G", indicating that the hydrogel structure was destroyed. When the strain value was decreased to 1%, the value of G' returned to the original value. It was found that hydrogel samples could recover their own structure for at least three cycles, indicating self-healing capability. To determine the mechanical properties of prepared HA/ε-PL hydrogel samples, a strain sweep test was executed ( Figure 2D). All hydrogel series showed an obvious linear viscoelastic region (LVR) up to the ε ≈ 10% strain, which is shown as a plateau region in the graph. Surprisingly, by increasing the HA content in the HA/ε-PL hydrogels, the G' and G" cross-over point, has decreased from 69.8% in the case of HA/ε-PL 30:70 down to 22.4% in the case of HA/ε-PL 90:10, indicating reduced durability to gel-liquid transition. Moreover, after reaching the cross-over point hydrogels started to behave as fluid-like substances, indicating that the crosslinked network was disrupted. Furthermore, the cross-over point modulus has increased by increasing the HA content in the hydrogel composition, revealing that the hyaluronic acid content has a dominant impact on the mechanical properties and stability of prepared HA/ε-PL hydrogel systems. By increasing the hyaluronic acid content in the prepared samples, the hydrogels were less stable in higher deformation values, but the mechanical properties were higher, as shown by the module values.
To describe the hydrogel behaviour after their injection into the living tissues, a frequency sweep test was performed ( Figure 2D). Frequency sweep test was carried out at 1 Hz, based on the amplitude sweep experiments. Obtained results showed that G' values were always greater than G" values, indicating that all tested hydrogels have solid-like structure. Furthermore, obtained tan δ values for all HA/ε-PL hydrogel series were always less than 1, at all frequencies (tan δ < 1), indicating the dominance of elastic properties [52]. Moreover, G' and G" curve cross-point was not observed, indicating that all tested ratios of HA/ε-PL hydrogels have a high cross-linking degree at all frequency values. By increasing HA content in the hydrogels, G' values increased, indicating a less viscous flow behaviour [53]. A frequency sweep test was also used to determine the stiffness of prepared HA/ε-PL hydrogels ( Figure 2E). It was found that by increasing HA content in the HA/ε-PL hydrogels, storage modulus values increased from 9.93 ± 0.89 kPa in case of HA:ε-PL 30:70, up to 61.4 ± 9.72 kPa in case of HA:ε-PL 90:10, showing stiffer hydrogel structure and thus higher mechanical properties. Moreover, no significant (p > 0.05) differences were found between the stiffness results for hydrogels with HA content of 70, 80 and 90 wt%. These findings go well together with observed values in gel fraction experiments in which the highest crosslinking degree was detected for samples with HA content of 70 wt%. Finally, the range of stiffness values clearly showed that the developed hydrogels have potential applications in tissue engineering, as the modulus values found are similar to those of biological tissues [54]. Throughout the tests pure HA diluted in DI water in ratio 1:2.5 w/v was used as a control. To describe the hydrogel behaviour after their injection into the living tissues, a frequency sweep test was performed ( Figure 2D). Frequency sweep test was carried out at 1 Hz, based on the amplitude sweep experiments. Obtained results showed that G' values were always greater than G″ values, indicating that all tested hydrogels have solidlike structure. Furthermore, obtained tan δ values for all HA/ε-PL hydrogel series were always less than 1, at all frequencies (tan δ < 1), indicating the dominance of elastic properties [52]. Moreover, G' and G″ curve cross-point was not observed, indicating that all tested ratios of HA/ε-PL hydrogels have a high cross-linking degree at all frequency
Injection Force Measurements
An injection force (IF) of 79.8 N is considered the maximum force that can be generated by both genders (shown as a red horizontal line in Figure 3A) [55]. As there is no standard method for determining injectability, it is possible to reduce the friction and the final IF value by varying the inner diameter and length of the needle (increased length and diameter lead to higher IF) [55][56][57]. On the other hand, reduced inner diameter of the syringe would increase the pressure at the same force and even more reduce the effective viscosity of the gel. Furthermore, the injection force value is also dependent on the viscosity [56] and density [55] of the material, as well as on the injection rate [56]. Obtained results revealed that by increasing the HA content in the prepared HA/ε-PL hydrogels the value of injection force increased from 42.1 ± 9.28 N in the case of HA/ε-PL 30:70 to 224 ± 76.8 in the case of HA/ε-PL 90:10 ( Figure 3A,B). These results once again confirmed the effects found when determining the gel fraction and swelling, indicating that the increase of HA content in hydrogels facilitates the formation of a higher crosslinked polymer network.
viscosity [56] and density [55] of the material, as well as on the injection rate [56]. Obtained results revealed that by increasing the HA content in the prepared HA/ε-PL hydrogels the value of injection force increased from 42.1 ± 9.28 N in the case of HA/ε-PL 30:70 to 224 ± 76.8 in the case of HA/ε-PL 90:10 ( Figure 3A,B). These results once again confirmed the effects found when determining the gel fraction and swelling, indicating that the increase of HA content in hydrogels facilitates the formation of a higher crosslinked polymer network.
Chemical Characterization
FT-IR spectroscopy analysis was performed in order to further identify the possible cross-linking mechanisms of HA and ε-PL. In the acquired FT-IR spectrum ( Figure 4) the absorption bands between 3246 cm −1 and 3081 cm −1 that correspond to NH2 stretching vibrations and protonated NH3 + , in the ε-PL structure were clearly discerned [58]. At the same time, the HA bands characteristic of amide II and amide III were observed at 1558
Chemical Characterization
FT-IR spectroscopy analysis was performed in order to further identify the possible cross-linking mechanisms of HA and ε-PL. In the acquired FT-IR spectrum ( Figure 4) the absorption bands between 3246 cm −1 and 3081 cm −1 that correspond to NH 2 stretching vibrations and protonated NH 3 + , in the ε-PL structure were clearly discerned [58]. At the same time, the HA bands characteristic of amide II and amide III were observed at 1558 cm −1 and 1389 cm −1 [59][60][61][62], while the bands at 1161 cm −1 , 1081 cm −1 , and 1037 cm −1 corresponding to the C-O-C, C-O, and C-OH groups, respectively, were also present [60,61]. Additional absorbance bands at 1669 cm −1 , 1631 cm −1 , and 1384 cm −1 can be assigned to (C=O) and (C-O) stretching vibrations of carboxyl groups of polylysine and hyaluronic acid [59,60]. However, the lack of absorbance band corresponding to the I amide bond (in the range of 1700 cm −1 -1600 cm −1 [63]) was an indicator that the crosslinking mechanism of hyaluronic acid and ε-polylysine was ionic/electrostatic interaction. Furthermore, the ratio of NH 3 + /NH 2 (the concentration of free NH 3 + groups in ε-PL side chains [41]) was calculated from normalized FT-IR spectrum of all HA/ε-PL ratios. By increasing the ε-PL content in the hydrogel samples, the ratio of NH 3 + /NH 2 increased from 0.66 in case of HA/ε-PL 90:10 up to 0.85 in case of HA/ε-PL 30:70. As it was foreseen that the antibacterial properties will be improved by increasing the ratio of NH 3 + /NH 2 [41], the results were in good agreement with antimicrobial test results that revealed the increase in antibacterial efficiency with an increase of ε-PL content in samples (see Section 3.6). [59,60]. However, the lack of absorbance band corresponding to the I amide bond (in the range of 1700 cm −1 -1600 cm −1 [63]) was an indicator that the crosslinking mechanism of hyaluronic acid and ε-polylysine was ionic/electrostatic interaction. Furthermore, the ratio of NH3 + /NH2 (the concentration of free NH3 + groups in ε-PL side chains [41]) was calculated from normalized FT-IR spectrum of all HA/ε-PL ratios. By increasing the ε-PL content in the hydrogel samples, the ratio of NH3 + /NH2 increased from 0.66 in case of HA/ε-PL 90:10 up to 0.85 in case of HA/ε-PL 30:70. As it was foreseen that the antibacterial properties will be improved by increasing the ratio of NH3 + /NH2 [41], the results were in good agreement with antimicrobial test results that revealed the increase in antibacterial efficiency with an increase of ε-PL content in samples (see Section 3.6).
Influence of HA/ε-PL Ratio on the Hydrogel Morphology
SEM was used to visualise the surface and cross-section morphology of lyophilized HA/ε-PL hydrogels (30:70 and 90:10 HA/ε-PL). HA/ε-PL composite cross-section analysis ( Figure 5A,B) revealed the interconnected porosity, with average pore size ranging from 85 µm up to 267 µm. The amount of hyaluronic acid in the prepared HA/ε-PL composite did not affect the cross-sectional microstructure. Considering that the pore size ranging from 70 µm up to 500 µm is found to be optimal for the tissue engineering [64], the prepared hydrogels can be used for cartilage, as well as for wound treatment. The surface and cross-section morphology of lyophilized hydrogel samples were also observed by µ-CT ( Figure 5C,D). Furthermore, µ-CT was used to determine the porosity of physically crosslinked hydrogels. Obtained results showed that with the change of HA/ε-PL ratio, the porosity of hydrogels has changed, but no coherent effect was observed (Table 1). By observing the cross-section of the samples, it was found that all HA/ε-PL hydrogel series have interconnected pores. The presence of interconnected pores is considered to be highly beneficial for nutritional supply in deeper scaffold areas, which are enabling cells to survive.
Antimicrobial Properties of HA/ε-PL Hydrogels
Antimicrobial properties of ɛ-PL against Gram-positive, Gram-negative bacteria and fungi have been previously proven [28,29]. Shortly, antimicrobial properties have been directly connected with the concentration of free amino (-NH3 + ) groups in the peptide structure. The positively charged NH3 + groups electrostatically interact with the negatively charged cell membrane [27,28], consequently causing damage to the cell membrane [29], which results in positive antimicrobial activity. Hence, the higher the concentration of protonated NH3 + groups in the polymer structure, the higher the antimicrobial activity.
All tested HA/ɛ-PL hydrogels demonstrated antimicrobial activity against Grampositive (S. aureus) and Gram-negative (E. coli, P. aeruginosa) bacteria. The diameter of inhibition zones ( Figure 6E-H and Supplementary material Figure S2) of all of the hydrogels ranged from 12.3 ± 0.6 mm to 24.0 ± 1.0 mm for bacteria. The diameter of inhibition zones with fungi C. albicans ranged from 12.3 ± 2.1 mm to 27.0 ± 1.0 mm for hydrogels in which HA content was from 30 wt% up to 70 wt% but samples with HA:ε-PL ratio of 80:20 and 90:10 did not show any zone of inhibition. During the experiment, it
Antimicrobial Properties of HA/ε-PL Hydrogels
Antimicrobial properties of ε-PL against Gram-positive, Gram-negative bacteria and fungi have been previously proven [28,29]. Shortly, antimicrobial properties have been directly connected with the concentration of free amino (-NH 3 + ) groups in the peptide structure. The positively charged NH 3 + groups electrostatically interact with the negatively charged cell membrane [27,28], consequently causing damage to the cell membrane [29], which results in positive antimicrobial activity. Hence, the higher the concentration of protonated NH 3 + groups in the polymer structure, the higher the antimicrobial activity. All tested HA/ε-PL hydrogels demonstrated antimicrobial activity against Grampositive (S. aureus) and Gram-negative (E. coli, P. aeruginosa) bacteria. The diameter of inhibition zones ( Figure 6E-H and Supplementary Material Figure S2) of all of the hydrogels ranged from 12.3 ± 0.6 mm to 24.0 ± 1.0 mm for bacteria. The diameter of inhibition zones with fungi C. albicans ranged from 12.3 ± 2.1 mm to 27.0 ± 1.0 mm for hydrogels in which HA content was from 30 wt% up to 70 wt% but samples with HA:ε-PL ratio of 80:20 and 90:10 did not show any zone of inhibition. During the experiment, it was found that by increasing the ε-PL concentration in prepared hydrogel samples, the attained antimicrobial activity was also increased. The aforementioned results coincided well with data observed in FT-IR analyses (see Section 3.4), as well as with the expectations found in the literature where the increased concentration of ε-PL (increased count of free NH 3 + ) provided better antimicrobial activity. Once all of the results from all studied microorganisms were summed up and compared, out of all tested samples, HA/ε-PL hydrogel with the ratio of 30:70 had the highest antimicrobial activity, while HA/ε-PL 90:10 had the lowest record. In order to test the bactericidal and fungicidal effect, the dynamic contact test was performed ( Figure 6A-D). After a contact time of 1 h, the killing efficiency of the most active sample, HA/ɛ-PL 30:70, reached 100.000% for Gram-negative bacteria E. coli and P. aeruginosa, while for S. aureus the achieved maximum was 99.964 ± 0.023%. The most negligible effect on all three bacteria used in the experiments was found for HA/ɛ-PL 70:30 (killing efficiency from 75.452% for S. aureus to 99.508% for P. aeruginosa and 99.632% for E. coli). When the killing efficiency of the hydrogels was tested on C. albicans, HA/ɛ-PL 30:70, reached the value of 99.341 ± 0.327%. Following these results, the effect of HA/ɛ-PL 70:30 was not significantly different from samples with higher ɛ-PL concentrations (p > 0.01) (killing efficiency of 73.721 ± 27.951%), possibly due to the data dispersion. In In order to test the bactericidal and fungicidal effect, the dynamic contact test was performed ( Figure 6A-D). After a contact time of 1 h, the killing efficiency of the most active sample, HA/ε-PL 30:70, reached 100.000% for Gram-negative bacteria E. coli and P. aeruginosa, while for S. aureus the achieved maximum was 99.964 ± 0.023%. The most negligible effect on all three bacteria used in the experiments was found for HA/ε-PL 70:30 (killing efficiency from 75.452% for S. aureus to 99.508% for P. aeruginosa and 99.632% for E. coli). When the killing efficiency of the hydrogels was tested on C. albicans, HA/ε-PL 30:70, reached the value of 99.341 ± 0.327%. Following these results, the effect of HA/ε-PL 70:30 was not significantly different from samples with higher ε-PL concentrations (p > 0.01) (killing efficiency of 73.721 ± 27.951%), possibly due to the data dispersion. In contrast, HA/ε-PL 80:20 and HA/ε-PL 90:10 demonstrated a fungicidal effect of only 8.462% and 11.615%, respectively, when used on C. albicans. The high antimicrobial activity of hydrogels has been ascribed to the similar composition of ε-PL. These results once again corroborated that the availability of charged amino (NH 3 + ) groups of ε-PL is the main reason for the antimicrobial efficacy. However, one exception has been observed, HA/ε-PL 90:10 was more effective than HA/ε-PL 70:30 under dynamic test conditions for Gram-positive bacteria S. aureus. The main reason for this effect can be accredited to the sole structure of the hydrogel (i.e., its partial dissolution due to the high content of HA and immediate bioavailability of ε-PL).
3.7.
In Vitro Cell Biocompatibility of HA/ε-PL Hydrogels 3.7.1. Biocompatibility and Hemocompatibility of Hydrogels In order to consider any material for a potential medical application, in vitro cell biocompatibility should be evaluated. In the direct cytotoxicity assay, hydrogel samples were tested against subconfluent (60-70%) Balb/c 3T3 cell culture and incubated for 24 h. It was noticed that viability and confluence of the cells was less than 60% in the presence of hydrogel samples containing >40 wt% of ε-PL, if compared to the control that was sodium dodecyl sulphate as positive (cytotoxic) control and cells incubated without samples as negative (biocompatible) control ( Figure 7A). For samples HA/ε-PL 30:70, HA/ε-PL 40:60 and HA/ε-PL 50:50 average cell viability was 37.64%, 36.67% and 40.75%, respectively. Whereas incubation with HA/ε-PL 60:40 sample resulted in 60.53% viability. The highest viability was observed for sample HA/ε-PL 70:30 (152.57%), followed by HA/ε-PL 80:20 (142.67%), while for the HA/ε-PL 90:10 viability was 79.98%. Therefore, following the cell viability trend it can be clearly seen that the increase of HA concentration (up until 70 wt%) in the prepared hydrogels led to higher cell viability. Differences in the results for high HA content samples (80 and 90 wt%) could be connected to the mechanical characteristics of the material-the higher the HA content, the less attractive the structure might be for cell growth. Similarly, the decrease of cell viability for samples containing ε-PL from 40 wt% up to 70 wt%, could be associated with polylysine bactericidal and fungicidal properties. Statistical significance between all ratios of prepared compositions was determined using ANOVA (p < 0.05), and the viability was statistically different, in comparison to the control, for all of the samples. Furthermore, it should be noted that samples HA/ε-PL 80:20 and HA/ε-PL 90:10 swelled considerably. In the case of HA/ε-PL 90:10 swelling in cultivation media might have also interfered with gas and nutrient exchange and thus resulted in slightly reduced cell viability. Furthermore, swelling of the HA also interfered with the extraction of neutral red accumulated in the viable cells, which led to higher variations in the results.
Additional to the cytotoxicity assessment with the direct contact test, hydrogel extracts were tested ( Figure S3). Hydrogels containing up to 70% HA were extracted with the cell cultivation media at the ratio of 1:5 and were tested at concentrations of 12.5%, 25% and 50%. Hydrogels that contained 80 wt% and 90 wt% HA were extracted at ratios 1:20 and 1:50, respectively, due to the swelling, and the extracts were tested at 100%, 50% and 25% concentrations. Results were in line with those of the direct contact test-hydrogels with higher ε-PL content have a more negative effect on cell viability. Extracts of HA/ε-PL 30:70 and HA/ε-PL 40:60, in a concentration of 50% were the most toxic ones and they reduced the cell viability by 57.21% and 60.05%, respectively ( Figure S3). When these extracts were further diluted, the cytotoxic effect decreased. In the case of HA/ε-PL 50:50 and HA/ε-PL 60:40, the highest concentration of extracts reduced viability by up to 40%. Differences between the three tested extract concentrations were less pronounced compared to the hydrogels with higher ε-PL content. The extract from the HA/ε-PL 70:30 did not reduce the cell viability by more than 20% in all of the tested concentrations. In the case of the extracts obtained from hydrogels that contained up to 70 wt% ε-PL, the changes in the cell viability were statistically significant compared to the control for all 50% and 25% extracts as well as for 12.5% HA/ε-PL 40:60 extract.
Differences in the results for high HA content samples (80 and 90 wt%) could be conn to the mechanical characteristics of the material-the higher the HA content, the attractive the structure might be for cell growth. Similarly, the decrease of cell viabilit samples containing ε-PL from 40 wt% up to 70 wt%, could be associated with polyl bactericidal and fungicidal properties. Statistical significance between all ratio prepared compositions was determined using ANOVA (p < 0.05), and the viability statistically different, in comparison to the control, for all of the samples. Furthermo should be noted that samples HA/ε-PL 80:20 and HA/ε-PL 90:10 swelled considerab the case of HA/ε-PL 90:10 swelling in cultivation media might have also interfered gas and nutrient exchange and thus resulted in slightly reduced cell viab Furthermore, swelling of the HA also interfered with the extraction of neutra accumulated in the viable cells, which led to higher variations in the results. Additional to the cytotoxicity assessment with the direct contact test, hyd extracts were tested ( Figure S3). Hydrogels containing up to 70% HA were extracted the cell cultivation media at the ratio of 1:5 and were tested at concentrations of 1 25% and 50%. Hydrogels that contained 80 wt% and 90 wt% HA were extracted at r 1:20 and 1:50, respectively, due to the swelling, and the extracts were tested at 100% and 25% concentrations. Results were in line with those of the direct contact t hydrogels with higher ɛ-PL content have a more negative effect on cell viability. Ext of HA/ε-PL 30:70 and HA/ε-PL 40:60, in a concentration of 50% were the most toxic and they reduced the cell viability by 57.21% and 60.05%, respectively ( Figure S3). W these extracts were further diluted, the cytotoxic effect decreased. In the case of HA It was also found that undiluted HA/ε-PL 80:20 extracts reduced the cell viability by 32.78% ( Figure S3). When diluted to 50% and 25%, viability was 75.43% and 90.16%, respectively. Reduction of the cell viability after the incubation with HA/ε-PL 80:20 100% and 50% extracts was statistically significant compared to the control. Extracts of HA/ε-PL 90:10 did not have any negative effects on cell viability.
Hemocompatibility is one of the main criteria for biomaterials that will be in direct contact with blood. During contact with the blood, the surface of the biomaterial should not cause adverse reactions, including coagulation, as well as activation or destruction of blood components such as platelets, leukocytes, erythrocytes etc. [65]. Obtained results showed ( Figure 7B,C) that all tested hydrogels did not cause hemolysis of human blood. For all tested samples, hemolysis ratio was below 1.5%. Additionally, to evaluate if prolonged exposure could lead to hemolysis, 6 h incubation period was tested. However, no increase in hemolytic ratio was observed. Nevertheless, the differences among the samples were observed. These results indicated that prepared hydrogels did not have lytic effects on red blood cells and that the negative effects of some specimens (on the viability of actively dividing cells as fibroblasts) might be explained by the interference of the hydrogel components on cellular metabolism. However, further studies are needed to fully claim the connections with the polymer structures.
Overall, the obtained results indicated that with increasing ε-PL content safety and biocompatibility of hydrogels decreased. We assume that cytotoxic effects of hydrogel extracts with high ε-PL content could be attributed to the toxicity of high leaking concentration of freely positively charged NH 3 + , which induces cell membrane damage.
Fluorescence Microscopy
Microscopy of the cell cultures incubated with the neutral red (prior to the dye extraction) substantiate the above-described effects on their viability ( Figure S4). High cell densities and accumulation of neutral red were observed for HA/ε-PL 90:10, HA/ε-PL 80:20 and HA/ε-PL 70:30, whereas incubation with HA/ε-PL 60:40 extracts resulted in lower densities of viable cells. In the presence of 50% extract of HA/ε-PL 70:30, cell viability was reduced more than in the case of 25% extract, but for the HA/ε-PL 80:20 sample in the presence of 100% extract, the density of the viable cells was high. However, these findings did not fully match the spectrophotometric readings of neutral red dye accumulation. This difference may be due to the interference from the leaked HA in the extraction media affecting the measurement of the dye. To further investigate the effects of hydrogels on the viability of mammalian cells, live-dead staining was performed. Hydrogel samples HA/ε-PL 70:30, HA/ε-PL 80:20 and HA/ε-PL 90:10 that produced the highest cell viability in the direct test were chosen for the live-dead staining. The results showed that cell confluences are similar to the control, with a slightly higher cell density in the presence of HA/ε-PL 80:20 ( Figure 8) and HA/ε-PL 90:10 samples ( Figure S5). The high density of the viable cells was observed after the incubation with all samples. However, a slightly higher number of dead cells was observed in the cell cultures incubated with HA/ε-PL 70:30 hydrogel (Figure 8). This contradicts the neutral red uptake measurements and could be explained by interference of the hydrogel with the neutral red dye extraction and absorption measurement that is most probably due to the swelling of the samples. These results emphasized the necessity for various complementing methodologies for a comprehensive characterization of hydrogels' impact on the cell viability and proliferation.
Conclusions
The results of this study have demonstrated the potential of using hyaluronic acid (HA) and ε-polylysine (ε-PL) to develop physically crosslinked hydrogels that are free from toxic crosslinking agents or residues. The impact of HA content on the properties of the hydrogels was evaluated using various chemical, physical, and biological characterization methods. The results showed that the HA content significantly influenced the swelling behaviour, gel fraction, injection ability, mechanical stability, cell response as well as antimicrobial activity of the hydrogels. The minimum amount of HA needed to create a stable HA/ε-PL hydrogel was 40 wt%, while increased HA content resulted in elevated mechanical properties, decreased hydrogel stability in higher deformation values and improved cell biocompatibility. The FT-IR data confirmed the electrostatic/ionic interaction between HA and ε-PL as the crosslinking mechanism. All HA/ε-PL hydrogels showed self-healing properties and exhibited antimicrobial properties against Gram-positive and Gram-negative bacteria, as well as fungi C. albicans. As the concentration of ε-PL increased, the antimicrobial properties of the HA/ε-PL hydrogels improved but concurrently resulted in decreased cell viability. While hydrogels
Conclusions
The results of this study have demonstrated the potential of using hyaluronic acid (HA) and ε-polylysine (ε-PL) to develop physically crosslinked hydrogels that are free from toxic crosslinking agents or residues. The impact of HA content on the properties of the hydrogels was evaluated using various chemical, physical, and biological characterization methods. The results showed that the HA content significantly influenced the swelling behaviour, gel fraction, injection ability, mechanical stability, cell response as well as antimicrobial activity of the hydrogels. The minimum amount of HA needed to create a stable HA/ε-PL hydrogel was 40 wt%, while increased HA content resulted in elevated mechanical properties, decreased hydrogel stability in higher deformation values and improved cell biocompatibility. The FT-IR data confirmed the electrostatic/ionic interaction between HA and ε-PL as the crosslinking mechanism. All HA/ε-PL hydrogels showed self-healing properties and exhibited antimicrobial properties against Gram-positive and Gram-negative bacteria, as well as fungi C. albicans. As the concentration of ε-PL increased, the antimicrobial properties of the HA/ε-PL hydrogels improved but concurrently resulted in decreased cell viability. While hydrogels containing 50 wt% or less of HA were deemed to possess the greatest potential as injectable biomaterials, the optimal balance between antimicrobial efficiency and cell viability was found in the HA/ε-PL 70:30 and HA/ε-PL 80:20 compositions. Finally, the obtained results provided the new perception into development of new, patient-safe, and environmentally friendly biomaterials with great potential in minimally invasive surgical procedures able to reduce local infection risks.
Funding:
The authors acknowledge financial support from the Latvian Council of Science project No. lzp-2019/1-0005 "Injectable in situ self-crosslinking composite hydrogels for bone tissue regeneration (iBone)". The authors also acknowledge the financial support for granting Open Access, the access to the infrastructure and expertise of the BBCE-Baltic Biomaterials Centre of Excellence (European Union's Horizon 2020 research and innovation programme under grant agreement No. 857287).
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author. | 14,378 | 2023-04-01T00:00:00.000 | [
"Materials Science",
"Biology",
"Engineering"
] |
Phonological Koinéization in Kathmandu Tibetan !
! Christopher Geissler 1 Yale University !!
!
This paper tests the new-dialect formation model of Peter Trudgill (1986Trudgill ( , 2000 inter alia) inter alia) by examining the Tibetan language spoken in the diaspora community of Kathmandu, Nepal.As with previous studies of this phenomenon, this is a community where speakers of many varieties of the language under study have come together in a region where that language was not previously spoken, and by now two generations have been born and raised in this new location.Phonological and lexical data was gathered in fieldwork conducted in the summer of 2016 in an attempt to capture the process of new-dialect formation in progress.
Trudgill's model predicts that the second generation born in the new region should exhibit both simplification, the failure of marked variants to transmit to subsequent generations, and focusing, the selection of particular variants by the community.While younger Tibetans (under age 30) in Kathmandu should belong to this generation, and they do indeed exhibit simplification of marked variants, evidence of focusing is not found.Indeed, both younger and older Diaspora-born speakers exhibit a comparable degree of diversity in their use of these variants, suggesting that the process of new-dialect formation, at least among Tibetans in Kathmandu, is proceeding more slowly than predicted.
! 2 New dialect formation ! 2.1 Trudgill's model Perhaps the most influential voice in the study of new dialect formation is that of Peter Trudgill, who has studied the development of a dialect in a community settled by speakers of several dialects in a place where the language had not previously been spoken.Drawing especially from research on New Zealand English, Trudgill (1986) proposed that such a new dialect develops predictably over the course of three generations.
The first generation consists of the original immigrants to the new location, who bring with them the speech of their home regions.As they interact with each other, some degree of accommodation takes place on the scale of individual conversations as well as over the course of speakers' lives (e.g.Giles and Powesland 1975 and subsequent work).These first-generation speakers have children, who grow up exposed to speakers of many dialects and lack a single dialect to copy.Thus, second-generation speakers display a wide range of linguistic features as different children select different variants from the diverse input.For New Zealand English, some of these speakers were recorded later in life in an oral history project; these recordings were acquired under the leadership of Elizabeth Gordon and now maintained by the Origins of New Zealand English (ONZE) project at the Department of Linguistics at the University of Canterbury, Christchurch.Trudgill et al. (2000) and Trudgill (2004) find that these speakers exhibit ideolects with features from many British dialects, but in a combination that matches no individual dialect.Additionally, these speakers display high intra-and inter-speaker variability, often producing several forms of a single feature as well as combinations of forms markedly different even from other speakers of their generation from the same small town.
Finally, Trudgill finds in the third generation, the second generation of native-born speakers, the emergence of a new dialect, "a stable, crystallized variety" (Trudgill 2004:307), a process he terms "focusing."Along the way, Trudgill (1986) describes the dialect as having passed through two major ! 1
Geissler
Phonological Koinéization in Kathmandu Tibetan processes: leveling and simplification .In leveling, marked variants are reduced as speakers choose socially 1 and/or structurally unmarked alternatives.Simplification, as defined here, involves an "increase in morphophonemic regularity" and an "increase in morphological and lexical transparency" (Trudgill 1986:103).In particular, following continued leveling processes at each generational stage, Trudgill notes that the koiné emerging in the third generation consists primarily of forms that are unmarked and common to many of the original input dialects.
In subsequent work, Trudgill (2000Trudgill ( , 2004) ) has stressed the role of a variant's frequency across the firstgeneration speakers in its survival to the koiné stage, in an explicitly deterministic manner.Thus the resulting forms were often those of Southeast England, the largest contributing dialect region, but other forms also survived when they were common to enough of the other dialects in the original mixture.Failing to find evidence that early Anglophone New Zealand colonists selected variants that indexed particular social identities, Trudgill asserts that this kind of straightforward majority-rules framework explains the development of New Zealand English.
Trudgill's theory of new dialect formation has proven popular among linguists studying dialect contact.Kerswill and Williams (2000, inter alia) largely follow Trudgill in their analysis of the changing speech of the English "New Town" of Milton Keynes from its expansion in 1967 to the 1990s.While this community was not strictly formed ex nihilo as Anglophone New Zealand was, the original small village of Milton Keynes formed only a small portion of the population of the city.In particular, the first two "Principles of Dialect Contact" developed by Kerswill and Williams (2000) follow straightforwardly from Trudgill's theory: "Majority forms found in the mix, rather than minority forms, win out," and "Marked regional forms are disfavored."Importantly, they also find confirmation of Trudgill's timeline, which they distill as "From initial diffusion, focusing takes place over one or two generations."In Milton Keynes, Kerswill and Williams find a small degree of focusing in the second generational phase--the first local-born children--but primarily observe focusing in the third generation, as Trudgill did in New Zealand.This finding qualifies Trudgill's theory by claiming that koineization can proceed somewhat more quickly, with focusing in the second generation.
Another study which draws heavily on Trudgill's model is that of Hornsby ( 2006), which studied changes in the regional French of the northern French town of Avion.There, with the growth of the town as a coal-mining center, a regional dialect developed that Hornsby interprets as a koiné of working-class Standard French and Picard, the formerly dominant language of the region.While often classed as different languages, Standard French and Picard are both langue d'oïl varieties, and so close that their interaction was one of dialect contact.Although Hornsby's interest lies primarily in the effect of changing social networks and more recent shifts toward Standard French rather than the timeframe of the development of Regional French in Avion, the approximate timeframe of 1890-1930 suggests an approximately generational scale to Trudgill's observations of New Zealand English.
Critiques of Trudgill
The koineization model of Trudgill (1986, inter alia) outlined in the previous section has also come under criticism, in particular for its claim that the characteristics of a new dialect are purely the result of the ratios of variants present in the input dialects.The only exception to this is forms that are especially marked, which usually do not survive; otherwise, no other social or linguistic factors factor into the development of a new dialect.Baxter et al. (2009) attack the claim of determinism with reference to Trudgill's own observation that New Zealand English had largely crystallized in three generations, as manifest in the speech of the second generation of settlers born in New Zealand.To do this, they created a mathematical model of speaker networks to estimate how long it would take for inter-speaker interaction to converge on a single, consistent form of a linguistic variable.This model of neutral interactor selection with a population the size of New Zealand requires "hundreds of generations" to come to completion, far longer than was observed in history.Indeed, follow-up work on mathematical models of linguistic change by Blythe and Croft (2012) indicates supports the notion that in order to achieve the S-shaped curve in variable frequency over time observed across sociolinguistic studies, "any theory of language change must put some kind of differential weighting of linguistic variants in a central role" (Blythe and Croft 2012:294).In other words, speakers necessarily exhibit asymmetric preference for one form of a linguistic variable over another, and change proceeds too quickly to rely on unweighted interactions across a social network.
This finding should come as no surprise, given the enormous amount of work that sociolinguists have put into learning how and why linguistic variables propagate through a population.To take only one ! 2 A third, reallocation, involves retaining a feature used as a social marker and reanalyzing it to mark a different social 1 category example, Labov (2006) investigated the way speakers chose forms for a single variable on the basis of a complex web of interacting social pressures, identities, and aspirations.Indeed, Eckert (2000) has found these dynamics play out in quite sophisticated ways in even small communities.Although Mufwene (2008) agrees with Trudgill that a simple drive to create a group identity through similar speech characteristics does not explain the formation of a new dialect, he does still maintain that the choice of variants in a contact situation is affect by weighting factors.Nevertheless, in the case of dialect contact, he stresses interspeaker accommodation and more deterministic factors, such as the Founder Principle (section 2.3, below).Still, the place of mechanistic vs. social factors in contact-induced changed remains an important question.
2.3
The founder principle The selection by speakers of particular variants from the set of many possibilities present in the preceding generation, a crucial aspect of Trudgill's (1986) theory, closely resembles the idea of a "feature pool" proposed by Mufwene (2001, inter alia), a fact that Mufwene ( 2008) himself has noted.Originally developed to model the formation of creole languages, this idea holds that speakers select the linguistic variants of their speech from the full range of input forms they are exposed to; this process is said to apply in all cases of linguistic evolution across generations, from the small-scale changes within a language to the more dramatic cases of creole formation.
Mufwene ( 2001) discusses a corollary to this idea, the Founder Principle, which is adapted from biological evolutionary theory .For a population with an initial founding population as well as subsequent Of approximately six million total speakers of Tibetan, around 150,000 currently live outside traditional Tibetan regions in a diaspora which began in 1959, following the Chinese conquest of Tibet.While several thousand Tibetans now live outside South Asia, most Tibetans in diaspora live in India, Nepal, and Bhutan.Their communities form a web of interconnected enclaves: formally-designated settlements, monastic institutions, boarding schools, and urban neighborhoods.While they often use the Tibetan language among themselves, Tibetans are often multilingual, regularly using the dominant languages of their host countries such as Nepali, Hindi, or English, and many also speak Chinese (Denwood 1999, Roemer 2008).Although spread over a geographically wide area, Tibetans in diaspora frequently move between these areas and continue vibrant use of the Tibetan language, thus defining a speech community separate from those inside China.
Over the past six decades, several waves of migrants have come from all Tibetan regions of China into diaspora.However, Roemer (2008) reports that 70% of the first wave of 85,000 Tibetans to arrive in India and Nepal in 1959-1962 came primarily from the central U-Tsang region.Of the remainder, 25% came from the eastern region of Kham, and only 5% from the northeastern region of Amdo.Subsequent rates of immigration varied over time, but generally more balanced proportions of Tibetans from the three regions.U-Tsang, however, remains prominent in diaspora, particularly due to the presence of this historic capital city, Lhasa, and since it was the region where the Dalai Lamas and many other elites traditionally lived.
! ! ! 3
Note that this differs from the "Founder Effect," also from evolutionary biology, whereby a daughter population can 2 differ from its source population because of the random effects of small groups of founders.The application of the Founder Effect to historical linguistics has been critiqued by Bowern (2011), and indeed the populations discussed in this paper have relatively large founding populations, making this sort of random drift less likely.
!
The three large regions of U-Tsang, Kham, and Amdo are broad cultural regions, and all are composed of several sub-dialects.Nevertheless, that these three do correspond to macro-dialectal regions has been upheld by modern linguists such as Denwood (1999), Tournadre and Dorje (2003), and Dondrup Tsering (2011).Following such consensus, this paper treats the three as source dialects from which the Tibetan variety spoken in diaspora developed.
Predictions for Diaspora Tibetan
According to the model of Trudgill (1986), the younger Tibetans raised in diaspora today--those born after about 1985--would correspond to the third generation of settlers, whose grandparents were the age of the founding diaspora population.Middle-aged Tibetans raised in diaspora would correspond to the second generation of settlers, the first raised in diaspora.These older speakers would be expected to exhibit a mix of features not corresponding to any particular Tibetan dialect, while the younger speakers should be well on their way to converging toward a new dialect unique to diaspora.
What would this new dialect look like?A deterministic theory that takes into account the Founder Principle would predict it to group with the U-Tsang dialects and to share features broadly common to many Tibetan dialects.In line with the notion of leveling, features closely associated with any particular regiolect should be absent; likewise, simplification predicts particularly complex forms or rules not to survive into the new dialect.Given the high social prestige of the Lhasa dialect, however, some features particularly characteristic of the Lhasa dialect may be apparent.This may be supported by the intuition common to many speakers interviewed that Tibetans in diaspora speak dbus skad `U-Tsang dialect' or lha sa skad `Lhasa dialect' (anonymous interviews, 2016).
!
Interviews for this study were conducted in the Kathmandu Valley, Nepal (henceforth, "Kathmandu") in the summer of 2016, one of several population centers of the Tibetan diaspora.Most interviews were conducted in the major Tibetan population center of Boudhanath, the neighborhood surrounding a major Buddhist pilgrimage site and home a substantial portion of Nepal's Tibetan community.This includes a number of monasteries and nunneries and two Tibetan settlements, as well as schools, shops, restaurants, ! 4 and small factories.Many Tibetans live, work, shop, attend school, worship, and engage in recreation in Boudhanath, which is near the edge of the urban agglomeration of Kathmandu; however, some Tibetans also live in other areas of the city and regularly travel around the city and valley.
As the author is obviously not a native speaker of Tibetan, all interviews were primarily conducted by one of two native speakers hired by the author, with the author present and offering occasional input or questions.As relative insiders, the interviewers stood a greater chance of eliciting naturalistic speech than the author alone, and helped mitigate against speaker accommodation to a clearly non-native speaker.Both interviewers were ethnically Tibetan, were born and raised in Nepal, and attended Tibetan schools through the "+2" (grade 12) level.One speaker was aged 23 and male, the other aged 19 and female; each conducted approximately half of the interviews.
Speakers to interview were identified through the author's and interviewer's social networks, largely friends-of-friends and extended family members.In several cases, the author and an interviewer visited a monastic institution, retirement home, or school following previous contact by the interviewer; once there, several residents, students, and/or staff were referred to be interviewed by members of that institution.One school, one retirement home, and seven monastic institutions in Kathmandu proper and the nearby town of Pharping were visited in this manner.The remainder of the interviews were conducted at the home of the speaker, at an interviewer's home, or at a neighboring monastery.Familiar locations and acquaintance relationships were used to elicit trust and comfort despite the presence of recording equipment and an unfamiliar foreign researcher.
Each interview began with speakers reading from a wordlist or, for the small number of speakers who were non-literate or had vision difficulties, equivalent lexical items were elicited by pictures in a slideshow shown on a laptop computer.This was followed by a choice task between light verbs (see section 4.1), a short reading passage, elicitation from storyboards, and finally a period of free conversation or narration according to the interests of the speaker and interviewer, but often involving discussion of the speaker's language attitudes.The speaker was provided with a paper copy of the wordlist, word choice task, and reading passage; the interviewer also asked for the speaker's preference for each item in the word choice.Interviews took between 20-60 minutes.Recordings were made on a Zoom H4N recorder with an Audio-Technica ATM73a headworn microphone and an Audio-Technica AT202 microphone on a small tripod.
Sound files were edited and reviewed in Praat (Boersma and Weemick 2011); while acoustic measurements were made, the categorization of speaker variables was made by the judgement of the author.An attempt was made to identify admixture using STRUCTURE (Pritchard et al 2000), while NeighborNets were generated using SplitsTree (Huson and Bryant 2006).
In total, 73 speakers were interviewed.Of these, 43 were male (of whom 14 were monastic) and 29 were female (of whom 6 were monastic).Basic demographic questions asked of speakers included what regions they were born in and had lived in, their educational history, and what languages they spoke.Before each interview, speakers were told they would remain anonymous, and in most cases the author did not know the speaker's name.For statistical purposes, speakers were classified by gender, and the region where they had spent the substantial majority of childhood and teenage years, and for speakers raised in diaspora, aged 18-30 or over 40.An "other" category used for certain tests consists of five speakers from ethnically-Tibetan regions outside Tibet or the Diaspora communities, four speakers in their thirties, and seven who moved between regions such that they could not be said to have been raised in either diaspora or one of the major Tibetan regions.The resulting demographic breakdown is represented below:
! !
Note that, because several interviews were not complete, full data was not able to be elicited or analyzed for each speaker.The two rightmost columns in the above table show how many speakers' data of each category was analyzed; phonological data from 67 speakers was analyzed, as was verb-choice data from 60 speakers.!4.1 Features A range of phonological and lexical features were chosen in order to capture a general portrait of the state of each speaker's ideolect.Of these, eight were phonological variables coded based on each speaker's pronunciation of the wordlist, and eleven were light verbs coded according to speakers conscious judgements of their own speech.The phonological variables were selected based on the author's experience with the Tibetan language and reflect known variables in the literature such as reported in Tsering (2011); also following Tsering (2011), orthography was used as a proxy for etymological correspondences.The phonological features and their coding are as follows:
! !
In addition to the phonological features, the verbal choice task was based on a list compiled by a native-speaker scholar g.yu sgang 'od zer, titled bya tshig sbyong thabs gsar ma grangs nges kun grok (Dirk Schmidt, personal communication, July 10, 2016).Speakers were asked which sounds more natural to use in ordinary speech.Each item consisted of an object-verb collocation, where the verb could either be a more specific lexical verb or one of several frequent light verbs: brgyab `strike', bzos `make', btang `send', and in one case, shor `slip'.The hypothesis was that younger diaspora speakers would prefer the light verbs, while older and non-diaspora speakers would prefer the lexical verbs:
Analytic tools
The first analytic goal in this project was to determine which speakers group together according to the several variables.In order to do this, NeighborNets were generated using the SplitsTree.As discussed by Bryant and Moulton (2004) and Bryant et al. (2005), NeighborNets, which are primarily used in evolutionary biology, are also used in historical linguistics to develop networks of language families.The NeighborNet algorithm takes a matrix of data coded by taxa (e.g.species, languages) and computes the distances between the taxa.Each taxon is assigned a node, and each node is paired with two other nodes (rather than one, as in other related algorithms).These three nodes are then reduced to two, a process that is repeated to generate the splits that represent groupings of taxa.The distances involved in these splits are represented by branch lengths.Splits and branch lengths are depicted in graphical form, and while these are not evolutionary trees, the similarities constructed can be used to infer historical relationships.
In this study, the Neighbor-Net approach is still valuable as a tool to identify and depict groupings between taxa, in this case, individual speakers.Two types of information can be conveyed in these visualizations: branch lengths depict distance, the degree of difference between speakers, and the pattern of splits creates clusters of speakers with more similar traits.Following the predictions in section 3.1, this tool is predicted to find loose groupings of speakers for each of the three traditional regions as well as a cluster of younger Diaspora speakers, the latter perhaps within the U-Tsang cluster; older Diaspora speakers should appear scattered around the tree, in particular mixed among the U-Tsang speakers.
Another statistical tool borrowed from evolutionary biology, STRUCTURE (Pritchard et all 2000), was also used in an attempt to identify admixture between groups.This algorithm uses a Bayesian Markov Chain Monte Carlo estimation, randomly assigning taxa to a specified number of clusters (K), and continually re-assigns the taxa according to frequency estimates.The result is an estimate of the admixture of traits between populations, in this case, linguistic features.For these STRUCTURE runs, all 19 features were used, with a burning period of 10,000 and 50,000 repetitions, and K values from 2 to 10, with five runs per value of K.
5.1
Overall trends Among the data, several phonological variables immediately stood out as patterning closely with particular regions.Firstly, vowel harmony was only found in four speakers.Of these, three of the four were born and raised in Lhasa, and the fourth, though coded as "other" for having spent time in both Kham and Lhasa, also spent a significant portion of her childhood in the city.Since none of the diaspora speakers exhibited vowel harmony, it can be said that this feature has not been transmitted past the first generation of migrants from Lhasa.
In contrast, a group of Kham and Amdo speakers clearly emerged with reference to other features.As coded, all 7 Kham and 3 Amdo speakers lacked tonal contrasts, while all 7 Kham speakers exhibited voicing in their orthographic <z> and <zh> fricatives; in these cases, no member of other groups had these feature values.Additionally, six of seven Kham speakers and two of three Amdo speakers were the only speakers recorded to produce orthographic bilabial stop + glide clusters as fricatives rather than affricates, as all other speakers did.These contribute to the divergence of Kham and Amdo speakers from the other groups, as shown below.Neighbor-Net analysis of phonological and verb-choice features together failed to produce the expected clusters.Kham and Amdo speakers did indeed cluster together, separate from the U-Tsang and diaspora speakers, but other groups, including the expected younger diaspora group, were not found.This is shown in Figure 1.
!
However, when Neighbor Nets are generated by phonological and verb-choice data separately, it becomes clear that the separation of Kham and Amdo speakers from the rest is primarily on the basis of phonological data (Figure 2), not verb choice (Figure 3).
!
The highly un-tree-like image of Figure 4 suggests that this word-choice data is not a useful predictor of region.Indeed, the predicted simplification as lexical verbs are replaced by light verbs does not appear to be taking place, as speakers of all categories use a mix of both.The phonological data, however, does find a diffuse grouping of U-Tsang and diaspora speakers on the one hand and Kham and Amdo speakers on the other.This shows that the phonology of the diaspora speakers is based on that of the features of U-Tsang varieties, not those of Kham and Amdo.However, the failure to isolate a single cluster of younger diaspora speakers indicates that, with respect to these features, those speakers are not more similar to each other than are U-Tsang and diaspora speakers in general.
Results of STRUCTURE did not provide any additional information.Both admixture and noadmixture models consistently identified one group of Kham and Amdo speakers and another group of U-Tsang and Diaspora speakers, thus replicating the findings of the Neighbor-Net.For K-values above 2, no additional patterns emerged.This program thus failed to find evidence of admixture between the populations.
Phonological variables
Of the phonological variables, several were associated with speakers from Amdo and Kham, and were not present among Diaspora speakers.Contrastive tone was present for all speakers except for those of Amdo and Kham, all of whom lacked it, as did one speaker of uncertain origin.Likewise, orthographic pj-series clusters were realized as affricates except for two of three Amdo speakers and six of seven Kham speakers, who realized them as fricatives.Thirdly, the voicing of z/zh-series fricatives was only found among six of seven Kham speakers and no others.Finally, the kj-series clusters were realized as fricatives only by three Kham speakers, and as affricates by the remaining four Kham speakers, all three Amdo speakers, and one U-Tsang speaker; for all others, including all Diaspora speakers, they were clusters.
Vowel Harmony, a process described as unique to the city of Lhasa by Denwood (1999), Dawson (1980), and others, was predictably only observed among three U-Tsang speakers, all from the Lhasa area, as well as a fourth speaker who had spent time there in childhood (that speakers' data was not counted on account of having grown up in several regions).Like z/zh-voicing, kj-lenition, and pj-fricativization, this appears to be a regional feature that was not adopted by any Diaspora speaker interviewed.
Onset Stop Voicing, while not as clearly regionally-marked, was not particularly instructive.55 out of 68 speakers voiced only <rg>-series orthographic clusters.This included two out of three Amdo speakers, all seven Kham speakers, and nearly all of the Diaspora and U-Tsang speakers as well.One Amdo speaker exhibited voicing for all velar stops, while two younger Diaspora speakers and one U-Tsang speaker did not appear to use voicing contrasts in stops.Of the remaining categorized speakers, one older Diaspora and one U-Tsang speaker exhibited voicing inconsistent with the orthographic series discussed.The differences between older and younger Diaspora speakers was not statistically significant (p = .547).
More variable across groups was Nasal Coda Reduction.While no Amdo and only one Kham speaker were coded as exhibiting reduced nasal codas, the numbers for U-Tsang and Diaspora populations were more mixed.eight out of seventeen U-Tsang speakers were coded with reduction, as were five of eight older Diaspora speakers and four of seventeen younger Diaspora speakers.The ratio of Nasal Coda Reduction among younger and older Diaspora speakers was found to be just above statistical significance (p =.087), though a larger sample size could produce a more robust effect.
Palatalization of post-alveolar fricatives served in part to pick out Amdo and Kham speakers, which produced notably more palatalized forms.All three Amdo and six of seven Kham speakers produced forms that were rated "2" or "3" on a 0-3 scale.In contrast, this was the case for only two of eight older Diaspora speakers and three of seventeen younger Diaspora speakers (this difference was not significant).However, nine of seventeen U-Tsang speakers were also found to have a higher degree of palatalization, indicating the presence of diversity in this feature among some U-Tsang varieties as well as those of Kham and Amdo.
The phonological variables thus pattern differently across the demographic categories.Several, namely Tone, pj-fricativization, kj-lenition, and z/zh-voicing, were present among speakers of particular regions but not among any speakers from the Diaspora or U-Tsang.Similarly, Vowel Harmony was only found among speakers from Lhasa itself.Onset Stop Voicing did not pattern clearly by region, while Nasal Coda Reduction was found among a mix of all U-Tsang and Diaspora speakers.Finally, Palatalization of postalveolar fricatives was rare among Diaspora speakers, but reliably observed among Kham and Amdo speakers as well as a portion of U-Tsang speakers.Taken together, these variables suggest that Diaspora speakers generally pattern with U-Tsang speakers rather than those from Kham and Amdo, but that there is little distinction drawn between younger and older Diaspora speakers.!! 10
Light Verbs
Responses in this task varied; speakers sometimes indicated a simple preference for one or the other, or expressed no preference.Unsurprisingly, speakers often described a difference in register, with the light verb as more informal and the lexical verb as more formal or of written style.In other cases, speakers described a difference in meaning; for instance, eleven speakers said yi ke skur `mail a letter' involves sending by post, while yi ke btang `send a letter' has the letter delivered by a friend or messenger.Six options were coded for most: clear preference for one form or the other, preference expressed with equivocation for one form or the other, register difference, meaning difference, or no difference.For `build a house' and `have fun,' speakers expressed clearer judgements, so the "preference with equivocation" categories were omitted, and for `have fun' another category was added to reflect different judgements with respect to tense.
Due to this diversity of judgements and the generally small sample sizes, however, comparisons between older and younger Diaspora speakers consistently failed to reach statistical significance, even when clear-preference and preference-with-equivocation judgments were grouped together.As with speakers of the other demographic categories, these speakers expressed a variety of judgements about each form, suggesting that the demographic categories chosen are not strong predictors of speakers' judgements.
Further complicating the situation, many speakers offered differences in meaning, register, or style to the two options.This was especially pronounced for `have fun', where skyid po byung might be translated as `enjoy oneself', while skyid po btang involved seeking out an enjoyable activity.Some speakers described a difference in terms of tense, likely building on the additional role of byung (the lexical verb for `to receive') as a "receptive egophoric past tense auxiliary," according to Tournadre and Dorje (2005).
Among the two options for `play the lute', the more-specific light verb dkrol `strum' and the moregeneral light verb btang `send', younger Diaspora speakers were the only category for whom no members chose the more-specific dkrol `strum'.However, this choice was not particularly popular among the other groups either, with only two of seven older Diaspora speakers choosing it; the difference in the choice of dkrol `strum' was nearly but not significant (p =.100).
!
Taken together, these results do show a degree of support for the classical model of Trudgill (1986).The failure of Lhasa vowel harmony to survive transmission in diaspora follows from the simplification and loss of socially and linguistically marked variants.Likewise, the phonological features of diaspora speakers pattern with those of U-Tsang rather than Kham and Amdo, which can also be seen as a reduction of regionally-marked variants such as palatal fricatives and lenited affricates and clusters.
The dominance of U-Tsang features in their influence of the speech of diaspora Tibetans can also be seen as an example of the Founder Principle.As noted in section 3, 70% of the first wave of Tibetan migrants came from U-Tsang, and while more Kham and Amdo Tibetans have arrived since then, the Founder Principle predicts just this kind of outsized effect of U-Tsang Tibetan.
Still, the persistent diversity of the younger diaspora speakers remains a problem for Trudgill's theory.These young people, as the third generation in diaspora, do not appear nearly as focused in their speech, and a new dialect has yet to emerge.How can this be explained?Indeed, while Trudgill and Mufwene (2008) dismiss the role of group identity in new dialect formation, the Tibetan diaspora community, with strong ideologies favoring solidarity promoted by authority figures as well as ordinary people (Roemer 2008, anonymous interviews), should offer nearly as strong a case as any for which such social factors would encourage dialect focusing.Interestingly, while many Tibetans I spoke with stressed the importance of the preservation of their language, the emphasis seemed to be on the Tibetan language as opposed to other languages, not the interactions between dialects.This suggests that, perhaps, the diaspora Tibetan conversations on language ideology have not significantly addressed dialect diversity, thus allowing changes within the language to proceed with less social interference than might be expected.
The absence of a new diaspora Tibetan dialect after nearly sixty years offers a counterexample to Trudgill and his supporters, who propose that new-dialect emergence should be well underway in the third generation, if not before.Instead, these young Tibetans continue to speak with as much variation as their parents' generation, with each speaker using a novel combination of linguistic variants.The ideas of leveling, and, to a lesser extend, simplification proposed by Trudgill (1986) remain useful tools with which to examine linguistic changes in this community, but only time will tell how long focusing will take.
!
Figure 2: Neighbor Net of speakers by phonological and verb-choice data.Numbers reflect speaker codes; letter labels are as follows: U = U-Tsang, K = Kham, A = Amdo, Dy = younger diaspora, Do = older diaspora, ?= other.
GeisslerPhonological
Koinéization in Kathmandu Tibetan ! Figure 3: Neighbor Net of speakers by phonological data only.Numbers reflect speaker codes; letter labels are as follows: U = U-Tsang, K = Kham, A = Amdo, Dy = younger diaspora, Do = older diaspora.! Figure 4: Neighbor Net of speakers by word-choice data only.Numbers reflect speaker codes; letter labels are as follows: U = U-Tsang, K = Kham, A = Amdo, Dy = younger diaspora, Do = older diaspora. | 7,721 | 2018-02-10T00:00:00.000 | [
"Linguistics"
] |
Longitudinal Joint Modelling of Binary and Continuous Outcomes : A Comparison of Bridge and Normal Distributions
Background: Longitudinal joint models consider the variation caused by repeated measurements over time as well as the association between the response variables. In the case of combining binary and continuous response variables using generalized linear mixed models, integrating over a normally distributed random intercept in the binary logistic regression sub-model does not yield a closed form. In this paper, we assessed the impact of assuming a Bridge distribution for the random intercept in the binary logistic regression submodel and compared the results to that of a normal distribution. Method: The response variables are combined through correlated random intercepts. The random intercept in the continuous outcome submodel follows a normal distribution. The random intercept in the binary outcome submodel follows a normal or Bridge distribution. The estimations were carried out using a likelihood-based approach in direct and conditional joint modelling approaches. To illustrate the performance of the models, a simulation study was conducted. Results: Based on the simulation results and regardless of the joint modelling approach, the models with a Bridge distribution for the random intercept of the binary outcome resulted in a slightly more accurate estimation and better performance. Conclusion: Our study revealed that even if the random intercept of binary logistic regression is normally distributed, assuming a Bridge distribution in the model leads to in more accurate results.
INTRODUCTION
Multivariate response variables are widely recorded longitudinally in many medical areas.Longitudinal joint models assess the effect of covariates on two or more correlated responses while it considers the association between the various response variables as well.Generalized linear mixed-effects models (GLMM) are probably the most widely used methods for analyzing longitudinal data.These models are composed of generalized linear models (GLM) and mixed effect regression models (MRM).A variety of responses (such as continuous, binary, count and etc.) can be analyzed using the GLM model.The variation caused by repeated measurements is taken into account by the MRM component [1].
Joint modelling response variables is more complex compared to that of univariate.The selection of joint modellingapproach depends on the nature of the outcomes.The most common responses in many medical studies are continuous and binary.To jointly model such responses, there are two main approaches.The first one, which was proposed by Tate [2] and utilized by many others [3][4][5][6][7], is based on the product of marginal model for one of the response variables and a conditional model for the other outcome (conditioned on the former outcome) The second approach builds a joint model for the two response variables directly [8,9].Hereafter, the first and second approaches are expressed as conditional and direct approaches in this article.
Catalano [10] and Fitzmaurice [11] used the marginal generalized linear model to combine continuous and discrete responses.A covariance pattern model with a special correlation coefficient for each outcome was used to allow the variation caused by the repeated measurements over time.The model was then extended by Catalano and Ryan.Regan and Catalano [12] proposed a widespread survey of longitudinal joint modelling of continuous and discrete outcomes such as bivariate GLMMs [13].The random effects in GLMMs usually follow normal distributions with a zero mean.In joint modelling of longitudinal responses, the random effects of several sub-models follow a multivariate normal distribution with a variance-covariance matrix which accounts for the between-response associations.
Wang and Louis showed that, assuming a Bridge distribution for the random intercept in a logistic mixed model forces the fixed effects to have the same odds ratio interpretation in marginal (i.e., integrated over the random intercepts) and conditional forms (conditional on the random intercepts) [14].However, assuming distributions other than normal for random trends can result in complexities complexities [14].This idea was then applied by Lin et al. for evaluating the association between binary and continuous clustered data [15].
The current study does not compare the direct and conditional approaches, but aims to find if considering a Bridge distribution for the random intercept of binary outcome can benefit the performance of the direct and conditional joint modelling approaches.We restrict our study to random intercepts models only.A simulation study was conducted to assess the accuracy of the estimations.The models were also applied to a real dataset from a clinical trial investigating the effect of coriander fruit syrup on the duration (as continuous response) and the severity (as binary response) of migraine attacks.
Migraine are described as a chronic and debilitating neurological disorder.They result in adverse consequences for patients and society and causes lots of adverse consequences for the patients and society [16].The World Health Organization recommends the use of traditional medicine in unresolved diseases such as migraines [17].Coriander fruit is a commonly used in alternative medical treatments.It is believed to heal headaches, anxiety and depression and to potentially affect the frequency, duration and severity of migraine attacks [18,19].This fruit is one of the most commonly prescribed herbs in Persian medicine [20].According to the strong association between the characteristics of migraine attacks such as severity and duration [18], the longitudinal joint models were fitted.
Let
and represent the continuous and binary responses respectively, for a subject i at the occasion j.The binary response can take the values 0 and 1 while the continuous can take all the values between -∞ and +∞.The two associated response variables follow a general form as follows: ( In this formula, is the expected outcome, "h" is a proper link function according to the type of response variable (e.g.identity for continuous response) and the expected response is assumed to differ from the systematic component ( ) by a subject-specific effect ( ).
Associations and distributions
Bridge and normal distributions are assumed for the random intercept in the binary outcome sub-model with a logit link function.A normally distributed random intercept for the continuous outcome sub-model is postulated.The random intercepts follow a bivariate normal distribution while a copula approach is used in the case of different distributions.A correlation parameter ρ takes Longitudinal Joint Modelling the association between the random intercepts and hence the response variables into account.The bivariate responses, a continuous and a binary, are assumed to follow normal and binomial distributions respectively.The continuous response variable is linked to a linear function of covariates and a normally distributed random intercept (mean zero and variance ) via an identity function.The binary response is also linked to the covariates through a logit link, assuming a Bridge or a normal distribution for the random intercept (mean zero and variance ).It was mentioned before that the Bridge distribution proposed by Wang and Louis [14] allows both the conditional and marginal probabilities of the binary response to follow a logistic structure.
Model specification and Likelihood
The two associated response variables can be predicted by different covariates (not necessarily the same covariates).To combine the associated response variables, direct and conditional approaches can be applied based on the nature of the association between the responses.The conditional approach is appropriate when the specification of a joint distribution can be factorized by a product of a marginal and a conditional density.This approach reduces the modelling tasks of separate specification of models.The conditional approach requires a reliable type of association between the response variables such that one of the variables plays the role of a time-varying covariate for the other one.In addition to some complexities for marginalizing one response by integrating over the conditional density, problems such as the asymmetric behavior of the responses lead to more difficulties in modelling [9].
The model and the likelihood function for the direct approach can be specified as follow: ( (3) The model and the likelihood function for the conditional approach can be written as follow: (4) (5) Regarding that and are continuous and binary variables respectively, and are forms of normal and binomial density functions.In the current study, two different assumptions about the distribution of the binary outcome random intercepts has been compared.Thus, is a multivariate normal or a normal copula function of the random intercepts.In other words, in the case of different distributional assumptions for the random intercepts, a normal copula distribution was used to combine the random intercepts.
In the conditional approach, in addition to the correlation of response variables for the same subject, the association at the same time was induced.In other words, a dependence parameter ( ) performs the association at the same time point by conditioning one response on the residual of the latter (equations 4, 5) [15].The maximum likelihood estimation was carried out by taking the expectation with respect to the joint density of the random intercepts.Non-adaptive Gaussian quadrature techniques were utilized to perform the integrals and Newton-Raphson technique was implemented for the optimization.
Bridge distribution
Let G(.) be an inverse link function with the characteristics of being monotone, increasing and twice differentiable and also let be the Bridge distribution for the subject specific random effect u.This distribution carries the feature that the marginal and conditional link functions have the same form as (6) where and c are the attenuation and unknown constant parameters, respectively.(6) After differentiating and applying the Fourier transformation of (6), one can determine the density function of Bridge distribution as (7) where F is the Fourier transformation as (8) and .
The density and cumulative distribution function of the random effect (u) can be derived as in ( 9) and ( 10) respectively.Finally, after integrating the conditional binary logistic model based on the random effect, one can observe that the logit interpretation can be satisfied carrying an additional parameter ( ).The mean and variance of the Bridge distribution are zero and , respectively.The intraclass correlation (ICC) can be determined by 1-.
(9) (10) The variance matrix of two response variables for the ith subject at the jth occasion can be derived using following matrixes: (11) where is the diagonal overdispersion matrix, is the diagonal variance matrix of response variables assuming zero random effects, and (here an identity matrix) is the matrix denoting the correlation between residual errors.Moreover, let . ( , here = 0. To compare the proposed models, we used the Akaike Information criterion (AIC).
Computational Support
The R software version 3.3.1 packages such as "copula", "bridgedist", and "MASS" as well as the SAS program version 9.2 "nlmixed" procedure were utilized to simulate and assess the data preparation and to the proposed joint models.The SAS codes are available in the Appendix.
Simulation study
A simulation study was conducted to assess the impact of a Bridge random intercept on the estimations.To do this, following settings were considered.At each step, a continuous variable (time) in 10 different occasions and a binary variable (group) were generated from uniform and binomial distributions respectively.The continuous response variable was generated from a normal distribution with the mean equal to its systematic component.The binary response was generated from a binomial distribution using the probabilities associated with logit link function.The two random intercepts were generated from a bivariate normal distribution.The correlations between the random intercepts were chosen as zero, 0.4 and 0.8.The model was fitted on three different sample sizes 50, 200 and 500.Two different approaches of joint modeling (direct and conditional) were utilized.The 18 scenarios were modeled using two different assumptions for the distribution of the binary logistic regression random intercept (normal, Bridge).The models were specified as follow: The direct approach: The conditional approach: The true values:
Migraine Data
We analyzed data of a prospective, two-arm, randomized, triple-blind, placebo-controlled trial in the neurology clinic of Shohadaye-Tajrish hospital, Tehran, Iran [18].The patients were randomly divided into two equal groups, a control group and a group that received the treatment.In addition to 500 mg of sodium valproate per day, the patients received either 15 mL of coriander fruit syrup or 15 mL of placebo syrup, three times a day, for a month.This distribution was organized according to the code provided by the department of traditional pharmacy in the Tehran University of Medical Sciences, Tehran, Iran.The subjects were followed at weeks 1, 2, 3 and 4. The mean severity of pain was evaluated, by a ten-point visual analog scale (VAS).Moreover, the patients were requested to write down the duration (hour) of their migraine attacks.At the end of each week, patients were referred to the neurology clinic to report the requested items.Severity was categorized into two levels (0-0.40 and 0.41-1) as the binary response [21].Moreover, the duration of migraine attacks was assumed as the continuous response.
A Real Data Example (Migraine Data)
Descriptive statistics of continuous and categorical characteristics of the two groups are described in details elsewhere [18].Table 1 exposes the distribution of migraine attacks severity and duration during the 4 weeks in the intervention and control groups.The two response variables were strongly associated within different points of time.
Longitudinal Joint Modelling
The results of the models are shown in Table 2.Although the results were almost the same for the four performed models, the lowest AIC was seen in models with the assumption of a Bridge distribution for the random intercept of severity in both the direct and conditional approaches.Significant variances of the random intercepts for the duration and the severity of migraine attacks showed a high level of heterogeneity among patients at the baseline.The duration and severity of migraine attacks decreased significantly during the intervention over the time.According to the results from the direct approach with the assumption of a Bridge distribution, the intervention of coriander fruit syrup decreased the duration of migraine attacks.The slope of decrease in migraine attack duration was 2.92 more than in the control group.In contrast to the baseline, the duration of the attacks reduces significantly for the intervention group over the time.One week longer intervention of coriander fruit syrup was associated with a 0.93% reduction of severe migraine attacks as compared to the placebo (OR=exp (-2.63) =0.07).Regarding the application of a Bridge distribution for the random intercept of the binary outcome, the same interpretation of the odds ratios is possible for both of population average and subject-specific frameworks.As well as the direct approach, almost the same results was found the conditional joint modelling approach.
Simulation results
Tables 3 to 5 show the simulation results.Using the absolute value of biases (AVB=|E ( )|), one can find that the estimated values are almost close to the true values.Comparing the mean AICs as well as the AVB in both of the direct and conditional approaches, the models with a Bridge distribution for the random intercept of the binary outcome resulted in better performances.The larger the sample size, the better the estimations.Based on the lowest AIC, this simulation study showed that regardless of the amount of association between the random intercepts as well as the sample size, assuming a Bridge distribution benefits the models and makes the same population average and subject-specific interpretations possible in terms of odds ratios.Longitudinal Joint Modelling researchers have well discussed the conditional models as a major approach toward joint methods.The second approach combines the responses directly and was extended by Catalano [10] and Molenberghs et al. [23].In addition to GLMMs, the probit-normal approach and Placket-Dale model have been proposed in the literature as well.The extension of Placket-Dale model to other mixed responses is straightforward and it is well described by Faes et al. [22].In contrast to the direct model, the conditional approach adds a parameter to the likelihood function, assessing the direct association between the binary and continuous outcomes at the same time.However, using either of the joint modelling approaches needs an almost full understanding of the association between the response variables.
The generalizability of GLMM makes the extensions possible to other settings of combined discrete and continuous outcomes.According to the special characteristics of GLMMs, more complex models have been presented for dealing with special aspects of problems.For example, a logit link function is frequently used in binary logistic regression according to its ease of interpretation.However, integrating over a normally distributed random intercept does not result in a closed form [14,15].To make similar subject specific and population average interpretations in terms of odds ratios, Bridge distribution was introduced by Wang and Louis [14].
Previous studies have shown that regression effects in the random intercept logistic models are estimated almost the same for different distributional assumptions for the random effect [24,25].Our simulation study assessed the impact of assuming a Bridge distribution in direct and conditional joint modelling approaches on the performance of the models and the accuracy of estimations.Correlated random intercepts were used to combine the response variables.The random intercepts were generated from a bivariate normal distribution.In the models, we assumed that the random intercept of the binary logistic regression follows a Bridge and a normal distribution.Based on the results of the simulation study, it was shown that the models in which a Bridge distribution is considered performs better than that of a normal distribution.Although the accuracy of the estimations was the same for both of the assumptions, those with the Bridge assumed random intercepts had a smaller absolute value of biases.
In the current study, we used coriander fruit syrup data in which the duration and pain severity of the attacks were combined and assessed among migraine patients.We showed that the intervention significantly reduces the adverse outcomes of migraine.It has been demonstrated that Linalool is the main component of coriander [26,27].The results of univariate analysis have shown that the duration and frequency of migraine attacks as well as pain degree decrease over the time with use of coriander [18].
CONCLUSION
Assuming a bridge distribution for the random intercept of binary outcome provides the same interpretation of parameter estimates in both cases of integrating and not integrating over the random effects.In addition, our study revealed that even if the random intercept of binary logistic regression followed a normal distribution, assuming a Bridge distribution for this random effect in the model leads to slightly more accurate results.This result was observed in both of direct and conditional joint modelling approaches.
TABLE 1 .
Mean (SD) and frequency (percentage) of severity (Continuous response) and Pain (Binary Response) along with 4 time points
TABLE 2 .
The results of Direct and MC approaches
TABLE 3 .
Simulation study with zero correlation between the random intercepts *sample size;**the absolute value of biases; ***Mean square error
TABLE 4 .
Simulation study with 0.4 correlation between the random intercepts *sample size;**the absolute value of biases; ***Mean square error | 4,242 | 2018-03-20T00:00:00.000 | [
"Mathematics"
] |
New record and new species of Laubierpholoe Pettibone, 1992 (Annelida, Sigalionidae) from the soft bottom of submarine caves near Marseille (Mediterranean Sea) with discussion on phylogeny and ecology of the genus
. A new species of Laubierpholoe Pettibone, 1992 (Annelida, Sigalionidae), Laubierpholoe massiliana Zhadan sp. nov., was found in two submarine caves near Marseille (France). This is the fi rst record of the genus in the Mediterranean Sea. The new species di ff ers from congeners by inhabiting soft sediments instead of having an interstitial lifestyle and by several morphological characters: the ventral tentacular cirri slightly shorter or of similar length to the dorsal tentacular cirri, the presence of bidentate neurochaetae, the body length, and the number of segments. Molecular phylogenetic analysis using 18S rRNA and 28S rRNA sequences con fi rmed that the new species belongs to the genus Laubierpholoe , as well as the monophyly of the genus. The ecology of the new species and its adaptation to the cave-dwelling lifestyle are discussed. An identi fi cation key for all known species of Laubierpholoe is provided.
Introduction
Pholoinae Kinberg, 1858 is a group of scale-worms which are small (up to 2 cm long, up to 90 segments) and common in all kinds of marine intertidal and subtidal habitats. They have been treated as a separate family or as a part of Sigalionidae Kinberg, 1856(Barnich & Fiege 2003Wiklund et al. 2005;Norlinder et al. 2012). According to recent phylogenetic analysis of Aphroditiformia Levinsen, 1883 using four molecular markers and 87 morphological characters, Pholoinae is nested within Sigalionidae and considered to be a subfamily (Gonzalez et al. 2018a). The diagnostic characters of Pholoinae are: body typically elongate and narrow with as few as 15 segments; notochaetae geniculate and fi nely tapered; simple neurochaetae absent; neuropodial stylodes possibly present; elytral brooding present in some genera (Gonzalez et al. 2018a).
Within Pholoinae, the genus Laubierpholoe Pettibone, 1992 includes fi ve meiobenthic species. The fi rst species of the genus was described as Pholoe antipoda (Hartman, 1967) from Tierra del Fuego (South Atlantic Ocean) (Fig. 1A). Its main diff erences from other species of Pholoe Johnston, 1839 were a much smaller size, a reduced number of segments and details of the elytra and neuropodial falcigers. The second species, Pholoe swedmarki (Laubier, 1975) was later described from Bermuda (Northeast Atlantic Ocean) (Fig. 1A) from 2-8 m depth. The author was unaware of the previous description of P. antipoda and compared P. swedmarki with other species of Pholoe. Laubier (1975) noticed its much smaller size (maximum length 1.6 mm, width 400 μm), smaller number of notochaetae, elongated palps but reduced ventral cirri, and smooth elytra with few papillae. He also described developing embryos inside the elytra, postulated internal fertilization and considered these characters as adaptations to interstitial life style (Laubier 1975). Pettibone (1992) performed the revision of Pholoidae Kinberg, 1858 and provided a diagnosis of this family including diagnoses of all genera and species as well as identifi cation keys to all taxa. She erected the new genus Laubierpholoe for the two above-mentioned previously described species and for two new species: L. maryae Pettibone, 1992 and L. riseri Pettibone, 1992, both from New Zealand (Fig. 1A). The new genus was distinguished by the following main characters: achaetous tentaculophores lateral to prostomium, each with a long dorsal and a very short ventral tentacular cirrus; stout, very long palps; notochaetae of a single type, slightly curved or straight; and up to 29 segments.
The last species of Laubierpholoe, L. indooceanica Westheide, 2001, was described from coral reef fl ats of Krusadai Island and the Seychelles (Indian Ocean) (Fig. 1A). It is the smallest (less than 1 mm long) of all species of the genus and diff ers by the presence of hook-like blades in some of the compound neurochaetae. Phylogenetic analysis of Aphroditiformia confi rmed the position of Laubierpholoe within Pholoinae and the clade was well supported by molecular data and by the apomorphy presence of elytral brooding (Gonzalez et al. 2018a). The other interstitial pholoin genera Imajimapholoe Pettibone, 1992 and Taylorpholoe Pettibone, 1992 also have elytral brooding and direct development. In contrast, the interstitial Metaxypsamma uebelackerae Wolf, 1986 lacks elytra. No molecular data are available for these genera at the moment.
Potential reservoirs of such easily overlooked meiobenthic taxa exist very close to the shore. In the Mediterranean, one of the best-studied areas in the world, underwater marine caves have been shown to be hotspots of unknown diversity because of their more diffi cult access and peculiar environmental conditions. Caves are numerous near Marseille (SE France) thanks to the karstic nature of the seashore (Harmelin et al. 1985). Darkness and poor water circulation create oligotrophic environmental conditions similar to the deep sea and it is remarkable that many organisms found in marine caves belong to deep water taxa (e.g., Calado et al. 2004;Janssen et al. 2013;Chevaldonné et al. 2015;Cárdenas et al. 2018;Chevaldonné & Pretus 2021). In addition, some caves with particular geomorphologies provide a cold thermal regime (mean ca 13-15°C) similar to that of the Mediterranean deep sea (Vacelet et al. 1994;Bakran-Petricioli et al. 2007). Only a few studies have focused on cave sediment fauna, often with an emphasis on targeted taxonomic groups (e.g., Janssen et al. 2013;Zeppilli et al. 2018). In the 3PP and Jarre Caves in the Calanques National Park near Marseille (Fig. 1B-С), it has been shown that sea water remained colder than the outside environment or other caves throughout the year (Bakran-Petricioli et al. 2007). For instance, though it is only 25-30 m deep, the 3PP Cave has been shown to display meiofaunal organisms usually found at abyssal sites (Janssen et al. 2013). These are also large caves with extended areas of silty bottom favourable for the study of cave sediment fauna. Therefore, they have been the focus of recent investigations of macro-and meiofauna, especially of presently poorly-studied groups (Vortsepneva et al. 2021). A diverse annelid fauna was discovered there including a new species of Laubierpholoe described here with discussion of its taxonomy, phylogeny and the ecology of the group.
Sampling and preservation
All material was sampled in the Calanques National Park near Marseille ( Fig. 1B-С) in 2018-2020. Samples were taken close to the entrance, in the middle (40 m from entrance) and the deep (60-70 m from entrance) parts of Jarre and 3PP Caves at 18-25 m depth. The coordinates of the entrance of Jarre Cave are 43.19556° N, 5.3658333° E and 43.16306° N, 5.6° E for 3PP Cave. Bottom sediment was collected by SCUBA diving with 20 cm-wide sampling boxes at about one cm sediment depth and a length of 120 cm. Samples were washed using the fl oatation method and a sieve with mesh size 130 μm. In total, about 50 specimens of Laubierpholoe were collected. They were relaxed using a magnesium chloride (MgCl 2 ) solution isotonic to seawater for 30 min and preserved in 96% ethanol or in 2.5% glutaraldehyde (Electron Microscopy Supplies, EMS, Pennsylvania, USA) buff ered with 0.1 mol phosphate buff er and transferred to 70% ethanol after washing with the same buff er.
Granulometric analysis
Subsamples of sediment were taken in 3PP and Jarre Caves in the deep, middle and near entrance parts and granulometric analysis were performed in Limited Liability Company "Laboratory" (Sankt-Petersburg, Russia).
Morphological analysis
Whole specimens were photographed via stereo microscope and compound microscope using an iPhone 6 with a Labcam (iDu Optics, Detroit, Michigan, USA) adapter for living specimens and a Leica microscope (Leica Microsystems GmbH, Germany) with a Leica adapter for preserved ones. For SEM, specimens were dehydrated in a graded ethanol series (20-25 min per step), then transferred to acetone and critical point dried. Whole specimens and their fragments were coated with platinum-palladium and examined using a Camscan S-2 (Cambridge Instruments, London, United Kingdom) scanning electron microscope at the Laboratory of Electron Microscopy at the Biological Faculty of Moscow University.
DNA amplifi cation and sequencing
The Promega Wizard SV Genomic DNA Purifi cation Kit and protocol (Promega Corporation, Madison, USA) were used for tissue lysis and DNA purifi cation. Polymerase chain reaction (PCR) amplifi cation of nuclear 18S rRNA and fragments of 28S rRNA was accomplished with the standard primers. The 18S rRNA gene was PCR amplifi ed in three overlapping fragments using primer pairs 1 F-5R, 3 F-18Sbi and 18Sa2.0-9 R (Giribet et al. 1996(Giribet et al. , 1999. For the 28S rRNA gene we used C1' and C2 primers (Le et al. 1993). The universal primers 16Sar-L and 16Sbr-H (Palumbi & Kessing 1991) did not yield products that could be sequenced, therefore for the 16S rRNA gene we tried to use the primer pair 16SAnnF-16-SAnnR (Sjölin et al. 2005), unfortunately with the same result. We were also unable to amplify the CO1 gene fragment (Folmer fragment) using the primers jgLCO1490 and jgHCO2198 (Geller et al. 2013). All loci were amplifi ed using the Encyclo PCR kit (Evrogen JointStock Company, Russia). We amplifi ed a 25 μl reaction mix containing 1x PCR buff er, 1 μl of 10 μM of primer pair mix, 1 μl of template, 0.2 mM of each dNTP and 0.5 units Taq polymerase. Reaction mixtures were heated on Veriti® Thermal Cycler to 95°C for 300 s, followed by 35 cycles of 15 s at 95°C, 20 s at a specifi c annealing temperature, and 45-60 s at 72°C, depending on the length of fragment, and then a fi nal extension of 7 min at 72°C. Annealing temperature was set to 49°C for the 18S primer pairs 1 F-5R and 18Sa2.0-9 R, 52°C for the 18S primer pair 3 F-18Sbi and for the 28S primer pair C1′-C2 (Rousset et al. 2007). We used the Promega PCR Purifi cation Kit and protocol (Promega) to purify our amplifi cation products which were sequenced in both directions. Each sequencing reaction mixture included 1 μl of BigDye (Applied Biosystems, PerkinElmer Corporation, Foster City, CA), 1 μl of 1 μM primer and 1 μl of DNA template and was processed for 40 cycles of 96°C (15 s), 50°C (30 s) and 60°C (4 min). Samples were purifi ed prior to sequencing by ethanol precipitation to remove unincorporated primers and dyes. Products were re-suspended in 12 μl formamide and electrophoresed in an ABI Prism 3500 sequencer (Applied Biosystems). All new DNA sequences have been submitted to the NCBI GenBank repository.
Data analysis
Nucleotide sequences were edited using the software CodonCode Aligner ver. 5.0.2 (CodonCode Corporation) and checked for identity against the nuclear redundant (default) database of GenBank using BLASTn (Altschul et al. 1990). Data analysis included 8 sequences from this study (four specimens of Laubierpholoe massiliana Zhadan sp. nov., 18S rRNA and 28S rRNA) and sequences of 18S rRNA and 28S rRNA obtained from GenBank for species of Sigalionidae including Pelogeniinae Chamberlin, 1919, Pholoinae, Pisioninae Ehlers, 1901, and Sigalioninae Kinberg, 1856. Harmothoe impar (Johnston, 1839), species of Polynoidae Kinberg, 1856, was included as an outgroup species. GenBank accession numbers for sequences used in the present study are provided in Table 1. The sequences were aligned with the MAFFT multiple alignment tool (Katoh & Standley 2013) with default parameters: MAFFT fl avour: auto; gap extension penalty: 0.123; gap opening penalty: 1.53; direction of nucleotide sequences: do not adjust direction; matrix selection: no matrix. Then sequences were curated with Gblocks (Castresana 2000) with default parameters: minimum number of sequences for a conserved position (b1): 50% of the number of sequences +1; minimum number of sequences for a fl ank position (b2): 85% of the number of sequences; maximum number of contiguous non-conserved positions (b3): 8; minimum length of a block (b4): 10; allowed gap positions (b5): none. The fi nal lengths for individual alignments were 1407 and 675 bp for the 18S and 28S respectively. The 18S rDNA and 28S rDNA alignments were concatenated using MEGA X software (Kumar et al. 2018). We performed phylogenetic reconstruction for concatenated 18S+28S alignment using Bayesian inference (BI) and maximum likelihood (ML) analyses. For BI the best fi t model selection was performed using MEGA X software the GTR+G+I model was selected. The BI analysis was run in MrBayes ver. 3.2.7 (Huelsenbeck & Ronquist 2001). We made one run, four chains were ran simultaneously, three heated and one cold. The number of generations was set to 1 000 000. Chains were sampled every 500 th generation and 0.25 of the samples was discarded as burnin. The ML analysis was performed in PhyML (Guindon et al. 2010) with SMS (Smart Model Selection) (Lefort et al. 2017); statistical criterion to select the model was AIC (Akaike information criterion), selected model was GTR+G+I; tree topologies were searched using SPR (Subtree pruning and regrafting); node support was assessed with 200 bootstrap replicates. All analyses were performed using NGPhylogeny.fr server (Lemoine et al. 2019). Phylogenetic trees were visualized and processed in ITOL v5 (Letunic & Bork 2021). The trees were rooted using Harmothoe impar as an outgroup. Posterior probability (PP) and bootstrap support (B) were used for nodal support in BI and ML analyses respectively. Genus Laubierpholoe Pettibone, 1992 Type species
Diagnosis (after Pettibone 1992), emended (changes in bold)
Body small, linear, with relatively few segments (up to 29). Elytra and elytrophores on segments 2, 4, 5, 7, continuing on alternate segments to 23, then on every segment to end of body. Dorsal tubercles on segments lacking elytra. Elytra delicate, with few short papillae on lateral border and on surface. Without dorsal cirri or branchiae. Prostomium and fi rst or tentacular segment fused, ventrally forming anterior lip of mouth, without facial tubercle, with or without papillae. Prostomium rounded, bilobed; median antenna with ceratophore in anterior notch of prostomium; lateral antennae absent; with or without 2 pairs of eyes. Tentaculophores lateral to prostomium, achaetous, each with long dorsal and much shorter ventral tentacular cirrus or tentacular cirri of about same length; palps stout, very long, emerging ventral and lateral to tentaculophores, rugose. Second or buccal segment with fi rst pair of large elytrophores and elytra, biramous parapodia, long ventral buccal cirri, and forming lateral and posterior lips of mouth. Muscular pharynx with 9 dorsal and 9 ventral border papillae and 2 pairs of jaws. Parapodia biramous; notopodial conical acicular lobe without subdistal bract; neuropodial conical acicular lobe without distal papillae. Notochaetae simple, slender, capillary, slightly curved and straight. Neurochaetae stouter than notochaetae, compound, falcigerous or spinigerous; shafts with or without distal spinules; blades capillary or falcate with unidentate or uni-and bidentate tips. Ventral cirri short, tapering, on all segments. Pygidium with pair of anal cirri. Development characterised by reduction of egg number and development of embryos and juveniles within elytra (elytral brooding).
Etymology
The species name refers to the type locality (Massilia -the old Roman name for Marseille).
Distribution
The Calanques, near Marseille, Jarre and 3PP marine caves.
Ecology
Inhabits the upper layer of soft sediments in the middle and deep parts of marine caves at a depth of 19-25 m. The sediment type in Jarre Cave was defi ned as silty sand in the deep part and sandy silt in the middle part, and in 3PP Cave as clayey silt in both deep and middle parts (Table 2).
Emendation of generic diagnosis of Laubierpholoe and its conseque nces
A very short ventral tentacular cirrus has been considered a diagnostic character for Laubierpholoe, in contrast to Pholoe, Taylorpholoe and Metaxypsamma Wolf, 1986 where tentacular cirri are subequal (Pettibone 1992). Since ventral tentacular cirri in L. massiliana sp. nov. are as long as or slightly shorter than the dorsal ones, this character lost its importance, leading to a modifi cation of the generic diagnosis to: "Tentaculophores … each with long dorsal and much shorter ventral tentacular cirrus or tentacular cirri of about same length". This emendation leads to a problem in distinguishing genera of Pholoinae although other characters remain useful. Laubierpholoe and Pholoe can be discerned by the number of segments (up to 90 in Pholoe, up to 29 in Laubierpholoe), the elytral brooding of embryos in Laubierpholoe and the presence of two types of notochaetae in Pholoe (shorter, strongly bent (i.e., geniculate), and longer, slightly curved or straight) whereas all notochaetae in species of Laubierpholoe are similar, slightly curved or straight. The genus Laubierpholoe has common characters with other interstitial pholoin genera such as Taylorpholoe and Imajimapholoe. They also have a small size, elytral brooding and a single type of notochaetae. Laubierpholoe is distinguished by a medial antenna inserted in the anterior notch while in Taylorpholoe and Imajimapholoe the median antenna is situated occipitally at the posterior border of the prostomium. Another interstitial pholoin genus, Metaxypsamma, is easily recognized by its uniramous parapodia and rudimental elytra, transformed to fi liform papillae. The other character of L. massiliana sp. nov. leading to an emendation of the generic diagnosis is the presence of bidentate neurochaetae. It is unique for Pholoinae and discussed below. The comparison of Pholoinae genera is summarized in Table 3. (Table 4) Laubierpholoe massiliana sp. nov. has the typical characters of the genus: small size, few segments and elytra, median antenna in anterior notch of prostomium, one type of notochaetae, and elytral embryo brooding. Laubierpholoe massiliana sp. nov. is up to 1.2 mm long, with 17-19 segments, larger than L. indooceanica, which is shorter than 1 mm with 13-15 segments, but smaller than other species which can reach up to 1.6 mm with 26 segments (L. swedmarki) and up to 3 mm and 26-29 segments in L. antipoda, L. riseri and L. maryae.
Comparison with other species of Laubierpholoe
According to Gonzalez et al. (2018a), the genus Laubierpholoe has prostomial "cephalic" peaks. These structures, similar to those in polynoids, are illustrated by Pettibone (1992) and Westheide (2001). There the prostomium looks bilobed with the anterior lobes forming acute horns or peaks. Conversely, in L. swedmarki the prostomium is described as trapezoidal, with two lateral horns situated more ventrally. These horns, projected forward, are below the prostomium in fi g.1 (Laubier 1975). The shape of the prostomium was used to distinguish L. swedmarki from other species of Laubierpholoe: "bilobed prostomium rounded anteriorly, with small laterally projecting horns" versus "bilobed prostomium with anterior lobes projecting forward" (Pettibone 1992). SEM investigation showed that in L. massiliana sp. nov. the prostomium has an anterior depression where the medial antenna rises, but its anterior lobes are rounded, with notches above the tentaculophores. Two conical horns are situated more ventrally on the level of the ventral tentacular cirri, most probably originating from the tentacular segment. Therefore, they are not homologous with polynoid cephalic peaks. Other species of Laubierpholoe should be reinvestigated to establish the shape of their prostomium and position anterior of horns.
Unlike other species of Laubierpholoe, in L. massiliana sp. nov. ventral tentacular cirri are as long as the dorsal ones or slightly shorter. This led us to modify the generic diagnosis (discussed above) and allows to distinguish L. massiliana sp. nov. from its congeners.
One of the distinguishing characters for the species of Laubierpholoe is the number of notochaetae. It varies from 2-4 in L. swedmarki to 4-8 in L. indooceanica to numerous in L. antipoda, L. riseri and L. maryae; L. massiliana sp. nov. has a few (3-6) notochaetae, which is slightly more than in L. swedmarki but less than in other species. They can be straight or slightly curved which is typical for Laubierpholoe.
Laubierpholoe massiliana sp. nov. diff ers from all congeners by the shape of its neurochaeta. They are unusual not just for the genus Laubierpholoe but for the whole Pholoinae subfamily, as the blades of at least some of the neurochaetae are long and bidentate. Bidentate, sometimes deeply split neurochaetal blades are present in other sigalionids, e.g., Pelogeniinae subfamily, genera Sthenelais Kinberg, 1856, Fimbriosthenelais Pettibone, 1971, Willeysthenelais Pettibone, 1971, Euthalenessa Darboux, 1899(Pettibone 1970, 1971, 1997 but have not been previously described in Pholoinae. Pettibone (1992) diagnosed the former family 'Pholoidae' as having "blades short, falcate, and unidentate". Laubierpholoe indooceanica, described after this revision, has several types of neurochaetal blades, some comparatively long and hook-like (Westheide 2001: fi g. 2J-K, N). Bidentate tips in L. massiliana sp. nov. are seen only with high magnifi cation or with SEM and might have been overlooked in other species. We changed the diagnosis of the genus in that point to: "…blades unidentate or uni-and bidentate".
Ecology
All previously described species of Laubierpholoe inhabit interstitial biotopes like coarse sand, gravel, broken shells, or carpets of benthic diatoms (Table 4). They share adaptations to interstitial lifestyle like small size, very long palps, reduction of head appendages, small number of notochaetae, smooth elytra with few papillae, and intra-elytral brooding of embryos. Laubierpholoe massiliana sp. nov. is unique within the genus for living in soft sediment habitats, such as silty sand and clayey silt (Tables 2, 4). It is otherwise very similar to other species in size, morphology, and reproduction biology. Since no adaptations to digging in soft sediment were found, we suppose that L. massiliana sp. nov. moves on or just below the sediment surface. This should be confi rmed by direct observation of living worms. We have not found L. massiliana sp. nov. in samples collected outside the caves in diff erent sediments.
Adaptation to cave dwelling
Polynoidae inhabiting marine caves have long sensory parapodial cirri and no eyes or pigmentation, unlike their non-cave dwelling relatives (eyes were plausibly lost in correlation with specialization and colonization of deep-sea habitats) (Gonzalez et al. 2018b;Capa et al. 2022). Among Sigalionidae, only the genus Laubierpholoe includes cave dwelling representatives.
Laubierpholoe sp. is reported from several anchialine caves in the Canary Islands. This undescribed species was found in the Corona lava tube (Lanzarote Island) in a carpet of benthic diatoms and in the sandy bottom of the Tenerife littoral Martínez García et al. 2009). Riera et al. (2018) recorded the same species also from Los Cerebros Cave (Tenerife Island) in sand or gravel sediments. One specimen of this undescribed species from the Tenerife cave was used for phylogenetic analysis and represented as Laubierpholoe sp. C. (Gonzalez et al. 2018a). This species lacks eyes, but has short parapodial cirri; pigmentation information is absent.
Another species of Laubierpholoe lacking eyes is L. maryae, collected on coarse sediment in the intertidal and shallow subtidal zone of New Zealand (Pettibone 1992). The reason for this absence is not clear as interstitial lifestyle alone does not lead to eye loss in other species of Pholoinae.
Laubierpholoe massiliana sp. nov. has developed eyes and short parapodial cirri. Its elytra are transparent and lack pigment and its body is whitish and semitransparent. Living specimens have very long anal cirri (longer than palps, half of the body length), easily lost during sample treatment and preservation. They may play the same role as elongated parapodial cirri in cave polynoids. This hypothesis needs confi rmation through comparison with species of Laubierpholoe of other marine biotopes. Little information exists on body coloration and length of anal cirri within the genus's interstitial non-cave dwelling species. Laubierpholoe indooceanica are whitish-brown to colourless, almost transparent, some with yellowishbrown spots on the elytra and long pygidial cirri, much shorter than the palps (Westheide 2001). Laubierpholoe swedmarki is of white colour and without pygidial cirri (most probably lost) (Laubier 1975). Laubierpholoe riseri has long anal cirri, also shorter than the palps (Pettibone 1992). The body colour and pygidial cirri presence in other species of Laubierpholoe are unknown. The pigmentation loss and longer anal cirri displayed by Laubierpholoe massiliana sp. nov. have to remain unexplained for the time being. They could be regarded as adaptations to the cave dwelling lifestyle or they could be inherited from their ancestors.
Geography
Other species of Laubierpholoe were recorded in the Atlantic (off South America, Bermuda, Cuba), Indian (Seychelles) and Pacifi c (New Zealand) Oceans (Fig. 1A, Table 4). Our fi nding is the fi rst record of the genus in the Mediterranean Sea. The presence of L. massiliana sp. nov. is currently confi rmed only for the two marine caves in the Calanques, but the species is likely to be found in other caves with soft sediments in the Mediterranean Sea. However, we have not found L. massiliana sp. nov. in samples collected outside the caves.
Phylogeny
Our results inferred both from BI and ML confi rmed that Laubierpholoe massiliana sp. nov. indeed belongs to the genus Laubierpholoe, as well as the genus monophyly. It matches the previous results by Gonzalez et al. (2018a). The weak support and discordance with ML analysis of the clades within Laubierpholoe can be explained by data defi ciency for other species of Laubierpholoe (Table 1). Gonzalez et al. (2021) studied the mitochondrial genomes of Polynoidae with diff erent lifestyles and found similarity in cave and pelagic polynoids. Exciting future research could investigate the mitochondrial genome of cave dwelling Sigalionidae in comparison with other representatives of the family. | 5,773.6 | 2023-06-16T00:00:00.000 | [
"Environmental Science",
"Biology"
] |
Enhancing Word Embeddings with Knowledge Extracted from Lexical Resources
In this work, we present an effective method for semantic specialization of word vector representations. To this end, we use traditional word embeddings and apply specialization methods to better capture semantic relations between words. In our approach, we leverage external knowledge from rich lexical resources such as BabelNet. We also show that our proposed post-specialization method based on an adversarial neural network with the Wasserstein distance allows to gain improvements over state-of-the-art methods on two tasks: word similarity and dialog state tracking.
Introduction
Vector representations of words (embeddings) have become the cornerstone of modern Natural Language Processing (NLP), as learning word vectors and utilizing them as features in downstream NLP tasks is the de facto standard. Word embeddings (Mikolov et al., 2013;Pennington et al., 2014) are typically trained in an unsupervised way on large monolingual corpora. Whilst such word representations are able to capture some syntactic as well as semantic information, their ability to map relations (e.g. synonymy, antonymy) between words is limited. To alleviate this deficiency, a set of refinement post-processing methods-called retrofitting or semantic specialization-has been introduced. In the next section, we discuss the intricacies of these methods in more detail.
To summarize, our contributions in this work are as follows: • We introduce a set of new linguistic constraints (i.e. synonyms and antonyms) created with BabelNet for three languages: English, German and Italian. * Equal contribution • We introduce an improved post-specialization method (dubbed WGAN-postspec), which demonstrates improved performance as compared to state-of-the-art DFFN and AuxGAN (Ponti et al., 2018) models.
• We show that the proposed approach achieves performance improvements on an intrinsic task (word similarity) as well as on a downstream task (dialog state tracking).
Related Work
Numerous methods have been introduced for incorporating structured linguistic knowledge from external resources to word embeddings. Fundamentally, there exist three categories of semantic specialization approaches: (a) joint methods which incorporate lexical information during the training of distributional word vectors; (b) specialization methods also referred to as retrofitting methods which use post-processing techniques to inject semantic information from external lexical resources into pre-trained word vector representations; and (c) post-specialization methods which use linguistic constraints to learn a general mapping function allowing to specialize the entire distributional vector space. In general, joint methods perform worse than the other two methods, and are not model-agnostic, as they are tightly coupled to the distributional word vector models (e.g. Word2Vec, GloVe). Therefore, in this work we concentrate on the specialization and post-specialization methods. Approaches which fall in the former category can be considered local specialization methods, where the most prominent examples are: retrofitting (Faruqui et al., 2015) which is a post-processing method to enrich word embeddings with knowledge from semantic Lexical Resources (WordNet, BabelNet, etc.) lexicons, in this case it brings closer semantically similar words. Counter-fitting (Mrkšić et al., 2016) likewise fine-tunes word representations; however, conversely to the retrofitting technique it counterfits the embeddings with respect to the given similarity and antonymy constraints. Attract-Repel (Mrkšić et al., 2017b) uses linguistic constraints obtained from external lexical resources to semantically specialize word embeddings. Similarly to counter-fitting it injects synonymy and antonymy constraints into distributional word vector spaces. In contrast to counter-fitting, this method does not ignore how updates of the example word vector pairs affect their relations to other word vectors.
Linguistic Constraints
On the other hand, the latter group, postspecialization methods, performs global specialization of distributional spaces. We can distinguish: explicit retrofitting that was the first attempt to use external constraints (i.e. synonyms and antonyms) as training examples for learning an explicit mapping function for specializing the words not observed in the constraints. Later, a more robust DFFN method was introduced with the same goal -to specialize the full vocabulary by leveraging the already specialized subspace of seen words.
Methodology
In this paper, we propose an approach that builds upon previous works Ponti et al., 2018). The process of specializing distributional vectors is a two-step procedure (as shown in Figure 1). First, an initial specialization is performed (see §3.1). In the second step, a global specialization mapping function is learned, allowing to generalize to unseen words (see §3.2).
Initial Specialization
In this step a subspace of distributional vectors for words that occur in the external constraints is specialized. To this end, fine-tuning of seen words can be performed using any specialization method. In this work, we utilize Attract-Repel model (Mrkšić et al., 2017b) as it offers stateof-the-art performance. This method allows to make use of both synonymy (attract) and antonymy (repel) constraints. More formally, given a set A of attract word pairs and a set of R of repel word pairs, let V S be the vocabulary of words seen in the constraints. Hence, each word pair (v l , v r ) is represented by a corresponding vector pair (x l , x r ). The model optimization method operates over mini-batches: a mini-batch B A of synonymy pairs (of size k 1 ) and a mini-batch B R of antonymy pairs (of size k 2 ). The pairs of negative examples The negative examples serve the purpose of pulling synonym pairs closer and pushing antonym pairs further away with respect to their corresponding negative examples. For synonyms: where τ is the rectifier function, and δ att is the similarity margin determining the distance between synonymy vectors and how much closer they should be comparing to their negative examples. Similarly, the equation for antonyms is given as: A distributional regularization term is used to retain the quality of the original distributional vector space using L 2 -regularization.
where λ reg is a L 2 -regularization constant, and x i is the original vector for the word x i .
Consequently, the final cost function is formulated as follows:
Proposed Post-Specialization Model
Once the initial specialization is completed, postspecialization methods can be employed. This step is important, because local specialization affects only words seen in the constraints, and thus just a subset of the original distributional space X d . While post-specialization methods learn a global specialization mapping function allowing them to generalize to unseen words X u . Given the specialized word vectors X s from the vocabulary of seen words V S , our proposed method propagates this signal to the entire distributional vector space using a generative adversarial network (GAN) (Goodfellow et al., 2014). Hence, in our model, following the approach of Ponti et al. (2018), we introduce adversarial losses. More specifically, the mapping function is learned through a combination of a standard L 2 -loss with adversarial losses. The motivation behind this is to make the mappings more natural and ensure that vectors specialized for the full vocabulary are more realistic. To this end, we use the Wasserstein distance incorporated in the generative adversarial network (WGAN) as well as its improved variant with gradient penalty (WGAN-GP) (Gulrajani et al., 2017). For brevity, we call our model WGAN-postspec, which is an umbrella term for the WGAN and WGAN-GP methods implemented in the proposed post-specialization model. One of the benefits of using WGANs over vanilla GANs is that WGANs are generally more stable, and also they do not suffer from vanishing gradients.
Our proposed post-specialization approach is based on the principles of GANs, as it is composed of two elements: a generator network G and a discriminator network D. The gist of this concept, is to improve the generated samples through a minmax game between the generator and the discriminator.
In our post-specialization model, a multi-layer feed-forward neural network, which trains a global mapping function, acts as the generator. Consequently, the generator is trained to produce predictions G(x; θ G ) that are as similar as possible to the corresponding initially specialized word vectors x s . Therefore, a global mapping function is trained using word vector pairs, such that On the other hand, the discriminator D(x; θ D ), which is a multilayer classification network, tries to distinguish the generated samples from the initially specialized vectors sampled from X s . In this process, the differences between predictions and initially specialized vectors are used to improve the generator, resulting in more realistically looking outputs.
In general, for the GAN model we can define the loss L G of the generator as: While the loss of the discriminator L D is given as: In principle, the losses with Wasserstein distance can be formulated as follows: and An alternative scenario with a gradient penalty (WGAN-GP) requires adding gradient penalty λ coefficient in the Eq. (8).
Experiments
Pre-trained Word Embeddings. In order to evaluate our proposed approach as well as to compare our results with respect to current state-ofthe-art post-specialization approaches, we use popular and readily available 300-dimensional pretrained word vectors. Word2Vec (Mikolov et al., 2013) embeddings for English were trained using skip-gram with negative sampling on the cleaned and tokenized Polyglot Wikipedia (Al-Rfou' et al., 2013) by Levy and Goldberg (2014), while German and Italian embeddings were trained using CBOW with negative sampling on WacKy corpora (Dinu et al., 2015;Artetxe et al., 2017Artetxe et al., , 2018. Moreover, GloVe vectors for English were trained on Common Crawl (Pennington et al., 2014).
Linguistic Constraints. To perform semantic specialization of word vector spaces, we exploit linguistic constraints used in previous works (Zhang et al., 2014;Ono et al., 2015; (referred to as external) as well as introduce a new set of constraints collected by us (referred to as babelnet) for three languages: English, German and Italian. We use constraints in two different settings: disjoint and overlap. In the first setting, we remove all linguistic constraints that contain any of the words available in SimLex (Hill et al., 2015), SimVerb (Gerz et al., 2016) and WordSim (Leviant and Reichart, 2015) evaluation datasets. In the overlap setting, we let the SimLex, SimVerb and WordSim words remain in the constraints. To summarize, we present the number of word pairs for English, German and Italian constraints in Table 1.
Let us discuss in more detail how the lists of constraints were constructed. In this work, we use two sets of linguistic constraints: external and babelnet. The first set of constraints was retrieved from WordNet (Fellbaum, 1998) and Roget's Thesaurus (Kipfer, 2009), resulting in 1,023,082 synonymy and 380,873 antonymy word pairs. The second set of constraints, which is a part of our contribution, comprises synonyms and antonyms obtained using NASARI lexical embeddings (Camacho-Collados et al., 2016) and BabelNet (Navigli and Ponzetto, 2012). As NASARI provides lexical information for BabelNet words in five languages (EN, ES, FR, DE and IT), we collected each word with its related BabelNetID (a sense database identifier) to extract the list of its synonyms and antonyms using BabelNet API. Furthermore, to improve the list of Italian words, we also followed the approach proposed by Sucameli and Lenci (2017). The authors provided a new dataset of semantically related Italian word pairs. The dataset includes nouns, adjectives and verbs with their synonyms, antonyms and hypernyms. The information in this dataset was gathered by its authors through crowdsourcing from a pool of Italian native speakers. This way, we could concatenate Italian word pairs to provide a more complete list of synonyms and antonyms.
Similarly, we refer to the work of Scheible and Schulte im Walde (2014) that presents a new collection of semantically related word pairs in German, which was compiled through human evaluation. Relying on GermaNet and the respective JAVA API, the list of the word pairs was generated with a sampling technique. Finally, we used these word pairs in our experiments as external resources for the German language.
Initial Specialization and Post-Specialization. Although, initially specialized vector spaces show gains over the non-specialized word embeddings, linguistic constraints represent only a fraction of their total vocabulary. Therefore, semantic specialization is a two-step process. Firstly, we perform initial specialization of the pre-trained word vectors by means of Attract-Repel (see §2) algorithm. The values of hyperparameter are set according to the default values: λ reg = 10 −9 , δ sim = 0.6, δ ant = 0.0 and k 1 = k 2 = 50. Afterward, to perform a specialization of the entire vocabulary, a global specialization mapping function is learned. In our WGAN-postspec proposed approach, the post-specialization model uses a GAN with improved loss functions by means of the Wasserstein distance and gradient penalty. Importantly, the optimization process differs depending on the algorithm implemented in our model. In the case of a vanilla GAN (AuxGAN), standard stochastic gradient descent is used. While in the WGAN model we employ RMSProp (Tieleman and Hinton, 2012). Finally, in the case of the WGAN-GP, Adam (Kingma and Ba, 2015) optimizer is applied. Table 2: Spearman's ρ correlation scores on SimLex-999 (SL), SimVerb-3500 (SV) and WordSim-353 (WS). Evaluation was performed using constraints in three settings: (a) external, (b) babelnet, (c) external + babelnet.
Word Similarity
We report our experimental results with respect to a common intrinsic word similarity task, using standard benchmarks: SimLex-999 and WordSim-353 for English, German and Italian, as well as SimVerb-3500 for English. Each dataset contains human similarity ratings, and we evaluate the similarity measure using the Spearman's ρ rank correlation coefficient. In Table 2, we present results for English benchmarks, whereas results for German and Italian are reported in Table 3. Word embeddings are evaluated in two scenarios: disjoint where words observed in the benchmark datasets are removed from the linguistic constraints; and overlap where all words provided in the linguistic constraints are utilized. We use the overlap setting in a downstream task (see §5.2).
The results suggest that the post-specialization methods bring improvements in the specialization of the distributional word vector space. Overall, the highest correlation scores are reported for the models with adversarial losses. We also observe that the proposed WGAN-postspec achieves fairly consistent correlation gains with GLOVE vectors on the SimLex dataset. Interestingly, while exploiting additional constraints (i.e. external + babelnet) generally boosts correlation scores for German and Italian, the results are not conclusive in the case of English, and thus they require further investigation.
Dialog State Tracking
We also evaluate our proposed approach on a dialog state tracking (DST) downstream task. This task is a standard language understanding task, which allows to differentiate between word similarity and relatedness. To perform the evaluation we follow previous works (Henderson et al., 2014;Williams et al., 2016;Mrkšić et al., 2017b). Concretely, a DST model computes probability based only on pre-trained word embeddings. We use Wizard-of-Oz (WOZ) v.2.0 dataset (Wen et al., 2017;Mrkšić et al., 2017a) composed of 600 training dialogues as well as 200 development and 400 test dialogues.
In our experiments, we report results with a standard joint goal accuracy (JGA) score. The results in Table 4 confirm our findings from the previous word similarity task, as initial semantic specialization and post-specialization (in particular WGAN-postspec) yield improvements over original distributional word vectors. We expect this conclusion to hold in all settings; however, additional experiments for different languages and word em-beddings would be beneficial.
Conclusion and Future Work
In this work, we presented a method to perform semantic specialization of word vectors. Specifically, we compiled a new set of constraints obtained from BabelNet. Moreover, we improved a state-of-theart post-specialization method by incorporating adversarial losses with the Wasserstein distance. Our results obtained in an intrinsic and an extrinsic task, suggest that our method yields performance gains over current methods.
In the future, we plan to introduce constraints for asymmetric relations as well as extend our proposed method to leverage them. Moreover, we plan to experiment with adapting our model to a multilingual scenario, to be able to use it in a neural machine translation task. We make the code and resources available at: https://github.com/ mbiesialska/wgan-postspec | 3,655.4 | 2020-05-20T00:00:00.000 | [
"Computer Science",
"Linguistics"
] |
Identification of Evaluation Results in E-Banking Services Transaction for Product Recommendation using the BIRCH and Davies Bouldin Index Method
: E-banking transaction services in the banking world include many products offered to customers. However, the existence of regulatory factors may limit the extent to which banks can promote e-banking services, especially in cases where promotions involve incentives or special offers. Besides, this research aims to help recommend product promos from these services by using data analysis. Recommendations for this product promo can be known from the evaluation process of data collected from e-banking transaction services for purchases and payments. The research method that is used for this research is the clustering method. The clustering method for providing significant and influential re-sults compared to other methods suitable for this research is BIRCH, which is assisted by the Davies Bouldin index method to determine the list of product groups with the lowest value. The results of this research depicted that data can be grouped based on which services have low use levels. The services in question are Deposits, Credit Cards on Mobile Services, OVB, and Inter-Bank Transfers on Mobile Services. Therefore, this service can be used as a reference to increase product promotion by the bank. These services can be used as a reference by the bank to improve promotions so that all services can be used and utilized well by customers, thereby increasing the value of the bank’s services
1 Introduction E-banking transaction services include fund transfers, bill payments, balance checks, and other banking activities that can be accessed electronically via the Internet or banking applications.E-banking transaction services are in excellent customer demand because they provide easy banking access, time efficiency, and convenience [1].Users can carry out various transactions anytime and anywhere without coming directly to the bank.Apart from that, its advanced security features also make it increasingly popular with users.
The bank promotes e-banking services through various marketing channels, including advertisements on social media, television, radio, and newspapers.They can also organize unique promotional campaigns to attract the attention of potential customers.Offering special discounts, incentives, or bonuses for using e-banking services is also often used as a promotional strategy.In addition, the bank also focuses on consumer education, explaining the benefits and convenience of e-banking services through marketing materials and online guides.Several obstacles to promotional strategies at banks involve security concerns, mainly due to increased cybercrime cases.
Furthermore, obstacles may arise from challenges in effectively communicating the advantages of e-banking services to consumers and a need for more awareness regarding online security.Regulatory constraints can also restrict the extent banks can promote ebanking services, particularly when incentives or special offers are involved.Lastly, unequal access to technology or a deficiency in digital literacy within specific segments of society can pose additional barriers.
Moreover, customers' responses to e-banking services tend to differ.Many customers embrace this service favourably due to its convenience and accessibility, regularly engaging in online transactions, balance checks, and utilizing various e-banking features.Nonetheless, some customers may still require reassurance regarding the security of online transactions and may lean towards traditional methods [2].Customer behavior in e-banking is influenced by age and digital literacy levels.Banks actively enhance security measures, offer educational initiatives to alter perceptions, and promote the adoption of e-banking services.
Recommendations for e-banking service products from the results of data analysis have several benefits.First, it can improve user experience by providing solutions that suit their needs and transactional behavior [3].Second, it helps increase the penetration of e-banking services by providing relevant advice to users, encouraging wider adoption of the service.Third, improve customer retention by providing added value through accurate and useful recommendations.Data analysis allows banks to understand customer behavior patterns and provide more personalized and tailored recommendations.
Several researchers proposed clustering as a suitable method for the banking transaction domain [4][5][6][7][8].The clustering method is a data analysis method that groups objects or data into similar groups based on specific characteristics or attributes.Some popular clustering methods include K-Means, Minibatch K-Means, Hierarchical Clustering, BIRCH, and DBSCAN [6].The main goal of this method is to create groups that are homogeneous within them and heterogeneous between them.Clustering is used in various fields, such as data mining, pattern analysis, and market segmentation.
In e-banking services transactions, market segmentation employs clustering to categorize customers according to their transactional behavior, banking requirements, or risk profiles.This aids banks in delivering more personalized services and devising more efficient https://ejournal.ittelkom-pwt.ac.id/index.php/infotelmarketing strategies.Clustering is also utilized to pinpoint suspicious transaction patterns or identify groups of accounts potentially engaged in illicit activities, thereby enhancing security and fraud detection in banking operations.Additionally, clustering assists in grouping financial instruments or investment portfolios with similar risks, aiding banks in risk management and investment decision-making.Besides, financial institutions can optimize operations, enhance customer service, and better manage risks by applying clustering techniques to banking data.
Research Method
At this stage, a series of procedures or systematic approaches are used by researchers to plan, implement, and analyze data.Research methods help ensure that the research process is carried out well so that the results are reliable and can be interpreted correctly.The research methods for this research include the steps researchers take to design studies, collect data, and model data to obtain expected results.Figure 1 shows the research steps taken to get product promotion recommendations based on the results of evaluating ebanking transaction services using the clustering method.A detailed explanation of each step can be seen in the following subchapter.
Figure 1: The architecture of the e-banking service product promo recommendation process using the clustering method.
Data Collection
Research methods include ways of collecting data appropriate to the research objectives.This may involve using interviews, questionnaires, observations, experiments, or various methods.In this research, data collection was carried out with a focus on in-depth analysis of one case.Apart from that, there were direct observations and interviews with banking parties from Bank PT.XYZ is combined with literature studies to obtain the expected research results.Figure 2 is an example of e-banking transaction data from PT. XYZ from 2016-2023 (TotalJumlah -> Sum; KodeRegional -> Regional Code; Nominal -> Nominal).This data is used as a reference for evaluating e-banking transaction services so that the bank can obtain later information for product promotion recommendations.
Pre-Processing Data
Data pre-processing is a series of steps or techniques performed on data before the data can be used for analysis or modelling [1].Data pre-processing ensures that the data used in research or modelling is good quality, clean, and ready to use.Following are some general steps in data pre-processing:
Data Cleaning
This process is carried out to resolve and handle missing or incomplete data, detect and handle outliers or abnormal values, and identify and handle duplicates in the data [9].Additionally, it reduces the number of variables or features in the data if necessary.This technique also holds missing values by filling in or deleting empty data.
Data Transformation
Data transformation is carried out to normalize or standardize the data to ensure that all variables have a similar scale and carry out logarithmic or other adaptations to change the data distribution if necessary [9].Additionally, the format or data type is changed if required. https://ejournal.ittelkom-pwt.ac.id/index.php/infotel
Encoding Categorical Variable
Here, we will convert categorical variables into a form that can be used in analysis or modelling, such as one-hot encoding.
The results of data preprocessing will then be used in the data modelling stage using the clustering method to obtain product recommendations from e-banking transaction services.
Data Modeling using Clustering Methods
Selection of the appropriate clustering method depends on the nature and structure of the data, as well as the desired analysis objectives [7,[10][11][12].Three clustering methods are popular for banking transactions: K-Means, Minibatch K-Means, and BIRCH.The methods can group the number of large datasets.However, comparing the methods is used to choose the best method to provide the best cluster result from the modelling phase.Therefore, the clustering results can be evaluated using the Davies-Bouldin score.Data modelling with clustering involves grouping data into similar groups based on specific characteristics or patterns [13].The main goal of clustering is to group data so that data in groups has high similarities while different groups have significant differences.There are several commonly used clustering methods.
1. K-Means K-Means is a popular clustering method in data analysis.This method groups data into k groups (clusters) based on similar attributes.The result is grouping the data into k groups, where each has its center [1,2,10].K-Means is iterative and can efficiently solve clustering problems with large amounts of data.K-Means can be suitable for the analysis of banking e-banking services.K-Means can help banks group customers based on e-banking service usage patterns.This allows banks to understand customer needs and preferences to provide more tailored services.Banks can analyze customer transaction patterns using K-Means and identify groups with similar transactional behaviour.This can help in developing more effective service and promotion strategies.In addition, K-Means can be used to compile a product portfolio that better suits the needs of each customer group.Banks can customize ebanking products and service offerings based on the characteristics of each segment.By understanding different customer groups, banks can improve the user experience by presenting a more tailored interface and providing relevant service recommendations.K-Means can also analyze suspicious transaction patterns or groups requiring more security attention.Although K-Means has its advantages, it should be noted that the results can be affected by initialization and can produce rounded clusters, which may only sometimes reflect the actual structure of the data.Therefore, interpretation of K-Means results needs to be done carefully.
Minibatch K-Means
Minibatch K-Means is a variation of the K-Means algorithm designed to overcome some of the computational challenges associated with processing large amounts of data [1,10].In the conventional K-Means algorithm, the entire dataset is used to update the cluster centers at each iteration, which can be computationally expensive if the dataset is huge.In Minibatch K-Means, processing is performed on a small portion or "minibatch" of the dataset at each iteration.Minibatch K-Means provides a trade-off between computational efficiency and accuracy.Although the results may differ slightly from conventional K-Means, this approach allows faster processing, especially on large datasets.Minibatch K-Means is generally used in the context of big data or when computing resources are limited.The main difference between K-Means and Minibatch K-Means is how the two methods process data.K-Means uses the entire dataset to calculate and update cluster centers at each iteration.Meanwhile, Minibatch K-Means only randomly uses a small amount of data (minibatch) of the whole dataset at each iteration.In addition, K-Means can update cluster centers based on the entire dataset at each iteration.Meanwhile, Minibatch K-Means can update cluster centers based on the minibatch selected at each iteration.K-Means may also be more computationally expensive, especially on large datasets, because it involves processing the entire dataset at each iteration.Meanwhile, Minibatch K-Means is more computationally efficient because it only processes a small amount of data at each iteration.This makes it more suitable for handling big data or when computing resources are limited.Another thing is that K-Means is more likely to provide accurate results because it uses the entire dataset in each iteration.Minibatch K-Means provides results that may differ slightly from K-Means because it only uses a small sample of data.However, this is often accepted as a trade-off for computational efficiency.The choice between K-Means and Minibatch K-Means depends on the size of the dataset, the availability of computing resources, and the desired level of accuracy.Minibatch K-Means is often used when handling big data, or resource limitations are important factors.
In the context of e-banking, Minibatch K-Means can provide several advantages regarding computational efficiency and data analysis.Minibatch K-Means can help banks group e-banking customers into segments based on transactional behaviour or service usage.This allows banks to customize marketing strategies and services according to the preferences and needs of each part.In managing large volumes of e-banking transactions, Minibatch K-Means can provide computational efficiency by processing a small number of transactions at each iteration.This can help in improving system performance and responsiveness.Minibatch K-Means can be used to analyze suspicious or unusual transaction patterns.Banks can efficiently detect suspicious activity without processing the entire dataset by batching several transactions at each iteration.By understanding small segments' e-banking service usage patterns at each iteration, banks can improve the personalization of services and user interfaces for a better user experience.Applying Minibatch K-Means in e-banking can help banks overcome the challenges of handling big data while providing relevant and efficient analysis.However, as with any analytical method, interpretation of results and data security considerations must be carefully considered.
BIRCH
The balanced iterative reducing and clustering using hierarchies (BIRCH) method is a clustering algorithm designed to handle large amounts of data and build hierarchical structures efficiently [14][15][16].Here are some of the main characteristics of the BIRCH method: • Use of clustering feature (CF) trees: BIRCH uses a tree structure called a clustering feature tree (CF Tree) to represent data and group it based on specific attributes. https://ejournal.ittelkom-pwt.ac.id/index.php/infotel • Iterative and incremental: This algorithm works iteratively and incrementally, which means it processes data gradually and can adapt to adding new data without re-processing the entire dataset.• Use of cluster feature (CF): Each node in the CF tree stores statistical information, such as the number of objects, squares, and the center of mass of the objects represented by that node.This allows BIRCH to make clustering decisions quickly.• Dynamic cluster selection: BIRCH can dynamically determine the optimal number of clusters based on the CF tree structure, avoiding determining the number of groups in advance.• Scalability and Efficiency: Designed to handle large datasets, BIRCH achieves scalability by grouping data incrementally and efficiently using a tree structure.
The BIRCH method is generally suitable for applications where data grouping based on a hierarchical structure and extensive data handling are required, such as log analysis, data streaming processing, and clustering on geospatial data.In e-banking, the BIRCH method can be applied for various purposes related to data analysis and customer grouping.BIRCH can group e-banking customers based on service usage patterns, number of transactions, or risk profile.This helps banks understand customer preferences and needs to provide more tailored services.BIRCH can assist in efficient and large-scale processing by grouping e-banking transactions iteratively and incrementally.This can be applied to improve system performance and responsiveness to customer transactions.BIRCH can be used to detect suspicious transaction patterns or unusual groups of data, which could be potential indicators of fraudulent activity or questionable security.By understanding the cluster hierarchy, banks can provide more personalized services tailored to the needs of each customer group.This may include product recommendations, special offers, or customized user interfaces.BIRCH can help banks analyze hierarchical structures in e-banking data, such as relationships between accounts and subaccounts or incremental transactions over time.Applying BIRCH in e-banking can benefit from efficiently managing and analyzing big data, enabling banks to make smarter decisions and provide more relevant customer services.However, as with all analytical methods, interpretation of results and data security need to be a primary concern.
Davies Bouldin
The Davies-Bouldin index (DBI) is an internal evaluation metric to assess a dataset's clustering quality.It measures how well-separated and compact the clusters are.A lower DBI indicates more homogeneous and well-separated clusters [17][18][19][20].A better (lower) DBI is achieved when clusters are more homogeneous (smaller distances within clusters) and well-separated from each other (more considerable distances between clusters).The DBI is a measure of how well-separated and compact the clusters are.Here's a general outline of the steps to calculate the DBI: • Form clusters using a clustering algorithm (BIRCH) on your e-banking transaction data.Calculate the centroid (average position) for each cluster based on the data points' features.• Calculate the distances between data points within each cluster and between clusters.Standard distance metrics include Euclidean distance, Manhattan distance, or other appropriate measures based on the data characteristics.• Calculate the normalized Davies-Bouldin value and the DBI as the average of each normalized DBI value for all clusters.
A good clustering will have a DBI close to zero.The Davies-Bouldin index is one of several internal evaluation metrics used to assess clustering quality.While useful, it's essential to consider other metrics and contextual understanding related to the data and analysis goals for a comprehensive evaluation.A lower DBI indicates better cluster separation and compactness.Choosing an appropriate clustering algorithm and evaluating the results in the context of your e-banking transaction data and specific goals is essential.Additionally, we need to experiment with different clustering methods and parameter settings to optimize the DBI for your dataset.
The clustering method in data modelling shows that the BIRCH method has higher accuracy results than other methods.This can be seen in Figure 3.The BIRCH method provides better data cluster results than other methods.This accuracy value can be seen from the error value; the fewer errors given, the higher the accuracy value.Therefore, the BIRCH method will be applied to produce product promo recommendations based on evaluating the e-banking service data.The use of the Davies Bouldin index can be evaluated in the BRICH method after this method is implemented.A more detailed explanation is in the next chapter.
Result
The BIRCH method is chosen for cluster processing purchase and payment e-banking transaction service data.The data that has been collected has, of course, gone through data pre-processing.The robust scaler technique is used to normalize and overcome outliers, fit training data, and test data transformation to avoid data leakage.Figure 4 shows the results of the data transformation.
The data that has passed pre-processing is then processed using the elbow method, which is used in data modelling using the BIRCH method to help determine the optimal number of groups or clusters for a data set.The elbow method aims to identify where increasing the number of clusters does not significantly improve clustering quality.This https://ejournal.ittelkom-pwt.ac.id/index.php/infotelmethod can also evaluate the quality of clustering with various numbers of clusters with optimal values in a hierarchical structure.Figure 5 is a clustering result from the elbow method with optimal cluster values.
Figure 5 shows the results of grouping data, namely k=3.BIRCH forms a hierarchical structure, so the optimal number of clusters may be more subjective.Two different group results were obtained from the cluster results.This grouping was then used for further analysis using the Davies-Bouldin method.This method scores the groups of e-banking transaction services that are most widely used and customers most commonly use. Figure 6 shows purchase and payment transaction data details for e-banking services (TotalJumlah -> Sum; KodeRegional -> Regional Code; Nominal -> Nominal).Meanwhile, Figure 7 provides an overview in the form of a graphic of what services the evaluation results need to be reviewed or focused on for product promotion by the Bank to its customers.
Figure 6 and Figure 7 depict that the services that need to be focused on to increase promotion are deposits, OVB, and ATB transfers on mobile services.Recommendations for e-banking service products resulting from the data processing provide several benefits.This can improve user experience by providing solutions that suit their needs and transactional patterns.It can then help increase the adoption of e-banking services by pro-
Discussion
Transaction evaluation is a process in which banks assess and analyze financial transactions carried out by customers.The results of this transaction evaluation can be used as a benchmark or basis for determining product or service recommendations that suit customer needs and transactional behaviour.The results of these transaction evaluations can form the basis for providing more personalized and relevant product recommendations for customers.Creating a better banking experience and meeting customers' needs is important.
The result of this research is that the BIRCH method and the Dalvin Bouldin score help assess how often customers make transactions and what types are usually carried out.It offers the ability to understand customer preferences and requirements, analyze the overall volume of transactions and the associated financial value, and assess customer financial capacity.Furthermore, by scrutinizing transactional activities, banks can discern customer spending patterns, identify spending preferences, and customize product recommendations.Additionally, this analysis can highlight additional services or product features that customers may require based on their transactional behaviour, such as digital banking, insurance, or investment services.https://ejournal.ittelkom-pwt.ac.id/index.php/infotelThe BIRCH technique efficiently organizes data hierarchically.Simultaneously, the Davies-Bouldin index (DBI) assesses internal validity to gauge the effectiveness of a clustering method, with lower DBI values indicating better clustering.According to the evaluation findings, the bank is advised to intensify promotions and emphasize key services such as deposits, OVB, and ATB transfers via mobile services.Recommendations drawn from the analysis of e-banking service data present several benefits, including improving user satisfaction through providing customized solutions that align with their needs and transactional behaviours.
As a research result, it assists in increasing the usage of e-banking services by providing relevant guidance that encourages wider adoption.Furthermore, it enhances customer retention by providing additional value through accurate and beneficial recommendations.Thus, it enables banks to understand customer behavioural patterns, facilitating the delivery of more personalized suggestions tailored to their specific needs.
Conclusion
E-banking transaction services for purchases and payments encompass various products, such as fund transfers, bill payments, balance inquiries, and other banking activities accessible through the Internet or mobile banking applications.E-banking transaction services are highly sought after by customers due to their convenient accessibility, time-saving benefits, and flexibility.Users appreciate the ability to perform diverse transactions without the need to visit a physical bank office.The growing popularity of this service is further attributed to its advanced security features, which enhance user confidence and satisfaction.By selecting the best clustering method, BIRCH, which provides the highest accuracy compared to other methods, namely Minibatch K-Means and K-Means, is applied to obtain maximum group results to see the evaluation results of processing e-banking service transaction data.From the results of this research, there are services whose promotion needs to be increased, including the use of Deposits, OVB, and ATB Transfers.Some of these services can be used as a reference by the bank to improve promotions so that all services can be used and utilized well by customers, thereby increasing the value of the bank's services.However, this research can be enhanced by using other methods for grouping similar items or entities based on certain characteristics or features.In addition to its application in e-banking services, clustering can be employed in different domains for various purposes.
Figure 2 :
Figure 2: Sample of e-banking transaction data.
Figure 3 :
Figure 3: Comparison results of data modelling accuracy with the clustering method.
Figure 5 :
Figure 5: Cluster elbow results with optimal values.
Figure 6 :
Figure 6: Detailed evaluation results of e-banking transaction services.
Figure 7 :
Figure 7: Graph of evaluation results of e-banking transaction services. | 5,185.6 | 2024-05-31T00:00:00.000 | [
"Business",
"Computer Science",
"Economics"
] |
FiniteFlow: multivariate functional reconstruction using finite fields and dataflow graphs
Complex algebraic calculations can be performed by reconstructing analytic results from numerical evaluations over finite fields. We describe FiniteFlow, a framework for defining and executing numerical algorithms over finite fields and reconstructing multivariate rational functions. The framework employs computational graphs, known as dataflow graphs, to combine basic building blocks into complex algorithms. This allows to easily implement a wide range of methods over finite fields in high-level languages and computer algebra systems, without being concerned with the low-level details of the numerical implementation. This approach sidesteps the appearance of large intermediate expressions and can be massively parallelized. We present applications to the calculation of multi-loop scattering amplitudes, including the reduction via integration-by-parts identities to master integrals or special functions, the computation of differential equations for Feynman integrals, multi-loop integrand reduction, the decomposition of amplitudes into form factors, and the derivation of integrable symbols from a known alphabet. We also release a proof-of-concept C++ implementation of this framework, with a high-level interface in Mathematica.
Introduction
Scientific theoretical predictions often rely on complex algebraic calculations. This is especially true in high energy physics, where current and future experiments demand precise predictions for complex scattering processes. One key ingredient for making these predictions are scattering amplitudes in perturbative quantum field theory. The complexity of these predictions depends on several factors, most notably the loop order, where higher loop orders are required for higher precision, the number of scattering particles involved, and the number of independent physical scales describing the process.
A major bottleneck in many analytic predictions is the appearance of large expressions in intermediate stages of the calculation. These can be orders of magnitude more complicated than the final result. Large analytic cancellations often happen in the very last stages of a calculation. While computer algebra extraordinarily enhances our capability of making such predictions, due to the reasons above, it needs to complemented with more effective techniques when dealing with the most challenging computations.
One can trivially observe that the mentioned bottleneck is not present in numerical calculations with fixed precision, where every intermediate result is a number (or a list of numbers). However, in some fields, high-energy physics being one of them, analytic calculations provide more valuable results -since they can provide a more accurate numerical evaluation, and the possibility of further checks, studies and manipulations-and in some cases our only reliable way of obtaining them.
An effective method for sidestepping the bottleneck of complex intermediate expressions consists of reconstructing analytic expressions from numerical evaluations. This can be effectively used in combination with finite fields, i.e. numerical fields with a finite number of elements. In particular, we may choose fields whose elements can be represented by machine size integers, where basic operations can be done via modular arithmetic. Numerical operations over these fields are therefore relatively fast, but also exact, while they avoid the need of using multi-precision arithmetic, which is computationally expensive. Full analytic expressions for multivariate rational functions can then be obtained, using functional reconstruction techniques, from several numerical evaluations with different input values and, if needed, over several finite fields. Thanks to these algorithms, the problem of computing a rational function is reduced to the problem of providing an efficient numerical evaluation of it over finite fields. This implies that they can be applied to a very broad range of problems. Moreover, numerical evaluations can be massively parallelized, taking full advantage of the available computing resources.
Finite fields have been used by computer algebra systems for a long time. In highenergy physics, they were introduced in ref. [1] for the solution of (univariate) integration-byparts (IBP) identities. In ref. [2] we developed a multivariate reconstruction algorithm which is suitable for complex multi-scale problems and showed how to apply it to other techniques in high-energy physics, such as integrand reduction [3][4][5][6][7][8] and generalized unitarity [9][10][11][12]. Since then, functional reconstruction techniques based on evaluations over finite fields have been successfully employed in several calculations which proved to be beyond the reach of conventional computer algebra systems, with the available computing resources (see e.g. ref.s [13][14][15][16][17][18][19][20][21] for some notable examples).
Despite the remarkable results which have already been obtained with functional reconstruction techniques, there are still some obstacles which prevent a more widespread usage of them. A first one is the lack of a public implementation of functional reconstruction tech-niques suitable for arbitrary multivariate rational functions. 1 A second obstacle is the need of providing an efficient numerical implementation of the functions to be reconstructed, which is typically best done in statically compiled low-level languages, such as C, C++ or Fortran. In this paper, we try to address both these problems.
Let us assume a functional reconstruction algorithm is available, and consider the problem of providing an efficient numerical evaluation of an algorithm representing a rational function over finite fields. The first possibility is obviously low-level coding. This offers great performance and flexibility, but it is also hard and time-consuming to program and therefore it limits the usability of these techniques, especially if compared with the ease of use of computer algebra systems.
Another strategy consists in coding up some algorithms in low-level languages and providing interfaces in higher level languages and computer algebra systems. This combines the efficiency of low-level languages with the ease of use of high-level ones. As an example, consider the problem of solving a linear system of equations with parametric rational entries. Most computer algebra systems have dedicated built-in procedures for this. One could build another procedure, with a similar interface, which instead sends the system to a C/C++ code, which in turn solves it numerically several times and reconstructs the analytic solution from these numerical evaluations. For the user of the procedure, there is very little difference (except for performance) with respect to using the built-in procedure. Unfortunately, this strategy strongly limits the flexibility of functional reconstruction, since one is limited to use a set of hardcoded algorithms. Moreover, these algorithms often solve only an intermediate step of a more complex calculation needed by a scientific prediction. For instance, in most cases, one needs to substitute the solution of a linear system into another expression and then perform other operations or substitutions before obtaining the final result. Significant analytic simplifications often occur at the very last steps of the calculation, making thus the reconstruction of the intermediate steps a highly inefficient strategy. We thus need something which is much more flexible and applicable to a wider variety of problems.
One can observe that many different complex calculations share common building blocks. For instance, many calculations involve the solution of one or more linear systems, changes of variables, linear substitutions, and so on, in intermediate stages. These intermediate calculations, however, need to be combined in very different ways, depending on the specific problem an algorithm is meant to solve. Building on this observation, we propose a strategy which allows to easily combine these basic building blocks into arbitrarily complex calculations.
In this paper, we introduce a framework, that we call FiniteFlow, which allows to easily define complicated numerical algorithms over finite fields, and reconstruct analytic expressions out of numerical evaluations. The framework consists of three main components. The first component is a set of basic numerical algorithms, efficiently implemented over finite fields in a low-level language. These include algorithms for solving linear systems, linear fits, evaluating polynomials and rational functions, and many more. The second component is a system for combining these basic algorithms, used as building blocks, into arbitrarily more complex ones. This is done using dataflow graphs, which provide a graphical representation of a complex calculation. Each node in the graph represents a basic algorithm. The inputs of each of these algorithms are in turn chosen to be the outputs of other basic algorithms, represented by other nodes. This provides a simple and effective way of defining complicated algebraic calcu-lations, by combining basic building blocks into complex algorithms, without the need of any low-level coding. Indeed, this framework can be more easily used from interfaces in high-level languages and computer algebra systems. Dataflow graphs can be numerically evaluated and their output represents -in our framework -a list of rational functions. Numerical evaluations with different inputs can be easily performed in parallel, in a highly automated way. Indeed, this defines an algorithm-independent strategy for exploiting computing resources consisting of several cores, nodes, or machines. The third and last component consists of functional reconstruction algorithms, which are used to reconstruct analytic formulas out of the numerical evaluations (which in turn, as stated, may be represented by a graph). We propose here an improved version of the reconstruction algorithms already presented in [2].
The idea of using dataflow graphs for defining a numerical calculation is not new. For instance, they are notably used in the popular TensorFlow library [23], in the context of machine learning and neural networks. Although in this paper, we are interested in a very different application, one can point out a few similarities. For instance, the TensorFlow library allows to define complex functions (which, in that case, often represent neural networks) from high-level languages, which then need to be efficiently evaluated several times. To the best of our knowledge, this paper describes for the first time an application of dataflow graphs for the purpose of defining (rational) numerical algorithms over finite fields, to be used in combination with functional reconstruction techniques. In particular, we will show, by providing several examples, that they are suited for solving many types of important problems in high-energy physics.
With this paper, we also release a proof-of-concept C++ implementation of this framework, which includes a Mathematica interface. This code has already been used in a number of complex analytic calculations, including some recently published cutting-edge scientific results [14,17,21], and we thus think its publication can be highly beneficial. We stress that FiniteFlow is not meant to provide the solution of any specific scientific problem, but rather a framework which can be used for solving a wide variety of problems. We also provide public codes with several packages and examples of applications of FiniteFlow to very common problems in high-energy physics, which can be easily adapted to similar problems as well.
The paper is organized as follows. In section 2 we review some basic concepts about finite fields and rational functions, and we describe an efficient functional reconstruction algorithm for multivariate functions. In section 3 we describe our system for defining numerical algorithms, based on dataflow graphs. In section 4 we describe the implementation of several numerical algorithms over finite fields, which are the basic building blocks of the dataflow graphs representing a more complex computation. In the next sections, we describe the application of this framework to several problems in high-energy physics. In section 5 we discuss the reduction of scattering amplitudes to master integrals or special functions, as well as the Laurent expansion in the dimensional regulator. In section 6 we discuss the application to differential equations for computing master integrals. In sections 7 and 8 we discuss multi-loop integrand reduction and the decomposition of amplitudes into form factors respectively. In section 9 we talk about the derivation of integrable symbols from a known alphabet. Finally, in section 10 we give some details about our public proof-of-concept implementation, and in section 11 we draw our conclusions.
Finite fields and functional reconstruction
In this section, we set some notation by reviewing well-known facts about finite fields and rational functions. We also describe a multivariate reconstruction algorithm based on numerical evaluations over finite fields. The latter is based on the one described in [2] with a few modifications and improvements. A slightly more thorough treatment of the subject, which uses a notation compatible with the one of this paper, can be found in ref. [2] (in particular, in sections 2, 3 and Appendix A of that reference).
Finite fields and rational functions
Finite fields are mathematical fields with a finite number of elements. In this paper, we are only concerned with the simplest and most common type of finite field, namely the set of integers modulo a prime p, henceforth indicated with Z p . In general, for any positive integer n, we call Z n the set of non-negative integers smaller than n. All basic rational operations in Z n , except division, can be trivially defined using modular arithmetic. One can also show that if a ∈ Z n and gcd(a, n) = 1 then a has a unique inverse in Z n . In particular, if n = p is prime, an inverse exists for any non-vanishing element of Z p , hence any rational operation is well defined. This also defines a map between rational numbers q = a/b ∈ Q and Z n , for any rational whose denominator b is coprime with n. It also implies that any numerical algorithm which consists of a sequence of rational operations can be implemented over finite fields Z p . In particular, polynomials and rational functions are well defined mathematical objects.
Given a set of variables z = {z 1 , . . . , z n } and a numerical field F, one can define polynomial and rational functions of z over F. More in detail, any list of exponents α = {α 1 , . . . , α n }, defines the monomial (2.1) Polynomials over F have a unique representation as linear combinations of monomials with coefficients c α ∈ F. Rational functions are ratios of two polynomials with n α , d α ∈ F. Notice that the representation of f (z) in Eq. (2.3) is not unique. A unique representation can, however, be obtained by requiring numerator and denominator to have no common polynomial factor, and fixing a convention for the normalization on the coefficients n α , d α . We find that a useful convention is setting d min(α) = 1, where z min(α) is the smallest monomial appearing in the denominator with respect to a chosen monomial order. Using this convention, the constant term in the denominator, if present, is always equal to one. An important result in modular arithmetic is Wang's rational reconstruction algorithm [24,25] which allows, in some cases, to invert the map between Q and Z n . More in detail, given the image z ∈ Z n of a number q = a/b ∈ Q, Wang's algorithm successfully reconstructs q if n is large enough with respect to the numerator and the denominator of the rational numbermore precisely if and only if |a|, |b| < n/2. Hence, if a prime p is sufficiently large, one can successfully reconstruct a rational number from its image in Z p . However, our main reason for using finite fields is the possibility of performing calculations efficiently using machine size integers, which on most modern machines can have a size of 64 bits. This requirement forces us to use primes such that p < 2 64 . One can overcome this limitation by means of the Chinese remainder theorem, which allows to deduce a number a ∈ Z n from its images a i ∈ Z n i if the integers n i have no common factors. Hence, given a sequence of primes {p 1 , p 2 , . . .}, from the image of a rational number over several prime fields Z p 1 , Z p 2 , . . . one can deduce the image of the same number over Z p 1 p 2 ... . Once the product of the selected primes is large enough, Wang's reconstruction algorithm will be successful.
The functional reconstruction algorithm we will describe in the next section can be performed over any field, but in practice, it will only be implemented over finite fields. The coefficients of the reconstructed function (i.e. n α , d α appearing in Eq. (2.3)) are then mapped over the rational field using Wang's algorithm and checked numerically against evaluations of the function over other finite fields. If the check is unsuccessful, we proceed with reconstructing the function over more finite fields Z p i , and combine them using the Chinese remainder theorem as explained above, in order to obtain a new result over Q. The algorithm terminates when the result over Q agrees with numerical checks over finite fields which have not been used for the reconstruction.
Multivariate functional reconstruction
We now turn to the, so called, black box interpolation problem, i.e. the problem of inferring, with very high probability, the analytic expression of a function from its numerical evaluations. We assume to have a numerical procedure for evaluating an n-variate rational function f , whose analytic form is not known. More in detail, the procedure takes as input numerical values for z and a prime p and returns the function evaluated at z over the finite field Z p , We also allow the possibility for this procedure to fail the evaluation. We call this evaluation points bad points or singular points. Notice that these do not necessarily correspond to a singularity in the analytic expression of the function, but also to spurious singularities in intermediate steps of the procedure, or to any other interference with the possibility of evaluating the function with the implemented numerical algorithm. When this happens, the singular evaluation point is simply replaced with a different one. We stress, however, that the occurrence of such cases is extremely unlikely for a realistic problem, provided that the evaluation points are chosen with care (we will expand on this later). A functional reconstruction algorithm aims to identify the monomials appearing in the analytic expression of the function as in Eq. (2.3), and the value of their coefficients n α , d α . The basic reconstruction algorithm we discuss in this section is based on a strategy already proposed in ref. [2]. However, we find it is useful to briefly summarize it here in order to point out a few modifications and improvements, and also because the discussion below will benefit from having a rough knowledge of how the functional reconstruction works.
For univariate polynomials, our reconstruction strategy is based on Newton's polynomial representation [26] f where R is the total degree, and y 0 , y 1 , y 2 , . . . are a sequence of distinct numbers. One can easily check that, with this representation, any coefficient a r can be determined from the knowledge of the value of the function at z = y r and from the coefficients a j with j < r. In particular, it does not require the knowledge of the total degree R. This allows to recursively reconstruct the coefficients a r of the polynomial, starting from a 0 which is determined by f (y 0 ). If the total degree of the polynomial is not known, the termination criterion of the reconstruction algorithm is the agreement between new evaluations of the function f and the polynomial defined by the coefficients reconstructed so far. In some cases, the total degree, or an upper bound to it, is known a priori (see e.g. when this is used in the context of a multivariate reconstruction) and therefore one can terminate the reconstruction as soon as this bound is reached. After the polynomial is reconstructed, it is converted back into a canonical representation. For univariate rational functions, we distinguish two cases. The first case, which will be useful in the context of multivariate reconstruction, is when the total degree of the numerator and the denominator of the function are known and the constant term in the denominator does not vanish. This means, remembering the normalization convention we introduced in section 2.1, that we can parametrize the function as for known total degrees R and R . Given a sequence of distinct numbers y 0 , y 1 , y 2 , . . ., one can build a linear system of equations for the coefficients n j and d j by evaluating the function f at z = y k , namely This strategy is even more convenient when a subset of the coefficients is already known since it allows to significantly reduce the number of needed evaluations of the function (this will also be important later).
For the more general case where we do not have any information on the degrees of the numerator and the denominator of the function, we use Thiele's interpolation formula [26], where y 0 , y 1 , . . . is, once again, a sequence of distinct numbers. Thiele's formula is the analogous for rational functions of Newton's formula, and indeed it can be used in order to interpolate a univariate rational function using the same strategy we illustrated for the polynomial case. Similarly as before, the result is converted into a canonical form after the reconstruction. The reconstruction of multivariate polynomials is performed by recursively applying Newton's formula. Indeed a multivariate polynomial in z = {z 1 , . . . , z n } can be seen as a univariate polynomial in z 1 whose coefficients are multivariate polynomials in the other variables z 2 , . . . , z n , For any fixed numerical value of z 2 , . . . , z n one can apply the univariate polynomial reconstruction algorithm in z 1 to evaluate the coefficients a r . This means that the problem of reconstructing an n-variate polynomial is reduced to the one of reconstructing an (n − 1)variate polynomial. Hence we apply this strategy recursively until we reach the univariate case, which we already discussed. The result is then converted into the canonical form of Eq. (2.2). Before moving to the case of multivariate rational functions, it is worth making a few observations on the choice of the sequence of evaluation points y 0 , y 1 , . . . which appear in all the previous algorithms. We want to make a choice which does not interfere with our capability of evaluating the function f -which may have singularities both in its final expression and in intermediate stages of its numerical evaluation -and of inverting the relations for obtaining Thiele's coefficients. While making a choice which works for any function is clearly impossible, in practice we can easily make one which almost always works in realistic cases. This is done by choosing as y 0 a relatively large and random-like integer in Z p , where common functions are extremely unlikely to have singularities. We then increase the integer by a relatively large constant δ for the next points, i.e. y i+1 = y i + δ mod p. In the multivariate case, we use a different starting point y 0 and a different constant δ for each variable. Heuristically we find that, with this strategy, especially when using 64-bit primes, one can reasonably expect to find no singular point even in millions of evaluations.
We finally discuss the more complex problem of reconstructing a multivariate rational function f = f (z). We first observe that the reconstruction is much simpler when the constant term in the denominator is non-vanishing since this unambiguously fixes the normalization of the coefficients. As suggested in ref. [27], we can force any function to have this property by shifting its arguments by a constant vector s = {s 1 , . . . , s n } and reconstruct f (z + s) instead. In practice, by default, we find it is convenient to always shift arguments by a vector s such that any function coming from a realistic problem is unlikely to be singular in z = s. The criteria for the choice of s are similar to the ones for choosing the sample points, i.e. choosing relatively large and random-like numbers in Z p . The result is shifted back to its original arguments after the full reconstruction over a finite field Z p is completed (note that this detail differs from what is proposed in ref.s [2,27]). Hence, in the following, we assume that the function f has a non-vanishing constant term in the denominator, which by our choice of normalization is equal to one.
The key ingredient of the algorithm, which was also proposed in ref. [27], is the introduction of an auxiliary variable t which is used to rescale all the arguments of f . This defines the function h = h(t, z), which takes the form (2.10) In other words, h(t, z) is a univariate rational function in t, whose coefficients p r and q r are multivariate homogeneous polynomials of total degree r in z. This allows to reconstruct f (z) = h(1, z) by combining the algorithms discussed above for univariate rational functions and multivariate polynomials. In practice, we start with a univariate reconstruction in t for fixed values of z using Thiele's formula, in order to get the total degree of numerator and denominator. This allows to check that the denominator has indeed a non-vanishing constant term. The knowledge of the total degree also allows to use the system-solving strategy for the next reconstructions in t. We also perform a univariate reconstruction of the unshifted function f in each variable z j for fixed values of all the other variables. The minimum degrees in each variable are used to factor out polynomial prefactors in the numerator and denominator of the function, which can significantly reduce the number of required evaluations (note that it is essential that this is done before shifting variables since after the shift any realistic function is unlikely to have monomial prefactors). The maximum degrees are used to provide, together with the total degree r of each polynomial p r and q r , the possibility of terminating the polynomial reconstructions in the interested variables earlier. They are also used in order to estimate a suitable set of sample points for reconstructing the function before performing the evaluations, as we will explain when discussing the parallelization strategy in section 2.3. We then proceed with using the system-solving strategy for univariate rational functions, reconstructing h(t, z) as a function of t for any fixed numerical value of z. This provides an evaluation of the polynomials p r and q r at z. By repeating this for several values of z, we reconstruct these multivariate polynomials using Newton's formula recursively.
A few observations are in order. First, because the polynomials p r and q r are homogeneous, we can set z 1 = 1 and restore its dependence at the end. This makes up for having introduced the auxiliary variable t. Moreover, each reconstruction in t provides an evaluation of all the polynomial coefficients at the same time. For this reason, for each z we cache the reconstructed coefficients so that we can reuse the evaluations in several polynomial reconstructions. As for the reconstruction of the polynomials themselves, we proceed from the ones with a lower degree to the ones of higher degree (this detail is also different from what is presented in ref. [2]). This way the polynomials with a lower degree, which can be reconstructed with fewer evaluations, become known earlier and can thus be removed from the system of equations in Eq. (2.7) when reconstructing the ones with higher degrees. This makes the system of equations for higher-degree polynomial coefficients smaller, and hence it further reduces the number of needed evaluations. As already mentioned, we also use the information on the total degree r of each polynomial, as well as the maximum degree with respect to each variable, in order to terminate the polynomial reconstructions earlier, when possible.
When combining all these ingredients, we find that the number of evaluations we need for the reconstruction is comparable (if not better) to the one we would need by writing a general ansatz based on the degrees of the numerator and the denominator of the function (both the total ones and the ones with respect to each variable). However, while the ansatz-based approach is impractical for complicated multivariate functions since it requires to solve huge dense systems of equations, the method presented here is instead able to efficiently reconstruct very complex functions depending on several variables. It has indeed been applied to a large number of examples, some of which have been mentioned in the introduction.
Finally, we point out that so far we only discussed single-valued functions, but in the most common cases the output of an algorithm will actually be a list of functions (2.11) In this case, the reconstruction proceeds as described above considering one element of the output at the time. However, for each functional evaluation, the whole output, or a suitable subset of it, is cached (more details on our caching strategy are discussed in section 10) so that the same evaluations can be reused for the reconstruction of different functions f j (z).
Parallelization
A well-known advantage of functional reconstruction techniques is the possibility to extensively parallelize the algorithm. The most important step which can be parallelized is the evaluation of the function. Since numerical evaluations are independent of each other, they can be run in parallel over different threads, nodes, or even on different machines.
Building an effective parallelization strategy is actually easier for the multivariate case. As discussed above, the multivariate reconstruction begins with a univariate reconstruction in t of the function h(t, z) in Eq. (2.10), and univariate reconstructions in each variable z j for fixed values of all the other ones. This amounts, for an n-variate problem, to n + 1 univariate reconstructions, which being independent of each other can be all run in parallel. These univariate reconstructions are significantly faster than a multivariate one and provide valuable information for the multivariate reconstruction. After this step we use this information to determine a suitable set of evaluation points for the reconstruction of each function in the output of the algorithm, assuming the result is a generic function constrained by the degrees found in the univariate fits. While this might result in building a set of evaluation points which is slightly larger than needed, it allows to obtain a list of sample points which can be independently evaluated before starting any multivariate reconstruction. Any performance penalty, due to this oversampling of the function, is very small compared to what we gain from the possibility of parallelizing the evaluations. Therefore, this list of points can be split according to the available computing resources and evaluated in parallel over as many threads and cores as possible.
The main advantage of this parallelization strategy is the relative ease of implementation since it requires minimal synchronization (each thread just needs to evaluate all the points assigned to it and wait for the others to finish), and the fact that it does not depend on the specific numerical algorithm which is implemented.
After the evaluations have completed, they are collected and used for the multivariate reconstruction algorithm described above, except that the calls to the numerical "black box" procedure are now replaced by a lookup of its values in the set of cached evaluations.
In building the list of evaluation points, one may initially assume that a given number n p of primes will be needed for the reconstruction over Q (typically, one will start with the choice n p = 1), and that additional primes will only be used for a small number of evaluations, for the purpose of checking the result. If the rational reconstruction described in section 2.1 fails these checks, points using additional primes will be added to the list and another evaluation step will be performed for these.
We now turn to the univariate case. Since building an effective and clean parallelization strategy here is significantly harder than for the multivariate case and in general, one does not need too many evaluations for univariate reconstructions, by default we don't perform any parallelization in the univariate case. Indeed, the strategy illustrated above would require us to perform a univariate reconstruction over a finite field before performing any parallelization, in order to obtain information on the degree of the result. For the univariate case, however, this task is of comparable complexity than performing a complete reconstruction over Q. Despite this, if needed, we can still parallelize the evaluations in a highly automated way as follows. We start by making a guess on the maximum degrees of numerator and denominator and build a set of evaluation points based on this assumption. Then we perform the evaluations in parallel, as before. After this, we proceed with the reconstruction using the cached evaluations. If during the reconstruction we realize we need more evaluation points, we make a more conservative guess of the total degrees and proceed again with the evaluation (in parallel) of the additional points needed. This can be done automatically, by gradually increasing the maximum degree by a given amount in each step. We proceed this way until the reconstruction is successful. Obviously, making an accurate guess of the total degrees may not be easy. While making a conservative choice of a high degree might result in too many evaluations, choosing a total degree which is too low will cause the reconstruction to fail and it will create additional overhead in launching the parallel tasks for evaluating the additional points until the successful reconstruction. This method also requires some additional input from the user of the reconstruction algorithm, which needs to provide these guesses, since one cannot obviously make a choice which is good for any problem. For these reasons, we usually prefer to avoid parallelization in univariate reconstruction, but it is still important to know that a parallelization option is available for these cases as well.
Another step which can be, to some extent, parallelized, is the reconstruction itself. As mentioned, in the most common cases, the output of our algorithm is not a single function but a list of functions. Since the reconstructions of different functions from a set of numerical evaluations are independent of each other, they can also be run in parallel. Even if this is generally not as important as the parallelization of the functional evaluations, which are the typical bottleneck, it can still yield a sizeable performance improvement.
Dataflow graphs
In this section, we describe one of the main novelties introduced in this paper, namely a method for building numerical algorithms over finite fields using a special kind of computational graphs, known as dataflow graphs.
The algorithms described in the previous sections reduce the problem of computing any (multivariate and multivalued) rational function to the one of providing a numerical implementation of it, over finite fields Z p . The goal of the method described in this section is providing an effective way of building this implementation, characterized by good flexibility, performance, and ease of use.
Graphs as numerical procedures
Dataflow graphs are directed acyclic graphs, which can be used to represent a numerical calculation. The graph is made of nodes and arrows. The arrows represent data (i.e. numerical values in our case) and the nodes represent algorithms operating on the (incoming) data received as input and producing (outgoing) data as output. In the following, we describe a simplified type of dataflow graphs which we use in our implementation. Figure 1: A node in a dataflow graph, where arrows represent lists of values and nodes represent numerical algorithms. In our implementation, a node can take zero or more incoming arrows as input and has exactly one outgoing arrow as output.
In our case, an arrow represents a list of values. A node represents a basic numerical algorithm. A node can take zero or more incoming arrows (i.e. lists) as input and has exactly one outgoing arrow 2 (i.e. one list) as output (see fig. 1). For simplicity, we also require that each list (represented by an arrow) has a well-defined length which cannot change depending on the evaluation point. We also understand that nodes can also contain metadata with additional information needed to define the algorithm to be executed.
Typically nodes encode common, basic algorithms (e.g. the evaluation of rational functions, the solution of linear systems, etc. . . ) which are implemented, once and for all, in a low-level language such as C++. We will give an overview of the most important ones in section 4. Complex algorithms are defined by combining these nodes, used as building blocks, into a computational graph representing a complete calculation, where the output of a building block is used as input for others. This way complex algorithms are easily built without having to deal with the low-level details of their numerical implementation. The graph can indeed be built from a high-level language, such as Python or Mathematica.
Several explicit examples will be provided in the next sections.
In each graph, there are two special nodes, namely the input node and the output node. The input node does not represent any algorithm, but only the list of input variables z of the graph. The output node can be any node of the graph and represents, of course, its output. A dataflow graph thus defines a numerical procedure which takes as input the variables z represented by the input node and returns a list of values which is the output of the output node.
Graphs are evaluated as follows. First, every time we define a node, we assign to it an integer value called depth. The depth of a node is the maximum value of the depths of its inputs plus one. The depth of the input node is zero, by definition. When an output node is specified, we recursively select all the nodes which are needed as inputs in order to evaluate it, and we sort this list by depth. We then evaluate all the nodes from lower to higher depths and store their output values to be used as inputs for other nodes. Once the output node has been evaluated, its output is returned by the evaluation procedure.
Learning nodes
As we already mentioned, each node has exactly one list of values as output, and the length of this list is not allowed to change depending e.g. on the evaluation point. However, for some algorithms, we cannot know the length of the output at the moment the numerical procedure is defined.
Consider, as an example, a node which solves a linear system of equations. The length of the output of such a node depends on whether the system is determined or undetermined and on its rank. This information is usually not known a priori but it must be learned after the system is defined. In this case, it can easily be learned by solving the system numerically a few times.
For this reason, we allow nodes to have a learning phase. The latter is algorithm-dependent and typically consists of a few numerical evaluations used by a node in order to properly define its output. Hence, the output of these nodes can be used as input by other nodes only after the learning phase is completed (since, before that, their output cannot be defined at all).
More algorithms which require a learning phase will be discussed later.
Subgraphs
An important feature which makes this framework more powerful and usable in realistic problems is the possibility of defining nodes in which one can embed other graphs. Consider a graph G 1 with a node N which embeds a graph G 2 . We say that G 2 is a subgraph. Typically, the node N will need to evaluate the subgraph G 2 a number of times in order to produce its output.
The simplest case of a subgraph is when the node N takes one list as input, passes the same input to G 2 in order to evaluate it, and then returns the output of G 2 . This case, which we call simple subgraph, is equivalent to having the nodes of G 2 attached to the input node of N inside the graph G 1 directly, but it can still be useful in order to organize more cleanly some complicated graphs.
Another interesting example, which we call memoized subgraph, can be beneficial when parts of the calculation are independent of some of the variables. This type of subgraph effectively behaves the same way as the simple subgraph described above, except that it remembers the input and the output of the last evaluation. If the subgraph needs to be evaluated several times in a row with the same input, the memoized subgraph simply returns the output it has stored. This is particularly useful when combined with the Laurent expansion, the subgraph fit, or the subgraph multi-fit algorithms. We will give a description of these later in this paper, but for now, it suffices to know that they require to evaluate a dataflow graph several times for fixed values of a subset of the variables. In such cases, one may not wish to evaluate every time the parts of a graph which only depend on the variables which remain fixed for several evaluations. One can thus optimize away these evaluations by embedding the appropriate parts of the graph in a memoized subgraph.
One more useful type of subgraph is a subgraph map. This takes an arbitrary number of lists of length n as input, where n is the number of input parameters for G 2 . The graph G 2 is then evaluated for each of the input lists, and the outputs are chained together and returned. This is useful when the same algorithm needs to be evaluated for several inputs.
There are however other interesting cases, where the node N requires to evaluate G 2 several times and perform non-trivial operations on its output. Some useful examples are
Numerical algorithms over finite fields
In this section, we discuss several basic, numerical algorithms which can be used as nodes in a graph. These are best implemented in a low-level language such as C++ for efficiency reasons. In later sections, we will then show how to combine these basic building blocks into more complex algorithms which are relevant for state-of-the-art problems in high-energy physics.
Evaluation of rational functions
Most of the algorithms we are interested in have some kind of analytic input, which can be cast in the form of one or more lists of polynomials or rational functions. The numerical evaluation of rational functions is, therefore, one of the most ubiquitous and important building blocks in our graphs. These nodes take as input one list of values z and return a list of rational functions {f j (z)} evaluated at that value, as schematically illustrated in fig. 2.
Polynomials are efficiently evaluated using the well known Horner scheme. Given a univariate polynomial Horner's method is based on expressing it as This formula only has R multiplications and R additions for a polynomial of degree R, and it can be easily obtained from the canonical representation in Eq. 4.1. Therefore, it is a great compromise between ease of implementation and efficiency. For multivariate polynomials, Horner's scheme is applied recursively in the variables. In practice, we use an equivalent but non-recursive implementation and we store all the polynomial data (i.e. the integer coefficients c j and the metainformation about the total degrees of each sub-polynomial) in a contiguous array of integers.
Rational functions are obviously computed as ratios of two polynomials. If the denominator vanishes for a specific input, the evaluation fails and yields a singular point.
Dense and sparse linear solvers
A wide variety of algorithms involves solving one or more linear systems at some stage of the calculation. Moreover, the solution of these systems is often the main bottleneck of the procedure, hence having an efficient numerical linear solver is generally very important.
In general, consider a n × m linear system with parametric rational entries in the parameters z, This is defined by the matrix A = A(z), the vector b = b(z), and the set of m variables or unknowns {x j }. We assume there is a total ordering between the unknowns, x 1 x 2 · · · x m . Borrowing from a language commonly used in the context of IBP identities, we say that x 1 has higher weight than x 2 and so on. This simply means that, while solving the system, we always prefer to write unknowns with higher weight in terms of unknowns with a lower weight.
For each numerical value of z and prime p, the entries A ij (z) and b i (z) are evaluated and the numerical system is thus solved over finite fields. If the system is determined, for each numerical value of z the solver returns a list of values for the unknowns x j . In the more general case where there are fewer independent equations than unknowns, one can only rewrite a subset of the unknowns as linear combinations of others. This means that we identify a subset of independent unknowns and the complementary subset of dependent unknowns which are written as linear combinations of the independent ones, Notice that the list of dependent and independent unknowns also depends on the chosen ordering (or weight) of the unknowns. The output of a linear solver is a list with the coefficients c ij appearing in this solution. More specifically they are the rows of the matrix stored in row-major order. If only the homogeneous part of the solution is needed, the elements c i0 are removed from the output. A node representing a linear solver is schematically depicted in fig. 3. It often happens that only a subset of the unknowns of a system is actually needed. We therefore also have the possibility of optionally specifying a list of needed unknowns. When this is provided, only the part of the solution which involves needed unknowns on the left-hand side is returned. This also allows to perform some further optimizations during the solution of the system, as we will show later.
As mentioned in section 3.2, a linear solver is an algorithm which needs a learning step. During this step, with a few numerical solutions of the system, the list of dependent and independent unknowns is learned. This step is also used in order to identify redundant equations, i.e. equations which are reduced to 0 = 0 after the solution, which are thus removed from the system, improving the performance of later evaluations. Moreover, the list of dependent and independent unknowns is checked during every evaluation against the one obtained in the learning step, since accidental zeroes may change the nature of the solution of the system. If the two do not agree, the evaluation fails and the input is treated as a singular point.
It is useful to distinguish between dense and sparse systems of equations. Even if they represent the same mathematical problem, from a computational point of view they are extremely different.
Dense systems of equations are systems where most of the entries in the matrix A defined above are non-zero. For these systems, we store the n rows of the matrix as contiguous arrays of m + 1 integers. We also add an (m + 2)-th entry to these arrays which assigns a different numerical ID to each equation, for bookkeeping purposes. The solution is a straightforward and rather standard implementation of Gauss elimination. This distinguishes two phases. The first, also known as forward elimination, puts the system in row echelon form. The second, also known as back substitution, effectively solves the systems by putting it in reduced row echelon form. The algorithm we use for dense systems works as follows.
Forward elimination We set a counter r = 0, and loop over the unknowns x k for k = 1, . . . , m, i.e. from higher to lower weight. At iteration k, we find the first equation E j with j ≥ r where the unknown x k is still present. If there is no such equation, we move on to the next iteration. Otherwise, we move equation E j in position r, and we "solve" it with respect to x k , i.e. we normalize it such that x k has coefficient equal to 1. We thus substitute the equation in all the remaining equations below position r. We then increase r by one and proceed with the next iteration.
Back substitution
We loop over the equations E r , for r = n, n − 1, . . . , 1, i.e. from the one in the last position to the one in the first position. At iteration r, we find the highest weight unknown x j appearing in equation E r (note that this is guaranteed to have coefficient equal to one, after the forward elimination). If equation E r does not depend on any unknown, we proceed to the next iteration. Otherwise, we substitute equation E r , which contains the solution for x j , in all the equations E k with k < r. We then proceed with the next iteration in r.
During the learning phase, the system is solved in order to learn about the independent variables and the independent equations. The remaining equations, once the system has been reduced, will become trivial (i.e. 0 = 0) and will, therefore, be removed. We also identify unknowns which are zero after the solution. These are then removed from the system and this allows to find, through another numerical evaluation, a smaller set of independent equations needed to solve for the non-zero unknowns. We recall that solving a dense n × n system has O(n 3 ) complexity, hence it scales rather badly with the number of equations and unknowns, and it greatly benefits from the possibility of removing as many equations as possible.
We now discuss the reduction of sparse systems of equations, i.e. systems where most of the entries of the matrix A which defines it are zero. In other words, in such a system, most of the equations only depend on a relatively small subset of variables. We represent sparse systems using a sparse representation of the rows of the matrix (4.7). More specifically, for each row, we store a list of non-vanishing entries, with the number of their columns and their numerical value. These are always kept sorted by column index from the lowest to the highest, or equivalently by the weight of the corresponding unknown from the highest to the lowest. We also store additional information, namely the number of non-vanishing terms in the row, and the index of the equation corresponding to that row. When solving such systems, it is crucial to keep the equations as simple as possible at every stage of the solution. This way the complexity of the algorithm can have a much better scaling behaviour than the one which characterizes dense systems (the exact scaling strongly depends on the system itself, and it can be as good as O(n) in the best scenarios, and as bad as O(n 3 ) in the worst ones). For these reasons, we implement a significantly different version of Gauss elimination for sparse systems, which shares many similarities with the one illustrated in [28]. We first sort the equations by complexity, from lower to higher. The complexity of an equation is defined the same way as in ref. [28], and is determined by the following criteria, sorted by their importance, • the highest weight unknown in the equation (higher weight means higher complexity) • the number of unknowns appearing in the equation (a higher number means higher complexity) • the weight of the other unknowns in the equation, from the ones with higher weight to the ones with lower weight (i.e. if two equations have the same number of unknowns, the most complex one is the one with the highest weight unknown among those that are not shared by both equations).
If all the three points above result in a tie between two equations, it means that they depend on exactly the same list of unknowns, and we say that they are equally complex, hence their relative order does not matter. Obviously, other more refined definitions of this complexity are possible, but we find that this one works extremely well for systems over finite fields, despite its simplicity. Once the equations are sorted, the algorithm for sparse systems works as follows for the forward and back substitution.
Forward elimination We create an array S whose length is equal to the number of unknowns. This will contain, at position j, the index S(j) of the equation containing the solution for the unknown x j , or a flag indicating that there is no such equation. We loop over the equations E i for i = 1, . . . , n, from lower to higher complexity. If any equation is trivial (i.e. 0 = 0), we immediately move to the next one. We find the first unknown x k appearing in E i for which a solution is already found, via lookups in the array S.
The equation E S(k) is then substituted into E i . This is repeated until all the unknowns in E i have no solution registered in S. We then take the highest weight unknown x h and "solve" the equation with respect to it. Once again, this means that we normalize the equation such that the coefficient of x h is one. We then register this solution in S by setting S(h) = i, and proceed with the next iteration in i.
Back substitution
We remove from the system any equation which has become trivial (0 = 0), but otherwise, we keep them in the same order. We also update the array S to take this change into account. Let thus n I be the number of independent equations which survived after the forward elimination. We loop again over the remaining equations E i for i = 1, . . . , n I − 1, from lower to higher complexity, excluding the last one. If a list of needed unknowns was specified, and the highest weight unknown in equation E i is not in it, the equation is skipped. We then find the first unknown x k in E i , excluding the highest weight one, for which a solution is registered in S, and substitute equation E S(k) in E i . This is repeated until none of the unknowns in E i , except the one with the highest weight, has a registered solution in S.
Similarly as before, during the learning step we identify the independent equations, removing all the other ones, and the independent unknowns. For each equation E i , we also keep track of all the other equations which have been substituted into E i either during the forward elimination or the back substitution. This information can optionally be used in order to further reduce the number of needed equations. Indeed, while after the learning stage the system is guaranteed to contain only independent equations, there might be a smaller subset of them which is still sufficient in order to find a solution for all the needed unknowns, which sometimes are a significantly smaller subset of the ones appearing in the system. This simplification is obtained, when requested, by means of the mark and sweep algorithm. 3 After the learning stage, for each equation E j we have a list of dependencies L j . If E k ∈ L j , then E j depends on E k , because E k was substituted into E j at some point during the Gauss elimination. We identify a set R containing the so-called roots, which in our case are the equations containing solutions for the needed unknowns. We then "mark" all the equations in R. "Marking" is a recursive operation achieved, for any equation E j , by setting a flag which says that E j is needed, and then recursively marking all the equations in L j whose flag hasn't already been set. Finally, we "sweep", i.e. discard all the equations which have not been marked. Notice that the mark and sweep algorithm loses some information about the system, and therefore it is only performed upon request. It is however extremely useful, e.g. when solving IBP identities, since it often reduces the size of the system by a factor even greater than the simplification achieved in the learning stage. We also implement a dense solver algorithm called node dense solver, which takes the elements of the matrix in Eq. (4.7) from its input node, in row-major order, rather than from analytic formulas. In the future, we may implement a node sparse solver as well, which only takes the non-vanishing elements of that matrix from its input node, and uses a sparse solver for the solution.
It goes without saying that these linear solvers can also be used in order to invert matrices, using the Gauss-Jordan method. Indeed, the inverse of a n × n matrix A ij is the output of a linear solver node which solves the system with respect to the following unknowns, sorted from higher to lower weights, In particular, when only the homogeneous part of the solution is returned, the output of such a node will be a list with the matrix elements A −1 ij in row-major order. Both the dense and the sparse solver can be used for this purpose, depending on the sparsity of the matrix A ij . Also, notice that the matrix A ij is invertible if and only if {x j } is the list of dependent unknowns and {t j } is the list of independent unknowns. This can be checked after the learning phase has completed.
Linear fit
Linear fits are another important algorithm which is often part of calculations in high energy physics. For instance, it is the main building block of integrand reduction methods (see section 7). They are also used, for instance, in order to match a result into an ansatz and to find linear relations among functions.
In general, in a linear fit, we have two types of variables, which in this section we call z = {z j } and τ = {τ j }. In particular, the z variables are simply regarded as free parameters. A linear fit is thus defined by an equation of the form where f j and g are known (or otherwise computable) rational functions and the coefficients x j are unknown. While f j and g depend on both sets of variables, the unknown coefficients x j can depend on the free parameters z only. For each numerical value of z, Eq. (4.9) is sampled for several numerical values of the variables τ . This will generate a linear system of equations for the unknowns x j . Linear fits are thus just a special type of dense linear systems. Hence, we refer to the previous section for information about the implementation of the reduction and the output of this algorithm. In particular, each equation is associated with a particular numerical sample point for the variables τ = {τ j }. In total, we use m + n checks sample points, where m is the number of unknowns and n checks is the number of additional equations added as a further consistency check (we typically use n checks = 2). Notice that, just like in any other linear system, redundant equations (including the additional n checks ones) are eliminated after the learning phase.
In order to use this algorithm more effectively for the solution of realistic problems, and in particular integrand reduction, we made it more flexible by adding some additional features. The first one is the possibility of introducing a set of auxiliary functions a = a(τ, z) and defining several (known) functions g j on the right-hand side, in order to rewrite Eq. (4.9) as m j=0 x j (z) f j (z, a(τ, z)) + f 0 (z, a(τ, z)) = j w j g j (z, a(τ, z)). (4.10) This is useful when the functions f j and g j are simpler if expressed in terms of these auxiliary variables a, which do not need to be independent, and when the sum on the right-hand side is not collected under a common denominator. The value of the weights w j in the previous equation depends on the inputs of the node defining the algorithm. The first input list is always the list of variables z, similarly to the case of a linear system. If no other input is specified, then we simply define w j = 1 for all j. If other lists of inputs are specified, besides z, they are joined and interpreted as the weights w j appearing in Eq. (4.10). This allows to define these weights numerically from the output of other nodes. As we will see in section 7, this allows, among other things, to easily implement multi-loop integrand reduction over finite fields without the need of writing any low-level code. We provide two more usages of linear fits as nodes embedding subgraphs (introduced in section 3.3). The first one is used to find linear relations among the entries of the output of the subgraph G which has input variables {τ, z}. Let be the output of G. The subgraph fit algorithm solves the linear fit problem In particular, if z is chosen to be the empty list, and f m = 0, it will find vanishing linear combinations of the output of G with numerical coefficients. An interesting application of this is the attempt of simplifying the output of a graph. One can indeed estimate the complexity of each entry in the output at the price of relatively quick univariate reconstructions. A simple way of estimating the complexity is based on the total degrees of numerators and denominators, which can be found with one univariate reconstruction over one finite field, as we already explained. A more refined method would be counting the number of evaluation points needed for the reconstruction of each entry over a finite field, which can be found after the total and partial degrees have been computed and it is an upper bound on the number of non-vanishing terms in the functions. One can, of course, use any other definition or estimate for the complexity of the output functions based on other elements specific to the considered problem. Regardless of how we choose to define it, we then sort the entries by their complexity, from lower to higher, and we make sure that f m = 0, e.g. by appending to the graph a Take node (this will be described in section 4.4). After solving the linear fit above for the unknowns x j we are then able to write more complex entries of the output as linear combinations of simpler entries. When this is possible, only the independent entries need to be reconstructed. The second subgraph application of linear fits, which we call subgraph multi-fit, is a generalization of the previous one. If Eq. (4.11) represents, again, the output of a graph G, the subgraph multi-fit node, which has input variables z, is defined by providing a list of lists of the form where the sublists can be of any length and σ ij are integer indexes in the interval [1, m]. For each sublist {σ ij } j , the subgraph multi-fit node solves the linear fit with respect to the unknowns x ij . Since this amounts to performing a number of linear fits, this node obviously has a learning phase, where independent unknowns, independent The arrows represent lists with the matrix elements A ij , B ij and C ij in row-major order. Their number of rows and columns is defined when the node is created. equations, and zero unknowns are detected for each one of them. Notice that all the fits can share the same evaluations of graph G, for several values of τ and fixed values of z. An application of this algorithm is the case when the functional dependence of a result on the subset of variables τ (which may also be the full set of variables, if z is the empty list) can be guessed a priori by building a basis of rational functions. In this case, one may create a graph G which contains both the result to be reconstructed and the elements of the function basis, and a second graph with a subgraph-fit node using G as a subgraph. This allows to reconstruct the result via a simpler functional reconstruction over the z variables only, or via a numerical reconstruction if z is the empty list. An example of this is given at the end of section 6.2.
Basic operations on lists and matrices
The algorithms listed in this subsection have a simple implementation and they can be thought as utilities for combining in a flexible way outputs of other numerical algorithms in the same graph. While they typically execute very quickly compared to others, they greatly enhance the possibilities of defining complex algorithms by means of the graph-based approach described in this paper. They are: Take Takes any number of lists as input and returns a specified list of elements {t 1 , t 2 , . . .} from them, where t j can be any element of any of the input lists. The same element may also appear more than once in the output list. This is a very flexible algorithm for rearranging the output of (combinations of) other nodes. Indeed many of the listmanipulation algorithms below can also be implemented as special cases of this.
Chain Takes any number of lists as input, chains them and return them as a single list.
Slice Takes a single list as input and returns a slice (i.e. a contiguous subset of it) as output.
Matrix Multiplication Given three positive integers N 1 , N 2 and N 3 , this node takes two lists as input, interprets them as the entries of a N 1 × N 2 matrix and a N 2 × N 3 matrix (in row-major order) respectively, multiplies them and return the entries of the resulting N 1 × N 3 matrix (still in row-major order). This node is depicted in fig. 4. Notice that, because different nodes of this type can interpret the same inputs as matrices of different sizes (as long as the total number of entries is consistent), this algorithm can also be used to contract indexes of appropriately stored tensors, multiplying lists and scalars, and other similar operations. As an example, consider a node whose output are the entries of a rank 3 tensor T ABC with dimensions N A , N B , N C , and another one which represents a matrix M CD with dimensions N C and N D . We can then perform a tensormatrix multiplication using this node with Similarly, we can multiply the tensor T ABC by a scalar, the latter represented by a list of length one, by setting Sparse Matrix Multiplication Similar to the Matrix Multiplication above, but more suited for cases where the matrices in the input are large and sparse, so that one wants to store only their non-vanishing entries in the output of a node. This algorithm is defined by the three dimensions N 1 , N 2 and N 3 as above, as well as by a list of potentially nonvanishing columns for each row of the two input matrices. The two inputs are then interpreted as lists containing only these potentially non-vanishing elements. The output of this node lists, as before, the elements of the resulting N 1 × N 3 matrix, stored in a dense representation in row-major order.
Addition Takes any number of input lists of length L, and adds them element-wise.
Multiplication Takes any number of input lists of length L, and multiplies them elementwise.
Take And Add Similar to the Take algorithm above, except that it takes several lists from its inputs {{t 1j } j , {t 2j } j , . . .}, where each of them might have a different length. It then returns the sum of each of these sub-lists { j t 1j , j t 2j , . . .}.
Non-Zeroes
This node takes one list as input and returns only the elements which are not identically zero. The node requires a learning step where the non-zero elements are identified via a few numerical evaluations (two by default). Because some algorithms have a rather sparse output (i.e. with many zeroes), it is very often useful to append this node at the end of a graph and use it as output node. This can remarkably improve memory usage during the reconstruction step. Given its benefits and its minimal impact on performance, we also recommend using such an algorithm as the output node when the sparsity of the output is not known a priori.
Laurent expansion
In physical problems, one is often interested in the leading coefficients of the Laurent expansion of a result with respect to one of its variables, which in this section we call . The most notable examples in high-energy physics are scattering amplitudes in dimensional regularization, which are expanded for small values of the dimensional regulator. Other applications can be the expansion of a result around special kinematic limits. The coefficients of this expansion are often expected to be significantly simpler than the full result. Hence, it is beneficial to be able to compute the Laurent expansion of a function without having to perform its full reconstruction first. The Laurent expansion algorithm is another algorithm whose node embeds a subgraph. Consider a graph G representing a multi-valued (n + 1)-variate rational function in the variables { , z}. The Laurent expansion node takes a list of length n as input, which represents the variables z, and returns for each output of G the coefficients of its Laurent expansion in the first variable , up to a given order in .
Without loss of generality, we only implement Laurent expansions around = 0. Expansions around other points, including infinity, can be achieved by combining this node with another one implementing a change of variables, which in turn can be represented by an algorithm evaluating rational functions.
When the node is defined, we also specify the order at which we want to truncate the expansion. We can specify a different order for each entry of the output of G. This node has a learning phase, during which it performs two univariate reconstructions in of the output of G, for fixed numerical values of the variables z. The first reconstruction uses Thiele's formula, and it is used to learn the total degrees in of the numerators and the denominators of the outputs of G. Subsequent reconstructions will use the univariate systemsolving strategy discussed in section 2.2. For each output of G, any overall prefactor p , where p can be a positive or negative integer, is also detected and factored out to simplify further reconstructions (notice that, after this, we can assume the denominators to have the constant term equal to one). These prefactors also determine the starting order of the Laurent expansion, which therefore is known after the learning phase. The second reconstruction in the learning phase is simply used as a consistency check.
On each numerical evaluation, for given values of the inputs z, this node performs a full univariate reconstruction in of the output of G and then computes its Laurent expansion up to the desired order. Numerical evaluations of G are cached so that they can be reused for reconstructing several entries of its output for the same values of z. The coefficients of the Laurent expansions of each element are then chained together and returned.
Algorithms with no input
We finally point out that it is possible to define nodes and graphs with no input.
Nodes with no input correspond to algorithms whose output may only depend on the prime field Z p . Some notable examples are nodes implementing the solution of linear systems and linear fits (already discussed in the previous sections) in the special case where they do not depend on any list of free parameters z. Another example is a node evaluating a list of rational numbers over a finite field Z p . Nodes with no input have depth zero, by definition.
A graph with no input is a graph with no input node. The nodes with the lowest depth of such a graph are nodes with no input. The output of this graph only depends on the prime field Z p used. These graphs thus represent purely numerical (and rational) algorithms and no functional reconstruction is therefore needed. For these, we perform a rational reconstruction of their output by combining Wang's algorithm and the Chinese remainder theorem, as explained in section 2.1.
Reduction of scattering amplitudes
One of the most important and phenomenologically relevant applications of the methods described in this paper is the reduction of scattering amplitudes to a linear combination of master integrals or special functions. This is indeed a field which, in recent years, has received a notable boost in our technical capabilities, thanks to the usage of finite fields and functional reconstruction techniques. In particular, the results in [14,17,21] have been obtained using an in-development version of the framework presented here.
Integration-by-parts reduction to master integrals
Loop amplitudes are linear combinations of Feynman integrals. Consider an -loop amplitude A, or a contribution to it, with e external momenta p 1 , . . . , p e . The amplitude, in dimensional regularization, is a linear combination of integrals over the d-dimensional components of the loop momenta k 1 , . . . , k . It is convenient to write down these integrals in a standard form. For each topology T , let {D T,j } n j=1 be a complete set of loop propagators, including auxiliary propagators or irreducible scalar products, such that any scalar product of the form k i · k j and k i · p j is a linear combination of them. In principle, there could also be scalar products of the form k i · ω j where ω j are vectors orthogonal to the external momenta p j , but these can be integrated out in terms of denominators D T,j an auxiliary (see e.g. ref. [29]), hence they are not considered here. Effective methods for obtaining this representation of an amplitude are integrand reduction (discussed in section 7) and the decomposition into form factors (discussed in section 8). Hence, given a list of integers α = (α 1 , . . . , α n ), we consider Feynman integrals with the standard form Notice that the exponents α j may be positive, zero, or negative. Amplitudes may be written as linear combinations of the integrals above as where the coefficients a j are rational functions of kinematic invariants, and possibly of the dimensional regulator = (4 − d)/2. While the computation of the coefficients a j can be highly non-trivial for high-multiplicity processes, in this section we assume them to be known. Notice that they don't need to be known analytically, but it is sufficient to have a numerical algorithm for obtaining them. As already mentioned, popular and successful examples of these algorithms are integrand reduction and the decomposition into form factors, which we will talk about in sections 7 and 8.
In general, the integrals I j appearing in Eq. (5.2) are not all linearly independent. Indeed they satisfy linear relations such as integration-by-parts (IBP) identities, Lorentz invariance identities, symmetries, and mappings. The collection of these relations form a large and sparse system of equations satisfied by these integrals. The most well known and widely used method for generating such relations is the Laporta algorithm [30]. In this case, these identities can be easily generated using popular computer algebra systems, especially with the help of public tools (for instance, the package LiteRed [31] is very useful for generating these relations in Mathematica). However, any other method can be used for building this system, as long as this is provided in the form of a set of linear relations satisfied by Feynman integrals.
As explained in section 4.2, in order to properly define this system we need to introduce an ordering between the unknowns, in this case, the integrals I j = I T, α , by assigning a weight to them [30]. The efficiency of the linear solver, as well as the number of equations left after applying the mark-and-sweep method described in section 4.2, strongly depends on this ordering. However, there is no unique good choice of it, and any choice can be specified when the system is defined. An example which we found has good properties and prefers integrals with no higher powers of denominators is provided in Appendix B.
By solving this large system, which we henceforth refer to as IBP system, we reduce the amplitude to a linear combination of a smaller set of integrals G j , known as master integrals (MIs), where the coefficients c jk are rational functions of the kinematic invariants and the dimensional regulator . Notice that the master integrals G k do not need to have the form in Eq. (5.1), but they can be arbitrary combinations of integrals of that form. In general, one may have a list of preferred integrals which are defined as special linear combinations of those in Eq. (5.1) characterized by good properties, such as a simpler pole structure or a better analytic behaviour (a convenient property to have is uniform transcendental weight [32], see also section 6.2). In such cases, we add the definition of these integrals to the system of equations and we assign to them a lower weight so that they are automatically chosen as independent integrals, to the extent that this is possible, during the Gauss elimination. Another important fact to note is that the list of master integrals is determined after the learning phase of the linear solver, which only requires a few numerical evaluations. After IBP reduction, amplitudes are written as linear combinations of master integrals where the coefficients A k , which are rational functions of the kinematic invariants and the dimensional regulator , can be obtained via a matrix multiplication between the coefficients of the unreduced amplitude in Eq. Putting these ingredients together, it is very easy to define a simple dataflow graph representing this calculation, which is depicted in fig. 5.
• The input node of the graph represents the variables { , x} where is the dimensional regulator and x can be any number of kinematic invariants.
• The node a j takes as input the input node { , x} and evaluates the coefficients of the unreduced amplitude in Eq. (5.2). If these are known analytically this can simply be a node evaluating a list of rational functions, otherwise, it can represent something more complex, such as one of the algorithms we will discuss later.
• The IBP node is a sparse linear solver which takes as input the input node { , x} and returns the coefficients c jk obtained by numerically solving the IBP system. Because these systems are homogeneous, we only return the homogeneous part of the solutions (the removed constant terms are zero). After the learning phase is completed, we strongly recommend running the mark-and-sweep algorithm to reduce the number of equations.
• Finally, the output node, which can be defined after the learning phase of the IBP node has been completed, is a matrix multiplication which takes as inputs the node a j and the IBP node. The graph we just described, which is depicted on the left of fig. 5, ignores a technical subtlety. The reduction coefficients c jk returned by the IBP node express the non-master integrals in terms of master integrals. However, depending on our choice of masters, the master integrals themselves may also appear on the r.h.s. of the unreduced amplitude in Eq. (5.2). This creates a mismatch which does not allow to properly define the final matrix multiplication. More explicitly, if n MIs is the number of master integrals, and n non-MIs is the number of nonmaster integrals appearing in Eq. (5.2), then the IBP node returns a n non-MIs × n MIs matrix. However, if the n MIs masters also appear on the r.h.s. of Eq. (5.2), then the output of the a j node has length n non-MIs + n MIs , which makes it incompatible with the IBP solution matrix it should be multiplied with. This can, however, be easily fixed by defining an additional node representing the reduction of the master integrals to themselves, which is trivially given by the n MIs × n MIs identity matrix I n MIs (this is a node with no input, which evaluates a list of rational numbers, see also section 4.6). After this is chained (see section 4.4) to the output of the IBP node, we obtain a (n non-MIs + n MIs ) × n MIs matrix containing the reduction to master integrals of all the n non-MIs + n MIs Feynman integrals in Eq. (5.2). Hence the final matrix multiplication is well defined. This graph is depicted on the right of fig. 5. Notice that these two extra nodes are not necessary when all the master integrals have been separately defined and don't appear in our representation of the unreduced amplitude, because in this case the output of the a j node has length n non-MIs and can be directly multiplied with the matrix computed by the IBP node. The dataflow graph we just described computes the coefficients of the reduction of an amplitude to master integrals. By evaluating this graph several times, one can thus reconstruct the analytic expressions of these coefficients, without the need of deriving large and complex IBP tables. This represents a major advantage, since IBP tables for complex processes can be extremely large, significantly more complex than the final result for the reduced amplitude, hard to compute, and also hard to use -since they require to apply a huge list of complex substitutions to the unreduced amplitude. On the other hand, using the approach described here, IBP tables are always computed numerically, and only the final result is reconstructed analytically. Hence, by building a very simple dataflow graph consisting of only a few nodes, we are able to sidestep the bottleneck of computing and using large, analytic IBP tables. This approach has already allowed (e.g. in ref.s [14,21]) to perform reductions in cases where the IBP tables are known to be too large and complex to be computed and used with reasonable computing resources.
Reduction to special functions and Laurent expansion in
The expansion in the dimensional regulator of the master integrals can often be computed in terms of special functions, such as multiple polylogarithms or their elliptic generalization. When this is possible, the result for the expansion of a scattering amplitude might be significantly simpler than the one in terms of master integrals. For the sake of argument, we assume to be interested in the poles and the finite part of the amplitude, but everything we are going to discuss can be easily adapted to different requirements.
Let {f k = f k (x)} be a complete list of special functions (which may also include numerical constants) such that every master integral G j , expanded up to its finite part, can be expressed in terms of these as where g jk are rational functions in and x (typically, they will be a Laurent polynomial in , but this is not important for the discussion). Recalling Eq. (5.4), we can thus write the amplitude in terms of these functions as where the rational functions u k are defined as We are interested in the expansion in of the coefficients u k , i.e. in the coefficients u where p is such that the leading pole of the amplitude is proportional to −p .
Computing the coefficients u k (x) in our framework is straightforward. We start from the dataflow graph described in section 5.1, which computes the coefficients A j of the master integrals. We first extend this graph in order to get the unexpanded coefficients u k ( , x). This is simply done by adding a node g jk , which evaluates the rational functions g jk ( , x) defined in Eq. (5.6), and a matrix multiplication node between the node A j (which was the output node in the previous case) and g jk , as one can see from Eq. (5.8). Let us call this dataflow Figure 6: Two graphs which, combined, compute the expansion of the coefficients of scattering amplitudes in terms of special functions. In the first graph G 1 , A j represents the calculation of the coefficients of the master integrals presented in section 5.1 and fig. 5. The graph G 2 then takes the graph G 1 as subgraph in one of its nodes, which computes its Laurent expansion in . graph G 1 . We then create a new graph G 2 with input variables x. Inside the latter, we create a Laurent expansion node, which takes as its subgraph G 1 . The output of this node will be the coefficients u (j) k of the Laurent expansion in Eq. (5.9). This is depicted in fig. 6. Because the coefficients u (j) k (x) might not be all linearly independent, we also recommend running the subgraph fit algorithm described in section 4.3 in order to find linear relations between them. In particular, this can be used to rewrite the most complex coefficients as linear combinations of simpler ones, yielding thus a more compact form of the result, which is also easier to reconstruct.
We finally point out that one can further elaborate the graph G 1 in order to include renormalization, subtraction of infrared poles, and more. This is done by rewriting these subtractions, which are typically known analytically since they depend on lower-loop results, in terms of the same list of functions {f k } as the amplitude. After doing so, the coefficients of the subtraction terms multiplying the functions f k are added to the graph as nodes evaluating rational functions and summed to the output using the Addition node described in section 4.4. This may thus simplify the output of the Laurent expansion computed in the graph G 2 , which will, therefore, be easier to reconstruct.
It goes without saying that, even if we focused on scattering amplitudes, the same strategy can be applied to other objects in quantum field theory which have similar properties, such as correlation functions and form factors.
Differential equations for master integrals
Integration-by-parts identities are not only useful to reduce amplitudes to linear combinations of a minimal set of independent master integrals, but they are also helpful for the calculation of the master integrals themselves via the method of differential equations [33,34]. Indeed the master integrals G j satisfy systems of coupled partial differential equations with respect to the invariants x, Solving these systems of differential equations is one of the most effective and successful methods for computing the master integrals.
Reconstructing differential equations
The differential equation matrices can be easily computed within our framework, using a strategy which is completely analogous to the one described in section 5.1 for the reduction of scattering amplitudes to master integrals. We first determine the master integrals by solving the IBP system numerically over finite fields. For this, we need to specify a list of needed integrals, i.e. a list of needed unknowns for which the system solver is asked to provide a solution since in general one cannot reduce to master integrals all the integrals appearing in an IBP system. We then make a conservative choice which is likely to be a superset of all the integrals which need to be reduced for computing the differential equations.
Then, the derivatives of master integrals with respect to kinematic invariants can be easily computed analytically, where the integrals I j have the standard form defined in Eq. (5.1), and a (x) jk are rational functions of the invariants x. At this stage, we may reset the list of needed unknowns of the IBP system to include only the ones appearing on the r.h.s. of Eq. (6.2). After that, we also strongly suggest running the mark-and-sweep algorithm for removing unneeded equations.
By solving the IBP system, we reduce the integrals I j to master integrals. This defines the coefficients c jk of the reduction, as in Eq. (5.3). The differential equation matrices A (x) jk are thus obtained via the matrix multiplication A dataflow graph representing this calculation can, therefore, be almost identical to the one described in section 5.1, and it is depicted on the left side of fig. 7. In particular, it has an input node representing the variables { , x}, a node evaluating the rational functions a (x) ij appearing in the unreduced derivatives of Eq. (6.2), a node with the IBP system, and an output node with the final matrix multiplication in Eq. (6.3). Similarly to the case of the amplitudes, if the master integrals are chosen such that they can also appear on the r.h.s. of the unreduced derivatives in Eq. (6.2), then we also add an identity matrix node, and a node chaining this to the IBP node (see section 5.1 and fig. 5 for more details).
By defining this graph, we can reconstruct the differential equations of the master integrals directly, without the need of computing IBP tables analytically, similarly to the case of the reduction of amplitudes. This usually yields a substantial simplification of the calculation. fig. 5 for the reduction of amplitudes to master integrals. On the right, a dataflow graph computing the differential equation matrices divided by . As explained in section 6.2 we can verify the -form of the differential equations by checking numerically that the output of the latter graph does not depend on .
Differential equations in -form
It has been observed in ref. [32] that the differential equation method becomes more powerful and effective if the master integrals are chosen such that they are pure functions of uniform transcendental weight, henceforth UT functions for brevity (we refer to ref. [32] for a definition). Remarkably, as pointed out in ref. [32], one can build a list of integrals having this property without doing any reduction at all, by using some effective rules or by analyzing the leading singularities of Feynman integrals. A systematic algorithm which implements this analysis of leading singularities was developed and described in [35], and recently extended in ref. [36]. Once a (possibly over-complete) list of UT integrals has been found, their definitions can be added as additional equations to the IBP system. By assigning a lower weight to these integrals, they will be automatically chosen as preferred master integrals by the system solver.
If {G k } represents a basis of UT master integrals, the differential equation matrices take the form [32] A ij (x), (6.4) i.e. their dependence is simply an prefactor. This greatly simplifies the process of solving the system perturbatively in . When this happens, the system of differential equations is said to be in -form or in canonical form. If a list of UT candidates is known, as we said, we may add their definition to the IBP system, and then we divide the final result for the matrices by . This is done by modifying the dataflow graph defined before, with the addition of a node which evaluates the rational function 1/ , and a new output node which multiplies the 1/ node with the older output node. As mentioned in section 4.4, the multiplication can be accomplished using a matrix multiplication node, which interprets 1/ as a 1 × 1 matrix and its second input node as a matrix with only one row. This modified graph is depicted on the right side of fig. 7. Once the graph is defined, we can evaluate it numerically for several values of while keeping x fixed, in order to check that the system is indeed in -form.
Differential systems in -form for UT integrals are typically much easier to reconstruct since they have a particularly simple functional structure. Hence they benefit even more from the functional reconstruction methods described in this paper, which allow to reconstruct this result directly without dealing with the significantly more complex analytic intermediate expressions one would have in a traditional calculation.
A large class of Feynman integrals can be written as linear combinations of iterated integrals of the form (using the notation in [37]) where the d log arguments w k are commonly called letters. A complete set of letters is called alphabet. While the alphabet of a multi-loop topology is often inferred from the differential equations for the master integrals, there are some cases where this can instead be guessed a priori. In such cases, finding differential equations for UT master integrals can be even simpler, since the calculation can be reduced to a numerical linear fit [38]. Indeed, it is well known that differential equations matrices for UT master integrals, aside from their prefactor, are expected to be linear combinations of first derivatives of logarithms of letters, with rational numerical coefficients. More explicitly, if W = {w 1 , w 2 , . . .}, with w k = w k (x), is the alphabet of a topology, the differential equation matrices for a set of UT master integrals take the form where C (x,k) ij are rational numbers. Hence, rather than employing multivariate functional reconstruction methods, in this case, we can compute the differential equation matrices just with a linear fit. For this purpose, we can apply the subgraph multi-fit algorithm described in section 4.3. More explicitly, we create a graph G 1 whose output contains both the derivatives ∂ log w k /∂x of the letters and the (non-vanishing) matrix elements A (x) ij . We then build a second graph G 2 , with a subgraph multi-fit node containing G 1 , which performs a fit of each matrix element with respect to the basis of functions {∂ log w k /∂x}, as described in section 4.3. Notice that G 2 has no input node, and therefore we run a numerical reconstruction of its output over Q using Wang's algorithm and the Chinese remainder theorem, as already explained in section 4.6.
Differential equations with square roots
In our discussion of differential equations for UT integrals, we have so far neglected the potential issue of the presence of square roots in their definition. Indeed, there are cases where, in order to define UT integrals, one needs to take rational linear combinations of integrals of the form of Eq. (5.1) and multiply them by a prefactor equal to the square root of a rational function of the invariants x. Even in cases where these square roots may be removed via a suitable change of variables, one may still wish to compute differential equations in terms of the original kinematic invariants, at least as a first step. While square roots may be accommodated in our framework by considering finite fields which are more general than Z p , we would like to point out in this section that this is not necessary for computing differential equations.
Let us rewrite the master integrals G j as where R j is either equal to one or to the square root of a rational function of the invariants x, and {G r.f. j } are a set of root-free master integrals, which can be written as rational linear combinations of standard Feynman integrals of the form of Eq. (5.1). We first observe that the quantity which can be easily computed analytically, is also a rational combination of standard Feynman integrals. This is indeed manifest on the r.h.s. of the equation, since if R is the square root of a rational function then R /R is rational. This implies that, via IBP identities, we can reduce the root-free quantity in Eq. (6.8) to the root-free master integrals and obtain where the matrixà jk is also rational since the IBP reduction itself cannot introduce any non-rational factor. One can finally show that the matrixà (x) jk is related to the differential equation matrix A (x) jk we wish to compute by jk , (6.10) i.e. simply by rescaling each matrix element by a prefactor. We can, therefore, apply the methods described above to the quantity in Eq. (6.8) (rather than to the simple derivatives of the master integrals), use it to reconstruct the rational matrixÃ
Integrand reduction
In section 5, we explained how to compute the reduction of a scattering amplitude, either to a linear combination of master integrals, or to a combination of special functions expanded in the dimensional regulator. One of the ingredients of the algorithm discussed there was a representation of the unreduced amplitude (cfr. with Eq. (5.2)) as a linear combination of Feynman integrals cast in a standard form, such as the one in Eq. (5.1). In particular, within the FiniteFlow framework, we need a numerical algorithm capable of computing the coefficients a j of such a linear combination. This is trivial if an analytic expression is known for the a j , however this is not always the case. Indeed, for complex processes, casting the amplitude in such a form is a very challenging problem. In this section we discuss integrand reduction methods [3][4][5][6][7][8]29], which are an efficient way of obtaining this representation of the amplitude and are suitable for complex processes.
Integrand reduction via linear fits
Amplitudes are linear combinations of integrals of the form where N is a polynomial numerator in the loop components, and D j are denominators of loop propagators. For simplicity we consider only one topology, identified by a set of loop denominators, but we understand that the approach discussed here should be applied to all the topologies contributing to the amplitude we wish to compute. Integrand reduction methods rewrite the integrand as a linear combination of functions belonging to an integrand basis where ∆ β ≡ ∆ β 1 ···βn has the form ∆ β = j c β,j m β,j (k 1 , . . . , k ). (7.3) In the previous equations, the functions m β,j are a complete set of irreducible numerators, i.e. numerators which, at the integrand level, cannot be written in terms of the loop propagators they are sitting on. In other words, the terms m β,j (k 1 , . . . , k ) Once an integrand basis has been chosen, the unknown coefficients c β,j can be determined via a linear fit. For this purpose, we can use the algorithm described in section 4.3, using kinematic invariants as the free parameters z, loop variables as the additional set of variables τ , and c β,j as the unknowns of the system. In particular, in a dimensional regularization scheme where the external states are four-dimensional (such as the t'Hooft-Veltman [39] and Four-Dimensional-Helicity [40] schemes) the integrand depends on 4 + ( + 1) 2 loop variables. These can be chosen, for instance, as the four-dimensional components of the loop momenta with respect to a basis of four-dimensional vectors, plus the independent scalar products between the extra-dimensional projections of the loop momenta While performing a global fit of all the coefficients at the same time is theoretically possible, in practice it is extremely inefficient and impractical, because it involves solving a dense system of linear equations of the same size as the number of the unknown coefficients. One can however greatly simplify the problem by splitting it into several smaller linear fits, using the so-call fit-on-the-cut approach [3]. This consists of evaluating the integrand on multiple cuts, i.e. values of the loop momenta such that a subset of loop propagators vanish (we also understand that vanishing denominators should be removed from the integrand when applying a cut). On each cut, we also have fewer independent loop variables τ , namely those which are not fixed by the cut conditions. This method is best used in a top-down approach. We first cut (i.e. set to zero) as many propagators as possible, and use linear fits on maximal cuts for determining a first set of coefficients. We then proceed with linear fits on cuts involving fewer and fewer propagators. When performing a fit on a multiple cut, on-shell integrands which have already been fixed on previous cuts are first subtracted from the integrand. These subtractions are sometimes referred to as subtractions at the integrand level. If an integrand has all denominator powers α j equal to one, with this approach we determine the coefficients of one and only one on-shell integrand ∆ β on each cut. If higher powers of propagators are present, more than one off-shell integrand must be determined at the same time on some cuts, but this doesn't qualitatively change the algorithm for the linear fit (this point is discussed more in detail in ref.s [41,42]). Subtractions at the integrand level can be implemented using the linear fit algorithm described in Eq. (4.10). In particular we define a dataflow graph where each multiple cut corresponds to a different node, whose output are the coefficients c β,j determined by a linear fit. Each node takes as input, besides the kinematic variables z, the output of all the higherpoint cuts with non-vanishing subtractions on the current cut. The coefficients returned by the input nodes will be used as weights w j (cfr. with Eq. (4.10)) for the subtractions, while the integrand will typically have weight one. Notice that the linear fit described in Eq. (4.10) also allows to define a set of auxiliary functions, in terms of which we can express both the integrand and the integrand basis. This is very convenient since it allows to express these objects in terms of scalar products, spinor chains, or other auxiliary functions which may yield a simple representation. Hence, we only have to explicitly substitute the cut solutions inside these functions, which are then evaluated numerically. In particular, we don't need to substitute the cut solutions inside the full integrand or the full set of integrand basis elements appearing in the subtraction terms, which may yield complicated expressions in some cases.
We also note that, when using the loop variables described above, finding a rational parametrization of the cut solutions is a simple problem of linear algebra. As already explained in [29] one can proceed by splitting the cut denominators into categories, such that denominators in the same category depend on the same subset of loop momenta. For each category, we choose a representative, and we take differences between all the other denominators and this representative. This gives a linear system of equations for the four-dimensional components of the loop momenta which live in the space spanned by the external legs. Next, we complete this solution by setting to zero the representatives of each category. This gives a system of equations which is linear in the variables µ ij . Notice that this is only true when we work in d dimensions.
If neither the integrand nor the integrand basis depend on the dimensional regulator , it is convenient to embed the integrand reduction nodes in a memoized subgraph, as described at the end of section 3.3. During the Laurent expansion, this avoids repeating the integrand reduction for several values of and fixed values of the kinematic invariants. If the integrand has a polynomial dependence on , as it happens for amplitudes in the t'Hooft-Veltman regularization scheme, we can still implement this improvement by using several memoized subgraphs, i.e. one for each power of in the numerator.
The algorithm we described allows to define a dataflow graph implementing a full multiloop integrand reduction over finite fields, starting from a known integrand and an integrand basis. This is particularly convenient when using FiniteFlow from a computer algebra system. The output of all these nodes can then be collected, using either a Chain or a Take algorithm (see section 4.4), and used as input for subsequent stages of the reduction, such as IBP reduction, and the decomposition in terms of known special functions, as described in section 5. In our experience, this strategy is very efficient, even on complex multi-loop integrands, especially if compared with the more time-consuming IBP reduction step.
It is also worth mentioning that integrand reduction is often used in combination with generalized unitarity [4,[9][10][11][12]. On multiple cuts the integrand factorizes as a product of treelevel amplitudes, which in turn may be evaluated efficiently, over a numerical field, using Berends-Giele recursion [43]. We refer to ref. [2] for a complete description of an implementation of generalized unitarity over finite fields. It should be noted that, while generalized unitarity is an extremely powerful method which can substantially reduce the complexity of the calculation, it also has some limitations. For instance, one needs to find rational finitedimensional parametrizations for the internal states of the loop on the cut solutions, which is not always easy. Moreover, in its current state, it cannot be easily applied to processes with massive internal propagators. These difficulties and limitations are not present when applying integrand reduction to a diagrammatic representation of the amplitude.
Choice of an integrand basis
It is worth making some observations on possible choices for an integrand basis. In the oneloop case, one can choose a basis which yields a linear combination of known integrals [3,4]. With this choice, IBP reduction is not needed. At higher loops, this is not the case, and one should therefore take into account that the elements of an integrand basis should be later reduced via IBP identities.
A particularly simple but effective choice, especially at the multi-loop level, consists of writing any on-shell integrand ∆ β in terms of the denominators and auxiliaries {D T,j } of its parent topology T such that β j = 0, i.e. excluding the ones that ∆ β is sitting on. In processes with fewer than five external legs, one must also include scalar products of the form k i · ω j where {ω j } are a complete set of four-dimensional vectors orthogonal to all the external momenta p 1 , . . . , p e . Hence, ∆ β can be parametrized as the most general polynomial in this set of variables, whose total degree is compatible with the theory. For instance, a renormalizable theory allows at most one power of loop momenta per vertex, in the sub-topology defined by the denominators of ∆ β . After integrand reduction, the scalar products of the form k i · ω j can be integrated out in terms of denominators and auxiliaries D T,j . As explained e.g. in [29], this can be easily done via a tensor decomposition in the (d − e + 1)-dimensional subspace orthogonal to the e external momenta. Notice that this is very simple even for complex processes, since it only involves the orthogonal projection of the metric tensor g µν [d−e+1] and no external momentum. Alternatively, one can achieve the same result via an angular loop integration over the orthogonal space, which can be made even simpler using Gegenbauer polynomials [29]. This choice of integrand basis directly yields, after orthogonal integration, a linear combination of integrals which are suitable for applying standard IBP identities. Given also its simplicity, it is a recommended choice in most cases.
Other choices can be made for the sake of having either a simpler integrand representation or a larger set of elements of the integrand basis which integrate to zero. One can, for instance, choose to replace monomials in an on-shell integrand with monomials involving also the extra-dimensional scalar products µ ij . Because monomials with µ ij can be rewritten as linear combinations of the other ones, one can easily obtain a system of equations relating these two types of monomials. By solving this system, assigning a lower weight to monomials involving µ ij , one can maximize the presence of integrands which vanish in the four-dimensional limit. Since we are only interested in a list of independent monomials, it is sufficient to solve the system numerically (possibly over finite fields). This is heuristically found to yield simpler integrand representations. However, it also makes IBP reduction harder to use, since integrands with µ ij then need to be converted to the ones in a standard form. If only the finite part of the amplitude is needed, one may however choose some integrands involving µ ij which are O( ) after integration and then drop them before the IBP reduction step. This may result in notable simplifications. As an example, at one loop, no on-shell integrand with more than four denominators contributes to the finite part of an amplitude, if µ 11 is chosen to be the numerator of the five-denominator integrands in the integrand basis.
Another popular choice is the usage of scalar products involving momenta which, for an on-shell integrand ∆ β , are orthogonal to the external momenta of the topology defined by its own denominators (as opposed to the ones of the parent topology). One can indeed build suitable combinations of these scalar products which vanish upon integration. Their coefficients can then be dropped after the integrand reduction.
Another very successful strategy is the usage, for each on-shell integrand ∆ β , of a complete set of surface terms, i.e. terms which vanish upon integration and are compatible with multiple cuts [44][45][46][47]. These are chosen to be an independent set of IBP equations without higher powers of denominators. These define suitable polynomial numerators for ∆ β which vanish upon integration. When this approach is used, IBP reduction is embedded in the integrand reduction and therefore it is not needed as a separate step. A possible disadvantage is that it makes the integrand reduction more complicated, since these surface terms are typically more complex than the elements of other integrand bases, and they introduce a dependence on the dimensional regulator which is otherwise not present in the integrand reduction stage. Another disadvantage is that, in the form it is usually formulated, this strategy can yield incomplete reductions for some processes. 4 We finally point out that, if there is no one-to-one correspondence between elements of the integrand basis and Feynman integrals to be reduced via IBPs, one needs to convert between the two. This step may also include the transverse integration, if needed. The conversion, as in many other cases, can be implemented via a matrix multiplication. For this purpose, we recommend using either the Take And Add algorithm or the Sparse Matrix Multiplication algorithm described in section 4.4.
Writing the integrand
When using integrand reduction together with Feynman diagrams, one would typically provide the integrands in Eq. (7.1) analytically. Even if several methods exist for generating integrands numerically at one loop, with the notable exception of generalized unitarity (which is however not based on Feynman diagrams and has the limitations mentioned above) they have not been generalized to higher loops. When integrands are provided in some analytic form, they will also depend on external polarization vectors, spinor chains, and possibly other objects describing the external states. On one hand, this means that we need to provide a rational parametrization for these objects. On the other, we may use the algorithms described above in order to keep these rational expressions as compact as possible. This is done by performing the substitutions which would yield complex expressions only numerically over finite fields.
A rational parametrization for four-dimensional spinors, polarization vectors, external momenta, as well as higher-spin polarization states, can be obtained, in terms of a minimal set of invariants, by means of the so-called momentum twistor parametrization [48][49][50]. The independent kinematic invariants are called in this case momentum twistor variables. A comprehensive description of the usage of this parametrization for describing external states in the context of numerical calculations over finite fields is given in ref. [2] and will not be repeated here.
In amplitudes with only scalars and spin-one external particles, the only additional loopdependent objects appearing in the integrand, besides the loop denominators and auxiliaries, are scalar products between loop momenta and polarization vectors. If external fermions are present, one also has spinor chains involving loop momenta. These can be dealt with by splitting the loop momenta in a four-dimensional and a (−2 )-dimensional part and performing the t'Hooft algebra on the extra-dimensional components in order to explicitly convert all the dependence on k into the extra-dimensional scalar products µ ij defined in Eq. (7.5).
The four dimensional part of the loop momenta is often decomposed into a four-dimensional basis. Given a generic loop momentum k and three massless momenta p 1 , p 2 , p 3 , we can use the decomposition, in spinor notation, of denominators, or seed integrals with more propagators, are needed to fully reduce a given sector, the method above will not yield a complete reduction to a minimal basis of master integrals. Examples where we explicitly checked that additional seed integrals are needed are several two-loop topologies involving massive internal propagators (e.g. topologies for amplitudes with two fermion pairs having different masses), and some massless four-loop topologies (including most of those reduced in ref. [17]).
The massless momenta can be chosen depending on the cut, but it is also possible, and often easier, to define a global basis of momenta and therefore use the same set of loop variables y j and µ ij everywhere. If there aren't enough massless external legs, one may use massless projections of massive ones, or arbitrary massless reference vectors. In some cases it is convenient to make the substitution in Eq. (7.7) directly in the analytic integrand, since it provides simplifications for explicit choices of external helicity states. In other cases, one may instead make a list of all the loop-dependent objects (scalar products, spinor chains, etc. . . ) appearing in the integrand and express them individually as functions of the variables y j and µ ij . This defines a list of substitutions which can instead be done numerically inside the linear fit procedure, through the definition of the auxiliary functions a appearing in Eq. (4.10), while keeping the integrand written as a rational function of objects which yield a more compact expression for it. As we explained, on a multiple cut, the variables y j and µ ij are no longer all independent, but a subset of them can be written as rational functions of the others. Once again, we note that one does not need to perform these substitutions explicitly in the integrand, but only numerically using the auxiliary functions a as before.
We finally remark that it is often a good idea to group together diagrams which share the same denominator structure, or can be put under the same set of denominators as one of the parent topologies of the process. Thanks to the fact that, in the linear fit algorithm we defined in Eq. (4.10), we allow an arbitrary sum of contributions on the r.h.s., this grouping can be easily performed by including each diagram in this list of contributions (which, we recall, here includes the integrand and the subtraction terms), without having to explicitly sum them up analytically.
Decomposition of amplitudes into form factors
In this section, we briefly discuss the possibility of using the FiniteFlow framework for an alternative and widely used method for expressing amplitudes as linear combinations of standard Feynman integrals.
The method consists of considering an amplitude stripped of all the external polarization states. This amplitude will have a set of free indexes λ 1 . . . , λ e , which may be Lorentz indexes, spinor indexes, or other indexes representing higher-spin states. One can thus write down the most general linear combination of tensors T λ 1 ···λe j having these indexes, compatible with the known properties of the amplitude, such as gauge invariance and other constraints. More explicitly The form factors F j are rational functions of the kinematic invariants, which can be computed by contracting the amplitude on the l.h.s. with suitable projectors P λ 1 ···λe where In the previous equations, a dot product between two tensors is a short-hand for a full contraction between their indexes. There are at least two bottlenecks in this approach for which the FiniteFlow framework can be highly beneficial. The first is the inversion of the matrix defined in Eq. (8.4). This inversion can be obviously computed using one of the linear solvers described in section 4.2 -typically the dense solver if the tensors T j do not have special properties of orthogonality. The inversion can also be performed numerically, since it is only required in an intermediate stage of the calculation, and can be represented by a node in the dataflow graph. We find that, even in cases where the inverse matrix is very complicated, its numerical inversion takes a negligible amount of time compared with other parts of the calculation (e.g. IBP reduction). The other bottleneck which can be significantly mitigated by our framework is the difficulty of computing the contraction on the r.h.s. of Eq. (8.2), in cases where the projectors are particularly complicated. Indeed, by substituting Eq. (8.3) into Eq. (8.2) we get This means that we can compute the contractions T k ·A instead, which are usually significantly simpler, and multiply them (numerically) by the matrix T −1 jk at a later stage. This allows to reconstruct the form factors directly without ever needing explicit analytic expressions for the projectors. One can further elaborate the algorithm by contracting the free indexes of Eq. (8.1) with explicit polarization states, for the direct reconstruction of helicity amplitudes rather than the form factors themselves.
Finding integrable symbols from a known alphabet
As we already stated, many Feynman integrals can be cast as iterated integrals in the form of Eq. (6.5). It is customary to associate to these integrals an object called symbol [37,51]. For the purposes of this paper, we define the symbol as where, as already mentioned in section 6.2, w k are called letters, and a complete set of letters W = {w k } is called alphabet. Because the symbol does not depend on the integration path and the boundary terms, it contains less information than the full iterated integral, but it is still a very interesting object to study for determining the analytic structure of an amplitude. More information on symbols, their properties, and their relations to multiple polylogarithms can be found in [51]. Given a known alphabet W , one can build symbols of weight n as linear combinations of those defined in Eq. (9.1), namely S = j 1 ,...,jn c j 1 ···jn w j 1 ⊗ · · · ⊗ w jn .
(9.2) However, in general, such a linear combination is not integrable, i.e. it does not integrate to a function which is independent of the integration path. As pointed out in [52], a necessary and sufficient condition for the symbol in Eq. (9.2) to be integrable is for all k = 1, . . . , n − 1 and all pairs (z l , z m ), where z = {z j } are the kinematic variables the letters depend on. In the previous equation,ŵ k indicates the omission of the letter w k . By solving these integrability conditions, which amounts to solve a linear system for the coefficients c j 1 ···jn , one can build a complete list of integrable symbols of weight n. It is worth mentioning that there are additional conditions one can impose to restrict the number of terms in the ansatz of Eq. (9.2), namely additional conditions on the allowed entries of a symbol. For instance, the first entry, which is related to the discontinuity of the function, may be restricted to contain only letters associated to physical branch points of an amplitude.
Here we discuss a simple method 5 for finding all integrable symbols from a known alphabet, up to a specified weight n, exploiting the algorithms of the framework we presented in this paper.
We first observe that the only dependence of Eq. (9.3) on the explicit analytic expressions of the letters is via the crossed derivatives In order to simplify the notation, let us define a multi-index J = (i, j, l, m) such that The only relevant information about these derivatives which is needed for the purpose of solving Eq. (9.3) are possible linear relations which may exist between different elements d J . These relations only depend on the alphabet, and not on the weight of the symbols which need to be considered. Once all these linear relations have been found for a given alphabet, the integrability conditions can be solved at any weight using a numeric linear system over Q, and without using the analytic expressions of the letters again. In order to find these linear relations, we first compute analytic expressions for all the functions d J , which can usually be done in seconds even for complex alphabets. If the functions d J have no square root in them, we simply solve the linear-fit problem J x J d J = 0, (9.6) where the unknowns x J are Q-numbers, while the functions d J depend on the variables z. This equation is solved with respect to the unknowns x J using the (numerical version of the) linear fit algorithm already described in this paper. Linear relations between the unknowns x J are thus easily translated into relations between the functions d J (notice that independent unknowns multiply dependent functions, and the other way around). In order to simplify the linear fit, it is convenient to extract a priori some obvious relations, such as relations of the form d J = 0 or d J 1 = ±d J 2 , which are more easily identifiable from the analytic expressions. If the functions d J depend on a set of (independent) square roots, we first rewrite each of them in a canonical form, such that each function is multi-linear in the square roots. This can be easily done, one square root at the time, by replacing a given square root √ f with an auxiliary variable, say r, and computing the remainder of d J = d J (r) with respect to r 2 − f , via a univariate polynomial division with respect to r (note that univariate polynomial remainders are easily generalized to apply to rational functions 6 ). The result will be linear in r. If all the square roots are chosen to be independent, after putting the functions d J in this canonical form, one can simply solve the linear fit in Eq. (9.6) by replacing each square root with a new independent variable. This works because, if Eq. (9.6) holds and the d J are put in this canonical form, then the terms multiplying independent monomials in the square roots must vanish separately. This is effectively equivalent to performing a linear fit where each square root is treated as an independent variable. We find that, even in cases where square roots are rationalizable, this approach is often more efficient than using a change of variables which rationalizes the square roots.
Once a complete set of linear relations between the crossed derivatives d J has been found, one can use this information alone to solve the integrability conditions. This is best done recursively from lower to higher weights. As already stated, we understand that other conditions may still restrict the ansatz at any weight and therefore the list of integrable symbols.
At weight n = 1, every letter trivially defines an integrable symbol. At higher weights, it is customary to exploit the lower weight information in order to build a smaller ansatz than the one in Eq. (9.2). If {S (n−1) j } j is a complete set of integrable symbols at weight n − 1, we find the integrable symbols S (n) j at weight n as follows. We write our ansatz as Because the symbols S (n−1) j are already integrable, we only need to impose the integrability condition on the last two entries. Hence, for all possible pairs of variables (z l , z m ) we make the substitution w j 1 ⊗ · · · ⊗ w jn → (w j 1 ⊗ · · · ⊗ w j n−2 ) d j n−1 jn such that our ansatz is written in terms of linearly independent functions (still represented by independent variables in the formulas), and we impose that the coefficient of each independent structure with the form of the r.h.s. of (9.8) vanishes. This strategy builds a numeric sparse linear system of equations for the coefficients c jk in Eq. (9.7), which can be solved with the algorithm already discussed in this paper. Linear relations between the coefficients c jk are then easily translated into a set of linearly independent symbols at weight n satisfying the integrability conditions.
Proof-of-concept implementation
With this paper, we also publicly release a proof-of-concept implementation of the Finite-Flow framework. The code is available here https://github.com/peraro/finiteflow and can be installed and used following the instructions given at that URL. This code is the result of experimentation and trial and error, and should not be regarded as an example of high coding standards or as a final implementation of this framework. Despite this, it has already been used for obtaining several cutting-edge research results in high energy physics, and we believe its public release can be highly beneficial to the community. It also includes the FiniteFlow package for Mathematica, which provides a high-level interface to the routines of the library.
We also release a collection of packages and examples using the Mathematica interface to this code, at the URL https://github.com/peraro/finiteflow-mathtools which includes several applications described in this paper. In particular, it contains the following packages: FFUtils Utilities implementing simple general purpose algorithms, such as algorithms for finding linear relations between functions.
LiteMomentum Utilities for momenta in Quantum Field Theory. It does not use Finite-Flow, but it is used by other packages and examples in the same repository.
LiteIBP Utilities and tools for generating IBP systems of equations and differential equations for Feynman integrals, to be used together with the LiteRed [31] package.
Symbols Scripts for building integrable symbols from known alphabets.
We note that these packages should be regarded as a set of utilities rather the implementation of fully automated solutions for specific tasks. They are also meant as examples of how to build packages on top of the Mathematica interface to the code. The same repository also contains several examples of usage of the FiniteFlow package. While these examples have been chosen to be simple enough to run in a few minutes on a modern laptop, they can be used as templates to be adapted to significantly more complex problems. We therefore recommend reading the documentation which comes with them and the comments inside their source as an introduction to the usage of this code for the applications described in this paper.
In this section, we give some details on some aspects and features of our implementation of the FiniteFlow framework and provide some observations about possible improvements for the future.
The code is implemented in C++ and we provide a high-level Mathematica interface. At the time of writing, the Mathematica interface is the easiest and more flexible way of using FiniteFlow, since it allows to combine the features of our framework with the ones of a full computer algebra system. Interfaces to other high-level languages, such as Python, and computer algebra systems are likely to be added in the future.
This implementation uses finite fields Z p where p are 63-bit integers. We have explicitly hard-coded a list of primes satisfying 2 63 > p > 2 62 -namely the 201 largest primes with this property -which define all the finite fields we use. In particular, by making the assumption that all the primes we use belong to that range, we are able to perform a few optimizations in basic arithmetic operations. We use a few routines and macros of the Flint library for basic operations of modular arithmetic (we also optionally provide a heavily stripped down version of Flint with only the parts which are needed for FiniteFlow, with fewer dependencies, as well as an easier and faster installation), such as the calculation of multiplicative inverses and modular multiplication using extended precision and precomputed reciprocals [55].
We use several representations of polynomials and rational functions, depending on the task. As already explained in section 4.1, if we need to repeatedly evaluate polynomials and rational functions, we store the data representing them as a contiguous array of integers and evaluate them by means of the Horner scheme. For polynomials in Newton's representation, we store an array with the sequence {y j } and another one with the coefficients a j . The latter is an array of integers in the univariate case (see Eq. (2.5)) and an array of Newton polynomials in fewer variables in the multivariate case (see Eq. (2.9)). Univariate rational functions in Thiele's representation (given in Eq. (2.8)) are stored similarly to univariate Newton polynomials. For every other task, we use a sparse polynomial representation which consists of a list of non-vanishing monomials. Each monomial is, in turn, a numerical coefficient (in Z p or Q) and an associated list of exponents for the variables. This representation is used for most algebraic operations on polynomials, e.g. when converting Newton's polynomials in a canonical form, or when shifting variables (we recall that a shift of variables is typically required by the functional reconstruction algorithm we use). It is also the most convenient representation for communicating polynomial expressions between FiniteFlow and other programs such as computer algebra systems.
Our system for dataflow graphs distinguishes several types of objects, namely sessions, graphs, nodes and algorithms.
Sessions are objects which contain a list of graphs and are responsible for doing most operations using them, such as evaluating them while handling parallelization, and running functional reconstruction algorithms. Since a session can contain any number of dataflow graphs, for most applications there is no reason for using more than one session in the same program, although it is obviously possible. The concept of a session is not (explicitly) present in the Mathematica interface since the latter only uses one global session. Graphs in the same session, as well as nodes in a graph, are associated with a non-negative integer ID. In the Mathematica interface, these IDs can instead be any expression, which is seamlessly mapped to the correct integer ID when communicating with the C++ code. Graphs, as already explained, are collections of nodes. Nodes are implemented as wrappers around algorithms and contain a list of IDs corresponding to their inputs. When building a new node for a graph, the program checks that the expected lengths of its input lists are consistent with the ones of the output lists of its input nodes. Algorithms are the lowest-level objects responsible for the numerical evaluations, and they have associated procedures for it. Algorithms might also have a procedure for their learning phase and, in that case, they also specify how many times this should be called (with different inputs).
Because an algorithm might have to run in parallel for different input values, it is made of two types of data. The first type is read-only data, i.e. data which is not specific to an evaluation point and can be shared across several threads during parallelization. This might also include data which is mutable only during the learning phase. The second type of data can instead be modified during any numerical evaluation. In multi-threaded applications, mutable data needs to be cloned across all the threads in order to avoid data races. Algorithm objects thus have associated routines for cloning mutable data.
In the future, we might further split mutable data into two types. The first is mutable data which only depends on the finite field Z p . This data only needs to be copied a number of times equal to the maximum number of fields used at the same time in a parallel evaluation, which is typically no larger than two. The second is data which can depend on both the prime p and the variables z which are the input of a given graph. Only for the latter one needs to make a copy for each thread. Therefore, even though it is not currently implemented, this further split can improve memory usage by significantly reducing the amount of cloned data. As an example, consider a linear system with parametric entries depending on variables z. The rational functions defining the entries of the system as rational functions over Q, as well as the list of independent unknowns and equations, are immutable data. The same functions mapped with over Z p depend on the prime p but not on the points z. Finally, the numerical system, obtained by evaluating such functions numerically for specific inputs z, depends on both the prime field and the evaluation point.
We point out that the usage of dataflow graphs also greatly simplifies multi-threading. It is indeed sufficient that each type of basic algorithm has an associated procedure for cloning its non-mutable data. From these, the framework is able to automatically clone the mutable data of any complex graph, and correctly use it for the purpose of performing multi-threaded evaluations. A similar potential advantage regards serialization of algorithms, although this feature is not implemented at the time of writing. In principle, each basic algorithm may have an associated procedure for serializing and deserializing its data. From these, one would be able to serialize complete graphs representing arbitrarily complex calculations. This could be useful for both sharing graphs and loading them up more quickly, together with the information about the learning phases which have already been completed.
We now turn to the caching system used to store the evaluations of a graph. We recall that, in the multivariate case, we start by performing some preliminary univariate reconstructions, which determine (among other things) a list of evaluation points needed to reconstruct the output of a graph. In principle, for each evaluation point, we may need to store the input variables, the whole output list of the graph, and the prime p which defines the finite field. Unfortunately, when the output of a graph is a long list and a large number of evaluation points is needed, this straightforward strategy can yield issues related to memory usage. This can be true even when a Non-Zeros node is appended to a graph (see section 4.4), as we have already recommended. Hence, we adopt a slightly more refined strategy which works well in realistic scenarios. Heuristically, we observe that, when the output of a graph is a long list, the complexity of the elements of the list can vary significantly. In particular, many elements correspond to relatively simple rational functions while, usually, only a few of them have high complexity. Simpler rational functions obviously need fewer evaluation points in order to be reconstructed. Hence, one could improve this strategy by storing a shorter output list containing, for each evaluation point, only the elements of the output which need that point for their reconstruction. In practice, we proceed as follows. Once a complete list of evaluation points has been determined, for each element of the output we tag all the points needed for its reconstruction. In our implementation, this tagging requires one bit of memory for each output element. If an evaluation point is never tagged, it is removed from the list. Then, after a graph is evaluated on a given point, we only store the entries of the output for which that point is needed. This typically allows to store a much shorter output list on most evaluation points, therefore yielding a major improvement in memory usage. When combined with the usage of Non-Zeros nodes, we find that when using this strategy the caching of the evaluations is hardly ever a bottleneck in terms of memory usage, especially when the code is run on high-memory machines available in clusters and other computing facilities often used for intensive scientific computations.
We also point out that, as explained more in detail in section 10.1, one can generate lists of needed evaluation points and separately evaluate subsets of them, either sequentially or in parallel. On top of being a powerful option for parallelization, this feature also allows to split long calculations into smaller batches and save intermediate results to disk, such that they are not lost in case of system crashes or other errors which may prevent the evaluations to successfully complete.
The FiniteFlow library implements the basic numerical algorithms described in this paper, the functional reconstruction methods we discussed, as well as the framework based on dataflow graphs. When the latter is used, one can easily define complex numerical algorithms without any low-level coding. This can be done even more easily from the Mathematica interface. The latter also offers some convenient wrappers for common tasks, such as solving analytic or numeric linear systems or linear fits. These wrappers hide the dataflow-based implementation. However, as discussed in this paper, the approach based on dataflow graphs offers a flexibility which greatly enhances the scope of possible applications of this framework.
The approach based on dataflow graphs is the preferred way of defining algorithms with the library, especially when using the Mathematica interface. However, the library can also be enhanced by custom numerical algorithms written in C++. For instance, the results presented in [14,42] used a custom C++ extension of the linear fit algorithm which computes generalized unitarity cuts via Berends-Giele currents, as explained in ref. [2] (this extension is not included in the public code).
It should also be clear that the FiniteFlow framework is not designed to solve one specific problem, but as a method to implement solutions for a large variety of algebraic problems. By building on top of this public code, one can, of course, implement higher-level and easier-to-use solutions for more specific tasks.
Parallel execution
As discussed in section 2.3, one of the main advantages of functional reconstruction algorithms is that they can be massively parallelized. In our current implementation, we offer two strategies for parallelization, which can also be used together.
The first and easier-to-use strategy is multi-threading. This is handled completely automatically by the code when the dataflow-based approach is used. Data which cannot be shared among threads is cloned as needed and parallelization is achieved by splitting the calculation over an appropriate number of threads, as explained in section 2.3. The number of threads which is used can either be specified manually or chosen automatically based on the hardware configuration. We recommend specifying it manually when using the code on clusters or machines shared among several users, since the automatic choice might not be the most appropriate one in such cases.
The second method allows to further enhance parallelization possibilities by using several nodes of a cluster, or even several (possibly unrelated) machines, for the evaluations of the function to be reconstructed. In order to use this method, after defining a numerical algorithm, we compute and store the total and partial degrees of its output. As explained, this is done via univariate reconstructions which are much quicker than a full multivariate one. From this information, we also build and store a list of inputs for the evaluations. For this, we need to make a guess of how many prime fields will be needed. One can, however, start by assuming only one prime field is needed, and add more points at a later time if this is not the case. The stored list of needed evaluation points can be shared across several nodes or several machines, where any subset of them can be computed and saved independently. Of course, these evaluations can (and will, by default) be further parallelized using multithreading, as discussed above. Finally, the evaluations are collected on one machine where the reconstruction is performed. Should the reconstruction fail due to the need of more prime fields, we increase our guess on the number of primes needed and create a complementary list of evaluation points. We then proceed with the evaluation of these additional points, across several nodes or machines as for the previous one, and collect them for the reconstruction. We proceed this way until the reconstruction is successful. This method greatly increases the potential parallelization options, at the price of being less automated, since the lists of evaluations need to be generated and copied around by hand. 7 This option can be very beneficial for reconstructing particularly complex functions, or functions whose numerical evaluation is very time-consuming. As already mentioned, it also provides a method splitting up long calculations in smaller batches and saving intermediate results on disk.
Conclusions
We presented the FiniteFlow framework, which establishes a novel and effective way of defining and implementing complex algebraic calculations. The framework comprises an efficient low-level implementation of basic numerical algorithms over finite fields, a system for easily combining these basic algorithms into computational graphs -known as dataflow graphs -representing arbitrarily complex algorithms, and multivariate functional reconstruction techniques for obtaining analytic results of out these numerical evaluations.
Within this framework, complex calculations can be easily implemented using high-level languages and computer algebra systems, without being concerned with the low-level details of the implementation. It also offers a highly automated way of parallelizing the calculation, thus fully exploiting available computing resources.
The framework is easy to use, efficient, and extremely flexible. It can be employed for the solution of a huge variety of algebraic problems, in several fields. It allows to directly reconstruct analytic expressions for the final results of algebraic calculations, thus sidestepping the appearance of large intermediate expressions, which are typically a major bottleneck.
In this paper, we have shown several applications of this framework to highly relevant problems in high-energy physics, in particular concerning the calculation of multi-loop scattering amplitude.
We also release a proof-of-concept implementation of this framework. This implementation has already been successfully applied to several state-of-the-art problems, some of which proved to be beyond the reach of traditional computer algebra, using reasonable computing resources. Notable examples are recent results for two-loop five-gluon helicity amplitudes in Yang-Mills theory [14,21] and the reduction of for four-loop form factors to master integrals [17]. We point out that these two types of examples are complex for very different reasons. In the former, a large part of the complexity is due to the high number of scales, while in the latter, which only has one scale, it is due to the huge size of the IBP systems one needs to solve. Quite remarkably, the techniques described in this paper have been able to tackle both these cases, showing that they are capable of dealing with a wide spectrum of complex problems.
We believe the algorithms presented in this paper, and their publicly released proof-ofconcept implementation, will contribute to pushing the limits of what is possible in terms of algebraic calculations. Due to their efficiency and flexibility, they will be useful in the future for obtaining more scientific results concerning a wide range of problems.
of lower weight unknowns. The complexity of the Gauss elimination algorithm for a sparse system can strongly depend on this choice of weight. Therefore, even if any choice of weight can be specified when defining a system, it is worth giving an example which we found works well for IBP systems.
In the case of an IBP system, the unknowns are Feynman integrals. For the purpose of assigning a weight to them, it is customary to associate to each integral in Eq. (5.1) the following numbers: • t is the number of exponents α j such that α j > 0 It is generally understood that the higher these numbers are, the more complex an integral should be considered [30]. It is also customary to use the notion of sector of an integral, which is identified by the list of indexes j such that the exponents α j are positive, i.e. {j|α j > 0}. In other words, two integrals belonging to the same sector depend on the same list of denominators, possibly raised to different powers, and possibly with a different numerator. As an example, a definition of weight for Feynman integral can be determined, by the following criteria, in order of importance: • the positive integer r − t, where a higher number means higher weight • the positive integer t, where a higher number means higher weight • the positive integer r, where a higher number means higher weight • the positive integer s, where a higher number means higher weight • integrals in a topology T 1 are considered to be of higher weight if they belong to a sector mapped to a different topology T 2 • integrals in a sector of a topology T are considered to be of higher weight if they belong to a sector mapped to another sector of the same topology • the positive integer max({−α j } j|α j <0 ), where a higher integer means higher weight.
If the criteria above are not sufficient to uniquely sort two different integrals, we fall back to any other criterion which defines a total ordering, such as the intrinsic ordering built in a computer algebra system to sort expressions. The choice above prefers integrals with powers of denominators no higher than one -indeed, this is used as the very first criterion for determining the weight of a Feynman integrals. We found that this choice is particularly effective when combined with the mark-and-sweep algorithm for filtering out unneeded equations, since it often yields a smaller set of needed equations than other choices. We however stress again that, of course, many other definitions of weight are possible and can be specified instead of the one suggested here.
We make a few more observations about the generation of IBP systems. These equations -which include IBPs, Lorentz invariance identities, symmetries among integrals of the same sectors, and mappings between integrals of different sectors -are typically first generated for generic Feynman integrals of the form of Eq. (5.1) with arbitrary symbolic exponents. These are sometimes called template equations. The IBP system is thus generated by writing down these template equations for specific Feynman integrals (i.e. for specific values of the exponents), which in this context are known as seed integrals. It is interesting to understand how many and which seed integrals must be chosen in order to successfully reduce a given set of needed integrals to master integrals. To the best of our knowledge, there is no way of determining a priori a minimal choice which works, but common choices which are expected to work in most cases (despite not being minimal) exist. A popular choice which usually works is selecting a range for the integers s and r of the seed integrals, based on the choice one must make for the top-level sectors. However, we find that it is often more convenient to specify a range in s and r − t instead. In particular, for most topologies one only needs to select seed integrals for which the value of r − t is either the same or one unity higher that the maximal one between the integrals which need to be reduced. We however also point out that, while an over-conservative choice of seed integrals will result in a slowdown of the learning phase, the equations generated from unneeded seed integrals may be all successfully filtered out by the mark-and-sweep algorithm, hence reducing the system to the same one would have obtained with a more optimal choice. However, we also point out that this may or may not happen depending on the chosen ordering for the Feynman integrals. We have empirically observed that it does happen for the choice of ordering based on the definition of weight we suggested above.
We conclude this appendix with an observation about sector mappings which we haven't found elsewhere in the literature. This concerns kinematic configurations which have symmetries with respect to permutations of external legs, i.e. permutations of external momenta which preserve all the kinematic invariants. Notable examples are three-point kinematics with two massless legs, and four-point fully massless kinematics. For these kinematic configurations we can distinguish two types of sector mappings. The first one, which we call here normal mappings, simply consists of shifts of the loop momenta which map a sector into a different one. The second one, which we call generalized mappings, consists of a permutation of external legs which preserves the kinematic invariants, optionally followed by a shift of the loop momenta. The most typical approach to deal with these mappings does not distinguish between the two types. In particular, for all mapped sectors, only sector mappings are generated in the system of equations, and no IBP identity, Lorentz identity or sector symmetry. The rationale is that one would expect the other identities to be automatically covered by combining sector mappings with identities generated for the unique (unmapped) sectors. However, we explicitly verified that this is not always the case for generalized mappings. In other words, given a set of seed integrals for a generalized mapped sector, there are some identities which are independent of the ones generated by combining sector mappings for the same set of seed integrals, and identities for the unique sectors. The missing identities can be recovered by adding more seed integrals to the mapped sectors and to the unique sectors, at the price of obtaining a more complex system of equations. Notice that this is similar to what happens for Lorentz invariance identities, which in principle can be replaced by IBP identities only, at the price of using more seed integrals and making the system more complex. A simple example of this is the two-loop massless double box. We indeed found that this topology can be reduced to master integrals, for any range in s, by considering only seed integrals with r − t = 0, as long as IBPs and Lorentz invariance identities are generated also for sectors satisfying generalized mappings. When these additional identities are not included, we need to add seed integrals with r − t = 1 in order to successfully perform the reduction. We therefore recommend to generate, alongside generalized mappings, also IBPs, Lorentz identities and symmetries for sectors which satisfy them. This is even more convenient when using the mark-and-sweep algorithm for simplifying the system, since the simpler equations with lower r − t are automatically selected if available. This can eventually yield a smaller system with easier equations to solve. For similar reasons, we recommend to always add Lorentz invariance identities, regardless of the topology. | 35,352.8 | 2019-05-20T00:00:00.000 | [
"Computer Science",
"Mathematics"
] |
A method of tracking and positioning of industrial robots
: In this paper, an improved tracking and localization algorithm of an omni-directional mobile industrial robot is proposed to meet the high positional accuracy requirement, improve the robot’s repeatability positioning precision in the traditional trilateral algorithm, and solve the problem of pose lost in the moving process. Laser sensors are used to identify the reflectors, and by associating the reflectors identified at a particular time with the reflectors at a previous time, an optimal triangular positioning method is applied to realize the positioning and tracking of the robot. The experimental results show that positioning accuracy can be satisfied, and the repeatability and anti-jamming ability of the omni-directional mobile industrial robot will be greatly improved via this algorithm.
Introduction
Across many domains, there is an increasing demand for robots capable of performing complex and dexterous manipulation tasks. A typical example is the need for factory assembly lines. With the combination of a dexter-ous manipulator and mobile platforms, mobile robots are well suited to these complex tasks. Given any complex scenario, the mobile robot must be able to achieve localization (Suwoyo, Abdurohman, et al., 2022;Yu et al., 2023). Localization is a key functionality of any navigation system as it tracks and determines the position of a mobile robot in the environment. This is a challenging topic in the area of autonomous mobile robot research (Akbar Qureshi et al., 2022;J. Zhang et al., 2023).
Many methods are proposed to address the problem of mobile robot localization; these methods can be divided as two categories: relative positioning and absolute positioning. In relative positioning, dead reckoning and inertial navigation are commonly used to calculate mobile robots' positions. This method does not need to perceive the external environment, but its drift error accumulates over time. Absolute positioning relies on detecting and recognizing different features in the environment in order to realize the position of the robot. These environment features are normally divided into two types: artificial landmarks and natural landmarks (Suwoyo & Harris Kristanto, 2022). Compared with the natural landmarks, artificial landmarks have advantages of high recognition and high accuracy. There is no cumulative error when a localization method based on artificial landmarks is utilized; however, the key challenge is accurately identifying and extracting the needed information from the artificial landmarks. Researchers have proposed several solutions, such as fuzzy reflector-based localization and color reflector-based self-localization. The main drawbacks of these methods are that the amount of computation power they need is too large and the antiinterference ability is poor (Suwoyo, Hidayat, et al., 2022;Tai & Yeung, 2022).
To tackle these problems pertaining to the localization method based on artificial landmarks, Madsen and Andersen proposed a three reflectors positioning method using the constraints of three reflectors and the triangulation principle to realize positioning. Betke and Gurvits proposed a multi-channel positioning method with the three-dimensional positioning principle and the least squares method to accomplish localization. Because of the unavoidable errors in position and angle measurement of reflectors, use of only a trilateral or triangular method will not achieve the required positioning accuracy (Fan et al., 2023). In the navigation of a mobile robot, some reflectors may be obscured or confused with other obstacles, leading to pose information loss of the positioning system. To solve these problems, this paper presents an improved tracking and locating algorithm of an omni-directional mobile industrial robot. The robot is used in drilling and riveting processing of sheet metal parts, and laser sensors are used to identify the reflectors. By associating the reflectors identified at a particular moment with the reflectors at a previous time, an optimal triangular positioning method is applied to realize the positioning and tracking of the robot in a global environment (Suryaprakash et al., 2021;Tan et al., 2023).
Research Method
Optimal triangulation positioning algorithm based on angle measurement In order to achieve accurate positioning, a mobile robot must have a basic ability to perceive external information through extracting the reflector features (Zhang et al., 2022). In this research, there are n reflectors, and the feature of each reflector Bi (i = 1,2,•••, n) is extracted from the measurement data. The feature extraction algorithm consists of three steps: (i) filtering and clustering, (ii) identification, and (iii) feature extraction.
Coordinate system description
The measurement data obtained by each scan cycle of the laser sensor are a set of discrete data sequences {{( , ), }i | = 1,2, ⋯ , }, where ( , )i is the polar coordinate, the distance from the target point to the sensor, the polar angle, and λi the intensity value of the ith data point.
In the process of data analysis, the outlier points that are contaminated by noise need to be filtered out (D. Zhang et al., 2023).
Considering that the density of the collected data points is proportional to the distance from the target point to the laser sensor, and to improve the efficiency of the feature extraction process, an adaptive clustering method is adopted. Unless the distance between two data points is less than the threshold d, these data points are clustered for one reflector (Li et al., 2023;Ostanin et al., 2022). (1) where i-1 is the distance value of the − 1th data point, ∆ the angle resolution of the laser sensor, β an auxiliary constant parameter, and y the measurement error. The values of parameters y and β are given as 0.01 m and 10°, respectively.
Result and Discussion
The experimental data are obtained by the LMS 100 laser sensor with a scanning range of 270° and an angular resolution of 0.25°. The outside of the reflector is wrapped by reflective tape.
Repeatability positioning results
The optimal triangulation method based on the angle measurement is used for validation by the repeatability of the omni-directional mobile industrial robot. In the stationary state of the omni-directional mobile industrial robot, the environment is scanned by laser sensors to realize the positioning of the robot. Each positioning result is indicated by a red dot in Fig. 3.
The repeatability obtained by the trilateral method is nearly 18 mm, while the repeatability of the optimal method is only 9 mm. It can be shown that the optimal method is better than the traditional method (Sun et al., 2023).
Positioning accuracy results
The omni-directional mobile industrial robot moves in the direction of the arrow in Fig. 3, and each time the robot moves a certain distance, the navigation and positioning system will perform a positioning experiment, i.e., it will use the left rear laser sensor to calculate the current position. An average of 30 samples is taken for each experiment. Figure 4 shows the results of static positioning accuracy. The maximum distance error is 18 mm and the maximum angle error is 2°, which satisfies the positioning requirement of the omnidirectional mobile industrial robot.
Tracking location results
The omni-directional mobile industrial robot moves in the designated route, and it needs to constantly calculate and record its own positioning in the moving process. As shown in Fig. 5, the trajectory of the moving robot based on the tracking and positioning method is smoother.
Conclusions
This paper demonstrates the feasibility of a tracking and locating algorithm for omnidirectional mobile industrial robot. The following conclusions can be drawn from this study: (i) In the detection of a reflector in the sensor coordinate system, the angle repeatability of the reflector is better than that of the distance repeatability based on the feature extraction algorithm; (ii) The repeatability positioning accuracy using the optimal triangulation method based on the angle measurement is nearly 9 mm, which is better than that of the trilateral method; (iii) The positioning error of the robot is 18 mm, which satisfies the positioning requirement of omni-directional mobile industrial robot. Improvements in the location method based on reflectors, such as optimizing the layout of reflectors and the map of reflectors selection strategy for positioning, are still needed. In the future, we intend to extend our work to research a positioning method based on reflectors and the environmental profile, and achieve better accuracy on the localization of the mobile robot. | 1,888.6 | 2023-07-23T00:00:00.000 | [
"Computer Science"
] |
Discovered Solar Positronium
I describe a method for the observation of Positronium (Ps) involvement in the solar radiation spectrum. In this method, Rydberg-Ritz’s principle and Planck’s radiation formula are used to acquire information of the atomic transitions of Ps alike Hydrogen and Helium. In order to perform this experiment, an advanced solar spectrum monitor is constructed by utilizing light emitting diodes (LED) of various colors. A detailed study on this method provides qualitative agreement with experimental data, giving insight to the physical process involved in the solar radiation spectrum and confirming the existence of solar Ps.
Introduction
Living creatures can sustain only in the third planet of the solar system and maximum energies consumed by it are being received from the Sun.Positronium (Ps) is a purely leptonic H-like atom formed from an electron ( ) e − and its anti particle the positron ( ) e + .Particle antiparticle interaction is an electromagnetic process and a good test of QED leading to discover the similar phenomena in the environment of Sun by studying the chromaticity of the solar spectrum [1].It is thought that solar energy produced by the radiation consisting of atomic transitions of H and He, and what maximum of us did not aware of Ps which is one of the important ingredient of solar radiation.From the best of my knowledge on the experimental and the theoretical literature survey, solar Ps spectrum is being demonstrated here for the first time.Radioactive isotope, pair production and fusion reac-tion are the major sources of e + .The huge applications for e + include atomic, nuclear, astrophysics experi- ments [1]- [5], positron emission tomography (PET) [6] [7], studies of defects, surfaces, electron momentum of materials [8] [9] and material with medicinal values [10].With the advent of sophisticated technology scientists are capable to produce high intense pulsed positron beam, accumulation of e + and Ps, production of Ps 2 mole- cule, intensive studies of Ps laser cooling for the achievement of Ps Bose-Einstein Condensation (PsBEC) [11]- [15].Short pulses of Ps atoms is suitable for laser spectroscopy of the Lyman-α -like transition in the dipo- sitronium (Ps 2 ) molecule at a UV wavelength of 251 nm [16], as well as Ps formation and dynamics in various target materials and efficient production of Rydberg Ps (binding energy −6.8 eV which is just half of the H due to the reduced mass of Ps) atoms.A. P. Mills and co-workers succeeded to make the e + beam intensity ~10 10 to ~10 11 per cm 2 by adding Ps-forming target and a pulsed magnet and Ps-Lyman-α spectroscopy can be found elsewhere [17].Dipositronium and Ps can be produced simultaneously on the metal surface by the highest intense slow e + beam bombardment which is significant for studying the Rydberg Ps atoms and observing the PsBEC state.Hence laboratory based e + and Ps studies enrich our knowledge that helps me to unearth the so- lar Ps and find out its characteristics in the enormous temperature of the Sun.The detection of Ps outside the laboratory experiments were first observed by Chuup et al. (1973) [18] and first identified by Levethal et al. (1978) from the Galactic centre [19] via detection of the Ps annihilation, but innovation of the solar light color spectroscopy did not exist.Recent novelty of light emitting diodes (LED) for which "Nobel Prize-2014" is conferred to Isamu Akasaki, Hiroshi Amano and Shuji Nakamura for their greatest contribution in social, industrial and entertainment applications.How LEDs can play a crucial role in science & technology development and pave the way of discovering the solar Ps will be illustrated in the following sections.
Formation of Solar Positronium
The fusion reaction [1] is a source of solar Ps production that persistent in thermal plasma at 5
K T ≤
[20] for a while that is good enough to measure the transition energies between the quantum states.Since the reduced mass of Ps is 2 e m just half of that of H atom and hence the binding energy of Rydberg Ps is −6.8 eV.Correspondingly Ps-Lyman-α has a wavelength of 243 nm and was observed by Canter et al. [21].The total wave function of the Ps state is the product of three wave functions depending on the spin, space and charge coordinates: ( ) ( ) ( ) ( ) This is a symmetric-antisymmetric wave function depends on the spin functions for a combination of two spin-1/2 particle-antiparticle ( e + and e − ) can be expressed in the following four possible blends [22]: where the first three form a spin triplet ( ) S of 1 S = and 1 z S = , 0, −1 states have the property that they are symmetric under particle interchange, and the last one is a singlet ( ) state has the property of antisymmetric.The symmetry of the spin function α under particle interchange is ( ) , where S is the total spin.Particle interchange is equivalent to space inversion, introducing a factor ( ) − , where l is the orbital angular momentum of the system.The charge wave function β acquire a charge conjugation factor ( ) where n is the number of photons under particle interchange.The product of the factors applying to the separate spin, space, and charge function must then be that of the total wave function ϕ , which we de- note by K , so that ( ) ( )
Comparison of Exotic and Non Exotic Atoms
Since most of the previous studies have focused on the Galactic centre as the most promising source of Ps, although visual extinction is very high.However, detection of Ps recombination line made by Puxley & Skimer (1996) who searched for Ps Paschen-β from the Galactic centre but undetected [23].Very recently study chromaticity of solar spectrum lead me to discover the solar Ps which is described in the following sections.This renewed interest is motivated by several recent and imminent advances in technology.From the literature survey it is found that Ps formation, recombination and quenching largely take place at temperature <10 6 K. Therefore, most astrophysical environments (~10 3 -10 6 K) for Ps formation will be the dominant process leading to annihilation [24].A comparative study of Ps, H and He are shown in Table 2.
The spectral lines (wavelength) of solar radiation spectrum are achieved from the Rydberg-Ritz formula: where h is the Planck's constant, c is the speed of light, R α is the Rydberg energy (shown in Table 2).
Hence, wavelength of each atom will be different i.e., different color bands in the spectrum.The quantum numbers i n and f n respectively are the initial and final states of the atomic transition.For the Lyman series 1 ... etc.The intensity of each color bands will be distributed according to the Planck's radiation formula which is given below and details of this formulation can be found elsewhere [25].
( ) where K is the Boltzmann constant and T is the temperature of the photosphere (5800 K).The wavelength λ of the corresponding H, He and Ps can be obtained from Equation (1).
Detector Development and Data Acquisition
In order to study the solar Ps an advanced Solar Spectrum Monitor (SSM) development is extremely vital.Using the various colored LEDs (keeping in mind of VIBGYOR in the solar spectrum Red, Orange, Yellow, Green and White LEDs are taken into account) and a Freeduino the SSM detector is constructed.LEDs which are commer-Table 1. Properties of singlet and triplet states Ps.
Results and Discussions
Data obtained by different colored LEDs are compared with the results of the simulation considering the model based on Rydberg-Ritz-Planck principle rigorously.
Simulation
In order to understand the solar radiation spectrum intensively, an extensive simulation is carried out for the atomic transitions of He, Ps and H according to the Rydberg-Ritz principle (see Equation ( 1
Experimental Results
Similarly experimental data are analyzed by subtracting each background from the respective colored bands and converted energy (eV) to wavelength (nm) which is depicted in Figures 4(a)-(d).In order to see the contribu- 3 and considered for the evaluation with a LED manufacturer's quoted values found elsewhere [26] for better understanding.
Contribution of H-Balmer 54% in Green, H-Paschen 4.2% in Orange and 66% in Red spectra are estimated from the fitted frequencies.Similarly He-Paschen 8% in Green, 2.34% in Yellow, and He-Brackett 34% in Green, 93% in Yellow; and Ps-Balmer 1% in Yellow, 41% in Orange and 26% in Red are estimated.In total contributions of H-(Balmer + Paschen), He-(Paschen + Brackett) and Ps-(Balmer) respectively are determined to be 31%, 52% and 17% in the visible range of solar radiation spectrum.
The extremity at right hand side of each spectrum shows the temperament of Planck's radiation formula.In order to find exact distribution of this formula more colored LEDs (Violet, Indigo and Blue) data are required.However reported results confirm a good agreement with the theoretical achievement of Ridberg-Ritz-Planck's principle within this constraint.The differences what are merely appeared due to the diverse transitional times among H, He and Ps in the solar environment, refraction and reflection of light due to solar and earth atmospheres and statistical error due to electronics fluctuations which are not determinative.The FWHM of each Gaussian distribution represents the wide range of colors originated from the respective atoms of different transition series.The FWHM of Red spectrum shows maximum due to the widest range of different series and the highest contribution of H-Paschen (66%) series.Yellow spectrum shows minimum because of the shorter range and maximum contribution of He-Brackett (93%) series.The contribution of the He atom is maximum because it is stable and inert.On the other hand, higher ionization probability leads H for taking part in the fusion reaction.Ps is unstable, quenching in the huge magnetic field and have the shortest lifetimes; as a result transitional probabilities of Ps are fewer than H and He.In the Orange spectrum (600 -700 nm), around 40% contribution comes from the He-Pfund series.But I suppose this probability is very low in compare to He-Brackett and He-Paschen series in the range of visible light.Hence, it is better to think other type of lighter atoms forming in the fusion mechanisms whose binding energy (~8.25 eV) is fewer than H and greater than Ps atom.That hypothetical atom (presume Nm) will provide the color band of wavelength (600 -700 nm) from the transition of Nm-Balmer series.
Conclusion
The data of Solar Spectrum Monitor conclude the existence of solar Ps which is one of the main constituent of solar radiation spectrum.By monitoring and characterizing the colored radiation spectra solar Ps will be estimated and that will tell us the present, past and future of the astrophysical phenomena.If SSM detector is possible to send in the outer space with necessary modification and collect the γ -rays of e e + − annihilation more precisely and those analyzed data will provide intense information about the space, extra galactic solar system and their circumstances, celestial Ps, dark matter and possibly the giant source of PsBEC.Hence, solar Ps spectroscopy plays a crucial role for the determination of sun's environment, fusion reaction in various cells and constituents.The intensity distribution of the solar spectrum and associate color bands exactly tell us the nuclear and fusion reaction mechanisms, radiation energy and particle-antiparticle plasma in the colossal magnetic fields.
. Individuality of the singlet and the triplet states Ps are presented in )). Simulation results are shown in Figures 2(a)-(c) and solar spectrum (sum of a, b and c) is depicted in (d) where components of each atomic transitions are clearly exhibited with the colour bands, but data do not illustrate the distribution of solar radiation spectrum.Hence wavelengths which are obtained by the Rydberg-Ritz's formula stated in Equation (1) are fed into the Planck's radiation formula.The wavelength distributions are shown in Figures 3(a)-(c).After combining those data of Ps, He and H the solar radiation spectrum is achieved and that is revealed in (d) and consequently the area of interest (720 -830 nm, Ps-Balmer series) is illuminated.Theoretical contribution of H-(Balmer+Paschen), He-(Paschen+Brackett) and Ps-(Balmer) spectra in visible range of solar radiation respectively are estimated to be 37%, 41% and 22%.
Figure 2 .
Figure 2. Origin of the solar spectrum.
Figure 3 .
Figure 3. Distribution of solar radiation with other constituent spectra.
Figure 4 .
Figure 4. Observation of solar radiation by coloured LEDs.tion of each component in the solar radiation spectrum data are fitted with Gaussian.The mean and FWHM ( ) 2.35 σ = × of the Gaussian are shown in Table3and considered for the evaluation with a LED manufacturer's quoted values found elsewhere[26] for better understanding.Contribution of H-Balmer 54% in Green, H-Paschen 4.2% in Orange and 66% in Red spectra are estimated from the fitted frequencies.Similarly He-Paschen 8% in Green, 2.34% in Yellow, and He-Brackett 34% in Green, 93% in Yellow; and Ps-Balmer 1% in Yellow, 41% in Orange and 26% in Red are estimated.In total contributions of H-(Balmer + Paschen), He-(Paschen + Brackett) and Ps-(Balmer) respectively are determined to be 31%, 52% and 17% in the visible range of solar radiation spectrum.The extremity at right hand side of each spectrum shows the temperament of Planck's radiation formula.In order to find exact distribution of this formula more colored LEDs (Violet, Indigo and Blue) data are required.However reported results confirm a good agreement with the theoretical achievement of Ridberg-Ritz-Planck's principle within this constraint.The differences what are merely appeared due to the diverse transitional times among H, He and Ps in the solar environment, refraction and reflection of light due to solar and earth atmospheres and statistical error due to electronics fluctuations which are not determinative.The FWHM of each Gaussian distribution represents the wide range of colors originated from the respective atoms of different transition series.The FWHM of Red spectrum shows maximum due to the widest range of different series and the highest contribution of H-Paschen (66%) series.Yellow spectrum shows minimum because of the shorter range and maximum contribution of He-Brackett (93%) series.The contribution of the He atom is maximum because
Table 2 .
Comparison among the Ps, H and He atoms.thepublicmarket are collected, hence manufacture's quote of each colored wavelength of LEDs are unavailable.Freeduino is a tool for the interfacing of data to a computer from the physical parameters, e.g., sound, light etc.It is based on the Atmel's ATMEGA328 micro controller systems.The size, shape and number of each colored LEDs are kept equal and connected in series to each group.The size of the Vero-board is 15 × 10 cm 2 and each colored section is about 4 × 4 cm 2 .There are five channels of the color group.The analogue output signals from the SSM are fed into Freeduino digital microprocessor unit 0 -4.The Freeduino gets its input i.e., analogue signals from the SSM detector when it is directly exposed to the ray of Sun for an hour.These signals are digitized by the ADC (1024 Channels) of the board that is interfaced with a PC via USB ports and output data is scrutinized in the monitor by using external software.This data is now displayed and stored in the ASCII format for further analysis.In order to take the background data SSM detector is covered with a piece of thick black cloth and data are taken for the same time.A schematic diagram of the SSM detector is shown in Figure1.
Table 3 .
[26]uation of experimental results with manufacturer's quoted value..Reference[26]is not the same manufacture of LEDs which are used in this experiment. a | 3,588.2 | 2014-11-18T00:00:00.000 | [
"Physics"
] |
Tunable and enhanced light emission in hybrid WS2-optical-fiber-nanowire structures
In recent years, the two-dimensional (2D) transition metal dichalcogenides (TMDCs) have attracted renewed interest owing to their remarkable physical and chemical properties. Similar to that of graphene, the atomic thickness of TMDCs significantly limits their optoelectronic applications. In this study, we report a hybrid WS2-optical-fiber-nanowire (WOFN) structure for broadband enhancement of the light–matter interactions, i.e., light absorption, photoluminescence (PL) and second-harmonic generation (SHG), through evanescent field coupling. The interactions between the anisotropic light field of an optical fiber nanowire (OFN) and the anisotropic second-order susceptibility tensor of WS2 are systematically studied theoretically and experimentally. In particular, an efficient SHG in the WOFN appears to be 20 times larger than that in the same OFN before the WS2 integration under the same conditions. Moreover, we show that strain can efficiently manipulate the PL and SHG in the WOFN owing to the large configurability of the silica OFN. Our results demonstrate the potential applications of waveguide-coupled TMDCs structures for tunable high-performance photonic devices.
Introduction
Layered transition metal dichalcogenides (TMDCs) have attracted significant renewed interest in recent years, from fundamental physics to applications, owing to advances in graphene research 1,2 . Although most of the TMDCs have been studied for decades, it has been recently revealed that atomically thin TMDCs can exhibit distinct properties compared with their bulk counterparts 2,3 . For example, MoS 2 exhibits a transition from an indirect bandgap in the bulk to a direct bandgap in the monolayer owing to the lateral quantum confinement effect 4,5 . The reduced dielectric screening of the Coulomb interactions contributes to the extremely strong exciton effects 6,7 . The broken inversion symmetry and strong spin-orbit coupling in the monolayer lead to robust spintronics and valleytronics 8 , with a possibility for optical manipulation 9,10 . The lack of centrosymmetry in odd layers contributes to the giant second-order optical nonlinearity 11,12 . These pioneering studies indicate that TMDCs are promising candidates for electronic 13 , photonic [14][15][16][17][18][19] , and optoelectronic [20][21][22][23] applications. Although the layered TMDCs exhibit considerably strong light-matter interactions in the visible/near-infrared spectrum owing to the exciton resonance effects 7,24,25 , an interaction enhancement is possible when considering the large discrepancy between the light wavelength and atomic thickness of the TMDCs, especially in the nonresonant spectrum region. The inherent flexibleness of two-dimensional (2D) TMDCs is advantageous for their integration to photonic structures, including optical waveguides [26][27][28] , microcavities [14][15][16][17][29][30][31] , and plasmonic structures [32][33][34][35][36][37][38] . Nevertheless, most of these hybrid structures do not utilize the tunable properties of the TMDCs, which can be easily manipulated by doping 6,23,39 , strain 40,41 , and other environmental effects 42 .
In this study, we report a direct integration of monocrystalline monolayer WS 2 to an optical fiber nanowire (WOFN) for broadband enhancement of light-matter interactions (Fig. 1a), i.e., photoluminescence (PL) and second-harmonic generations (SHGs). Through the evanescent field coupling effects in the optical fiber nanowire (OFN), the light-WS 2 interaction length can be significantly extended 43,44 , which is free from the limitations of the atomic thickness in monolayer WS 2 . Moreover, the waveguide structure can also efficiently collect the light emission from the WS 2 through the near-field coupling effect 27,45 . Although several previous studies have reported an integration of nanoflakes of TMDCs to an optical waveguide, most of them focused on the TMDCs' extrinsic properties, such as saturation absorption effects in the infrared range for pulsed fiber lasers 46,47 , which can be attributed to defects in the TMDCs. Here, we experimentally demonstrate an in-waveguide tuning of the exciton wavelength of WS 2 under a uniaxial strain 41 , owing to the large configurability and mechanical strength of the silica OFN 43,48 . In addition, we show that the SHG in the OFN can be significantly enhanced with the introduction of the WS 2 layer, when considering its large second-order nonlinearity 49 . In the WOFN waveguide, the interactions between the anisotropic light field of the OFN and the anisotropic second-order susceptibility tensor of WS 2 are carefully explored theoretically and experimentally. Furthermore, we reveal that the SHG in the WOFN can be controlled by the strain with a high sensitivity through the nonlinear multibeam interference effects. Our study can reveal a novel approach for tunable high-performance optical-waveguide-integrated linear and nonlinear devices.
Results
The schematic of the WOFN is illustrated in Fig. 1a, which is achieved by laminating a piece of a WS 2 monolayer in the waist region of an OFN using a modified microtransfer technique (Figs. S1 and S2) 50 . The OFN is fabricated by flame brushing techniques, while the WS 2 film is grown by chemical vapor deposition (CVD). Considering the typical grain size of the CVD-grown single-crystalline WS 2 , the effective encapsulating length of WS 2 in the WOFN is usually within 100μm. The crystal structure of WS 2 is illustrated in Fig. 1b, where two layers of sulfur atoms (S) are separated by one layer of tungsten (W) atoms; the W atoms exhibit a trigonal prismatic coordination. The PL spectrum of the transferred WS 2 on a glass substrate indicates the direct bandgap characteristics ( Fig. 1c) 41,51,52 .
Atomic force microscopy (AFM) was used to determine the thickness of WS 2 (Fig. 2a), which clearly indicated monolayer characteristics. To demonstrate the quality of the transferred WS 2 , we measured in situ the PL and Raman spectra using a continuous 532-nm excitation light source for the WOFN and WS 2 on a glass substrate, as shown in Fig. 2b, c. The WOFN was put on a glass slide for measurement convenience. The inset in Fig. 2b 630 nm, which corresponds to the Aexciton (trion), which is the direct interband transition at the K-point in the hexagonal Brillouin zone. The shoulder peak of the PL at~612 nm could be attributed to the neutral exciton A. We believe that the unintentional doping during the transfer process leads to the PL fingerprints of WS 2 53 . The redshift of the A/Aexciton of the WOFN (positions 3 and 4), compared with WS 2 on the glass substrate (positions 1 and 2), most likely emerges owing to the geometrical curvature of the OFN and the residual strain introduced in the transfer process. With regard to the Raman fingerprint, for example, for position 1, five peaks are clearly resolved by the Lorentz fitting, at 296.4, 324.8, 349.3, 356.7, and 417.7 cm -1 , which corresponds to different vibration modes of WS 2 54 . Figure 2d compares the measured absorption spectra of WS 2 deposited on the end-face of a fiber patch cord and WOFN. The length of the integrated WS 2 in the WOFN is 60 μm. An optical-fiber-coupled halogen light source (SLS201/M, Thorlabs) is employed; the output spectra are analyzed using a fiber-coupled optical spectrometer (Fig. S3). Two prominent absorption peaks appeared at 610.2 nm (A exciton) and 510.6 nm (B exciton). The energy separation (~400 meV) between the A and B excitons is attributed to the energy splitting of the valence band owing to the spin-orbit coupling effect 24,52 . The magnitude of the exciton absorption in the WOFN (A exciton:~97.7%) is significantly enhanced compared with the free-space illumination (A exciton: 13.0%), owing to the enhanced light-matter interactions in the WOFN. In addition, we employ the finite-element method to simulate the transmission spectrum of the WOFN (Fig. S4), as shown in Fig. 2e, which agrees well with the experimental results. The measured transmission loss in the infrared region is approximately 0.5 dB (Fig. S5), which is beneficial for nonlinear optics applications. Figure 2f shows the output PL spectra of the WOFN for different pump power values. The PL intensity exhibits an almost linear relationship with a pump power of up to 56 μW, as shown in the inset. We also conducted contrast experiments, and the results showed that the output PL intensity of the WOFN was higher than that of WS 2 directly deposited on the optical fiber end-face. Moreover, the numerical simulation shows that the average one-directional coupling efficiency of the WS 2 exciton emission to the OFN is 12%, which attests to the superiority of the waveguidecoupled-WS 2 structure for light excitation and collection (Fig. S7). Strain engineering has been widely employed owing to the corresponding evolution of the electronic band structure of the 2D materials, including graphene and TMDCs 40,41,55,56 . Most studies employed the free-space coupling technique to detect the optical spectra of 2D materials as a function of the strain. This method is simple; however, miniaturization and integration are challenging. Figure 3a shows the experimental set-up for an in-line manipulation of the PL spectra of WS 2 . A uniaxial strain in the WOFN is applied by stretching using the translation stage; the strain is transferred to the attached WS 2 film. The WOFN was illuminated using an excitation light source (~40 μW, 532 nm); the output PL was analyzed using an optical-fiber-coupled spectrometer. Unless otherwise stated, the WOFN sample under the strain manipulation is the same as that presented in Fig. 2d, the diameter of which is 795 ± 6 nm (Fig. S8); the strain values are calculated using the ratio of the elongated length of the WOFN to its original length. Figure 3b, c summarize the PL and absorption spectra of the WOFN as a function of the strain, which was increased from 0% to 1.35% and then decreased to 0% (from bottom to top). The emission spectra exhibit a prominent redshift with the increase in the strain; the corresponding absorption spectra exhibit similar patterns. A linear fitting shows that the slope of the PL peak wavelength with respect to the strain is 10.1 nm/% strain (-30 meV/% strain) during the increase in the strain, which is comparable to the values reported in other studies 40,41,57,58 . The tuning range of the exciton wavelength in TMDCs is mainly limited by the direct-indirect bandgap transition induced by certain strain magnitude. Both Variations of the d PL peak wavelength and e absorption peak wavelength in the A exciton region with the increase and decrease in the strain. f Dependence of the absorption peak wavelength of the WOFN during strain loading and unloading. The violet region corresponds to the increase in the strain, while the green region corresponds to the decrease in the strain. One cycle contains four steps of strain loading/unloading; each step corresponds to a strain of 0.22% the emission and absorption spectra are not completely reversible. Quasi "hysteresis" loops of the PL peak wavelength and absorption peak wavelength are observed in Fig. 3d, e. Further, we measured the peak wavelength of absorption during a strain loop test, as shown in Fig. 3f. The spectral response is almost recovered after one cycle, even though there is a hysteresis. The hysteresis of the WOFN could be attributed to the interface relaxation effect in WS 2 -silica; further studies are required to elucidate the origin of this phenomenon. A possible solution to the hysteresis problem is to coat a thin layer of a lowrefractive-index elastomer (polydimethylsiloxane (PDMS)) on the surface of the WOFN, which can help to fasten WS 2 on the substrate; 57 however, the waveguide dispersion can be significantly modulated. Although the waveguide dispersion has a small effect on the PL, it can significantly influence the nonlinear optical phenomena in the WOFN, as discussed in the next section. It should be noted that for practical applications, the WOFN should be well encapsulated to enhance the robustness and longterm stability 59 .
Monolayer TMDCs exhibit a large second-order nonlinearity (χ (2) ) owing to the breaking of the inversion symmetry; χ (2) can be further enhanced in the exciton resonant region 11,12,49 . Most of the previous studies reported an SHG in the TMDCs when using the freespace coupling technique with a low conversion efficiency, which is limited by the small light-matter-interaction cross section. An intuitive method to improve the SHG conversion is to employ the optical waveguide coupling techniques. In contrast to the direct illumination method, a phase matching is needed for a high conversion efficiency in waveguides. In a fused silica fiber, the value of χ (2) in the bulk is low, while that at the surface is considerable owing to the symmetry breaking at the air-silica interface. To characterize the enhancement of the SHG in the WOFN, we compared the SHG in an OFN before and after the transfer of WS 2 . Figure 4a shows the OFN/WOFN dispersions as a function of the diameter of the waveguide for the fundamental wave (FW) at 1550 nm and second-harmonic (SH) at 775 nm. The waveguide dispersion is slightly modified upon the introduction of the WS 2 layer. In particular, the phase-matching point is shifted (in terms of the OFN diameter) by~30 nm, as shown in the inset of Fig. 4a. Although there are other optical modes, such as HE 11 -(FW)-TM 01 -(SH) of the WOFN, that satisfy the phasematching conditions, the symmetry of the second-order nonlinearity tensor of WS 2 11,12,49 inhibits the harmonic generation (Supplementary Note 2.2). By solving the coupling-wave equation in the small signal approximation, we can find that the SHG intensity (P SHG ) can be derived as follows: where P FW is the pump power of FW, ρ 2 is the nonlinear coupling parameter, L is the effective interaction length along the waveguide, and Δβ = 2β FW -β SHG is the phase mismatch between the fundamental and secondharmonic waves. The nonlinear coupling parameter ρ 2 is defined as the overlap integral: 60 where ω 2 is the second-harmonic frequency, and N 1 and N 2 are the normalized field factors for FW and SH, respectively. P (2) is the second-order nonlinear polarization, which can be calculated according to the second-order susceptibility tensor of the materials (Supplementary Note 2.2). Figure 4b compares the nonlinear coupling parameters |ρ 2 | of the OFN and WOFN, as a function of the waveguide diameter. The values of |ρ 2 | of the WOFN are one order of magnitude larger than those of the OFN, which implies that the power conversion efficiency of the WOFN is two orders of magnitude larger than that of the OFN under the same conditions. As the physical interpretation of ρ 2 is attributed to the overlap integral of the optical mode of the FW and SH 60 , |ρ 2 | initially increased with the decrease in the waveguide diameter, and then decreased after the matching point. The crystal orientation alignment in the WOFN has a slight influence on |ρ 2 | (Fig. S9). The quadratic dependences of the output SHGs in the OFN and WOFN on the pump power are clearly demonstrated in Fig. 4c. The SHG intensity of the WOFN is approximately 20 times larger than that of the OFN, which is comparable to the theoretical value considering the insertion loss and imperfect transfer of WS 2 (Fig. S11). In addition, we pumped a sample with WS 2 directly deposited on the surface of a cleaved optical fiber, and no SHG was detected for input powers of up to 60 mW. Intuitively, the waveguide enhancement of SHG compared to the free-space coupling will be proportional to the effective interaction length square if the phase matching conditions are satisfied and the additional insertion loss is neglected.
To investigate the possible effects on the SHG in the WOFN, we set-up an experimental configuration, as shown in Fig. 5a. The output SHG intensity depends on the linear polarization of the pump FW, as shown in Fig. 5b. The SHG intensity should be independent of the polarization of the FW owing to the circular symmetry of the WOFN, assuming a perfect WS 2 encapsulation. Nevertheless, incomplete coverage of WS 2 on the WOFN is always present owing to the transfer technique, which leads to the polarization extinction. A theoretical fitting reveals that the WS 2 coverage ratio is 75%. The polarization extinction spectrum of the WOFN can serve as a guide to characterize the WS 2 transfer quality (Fig. S10). It is intuitive that thinner poly (methyl methacrylate) (PMMA) film leads to a higher WS 2 coverage ratio, while the strength of the film will be compromised, which is challenging for the transfer process. Figure 5c shows the SHG intensity as a function of the applied strain; the oscillations are clearly resolved. The modulation process is almost reversible, as shown in Fig. 5d. The SHG intensity fluctuations are within 7%, most likely owing to the instability of the pump power and measurement configuration. As the measured SHGs are far away from the exciton resonant region of WS 2 , we conclude that this modulation is most likely not caused by the change of χ (2) , but attributed to the nonlinear interference between the harmonic waves generated at different parts of the WOFN, i.e., at positions with and without a WS 2 deposition (Figs. S13 and S14). Although the modulation strategy here is less reproducible in the OFN platform experimentally, theoretically, if we can well control the geometry of the WOFN, the output SHG can be well predicted. Furthermore, this method can be readily employed in flexible on-chip devices, in which the configuration is highly reproducible.
Discussion
In summary, we demonstrated a hybrid optical fiber waveguide integrated with a WS 2 monolayer for the enhancement of the PL and SHG through evanescent field coupling. We revealed that the in-line strain can efficiently manipulate the photon-electron and photon-photon interactions in WS 2 . The waveguidecoupled PL spectra and exciton absorptions of WS 2 were experimentally linearly tuned over a wavelength range of 10 nm. Moreover, we systematically analyzed the harmonic generation in the WOFN structure and showed that the SHG in the WOFN was more than one order of magnitude larger than that in the bare OFN under the same conditions. This value can be further enhanced if the pump light is tuned at the exciton resonance or peak joint-density-of-states regions (Fig. S12) 49 . Nevertheless, the waveguide matching conditions imply that a shorterwavelength pump light source requires a thinner OFN waveguide structure 60 , which is very challenging to be experimentally achieved (Fig. S6). This unique platform can have broad applications in optical fiber sensing and nonlinear optics. For optical fiber sensing, this kind of sensor can operate in a passive light absorption mode or active light emission mode depending on the optical measurement system. Compared to the traditional optical nanofiber/microfiber sensors based on external resonating structures 43,61 , this hybrid sensor is based on the electronic band structure of WS 2 and its response to the loaded strain 40,41 , which will be robust to environmental perturbations. For the nonlinear optics, we experimentally and theoretically show that the SHG in the hybrid waveguide can be dynamically tuned with the strain, which can be attributed to the nonlinear interference effects. Another possible application is to integrate this device to an active fiber laser circuit for tunable pulsed light generations 62,63 , in which the WS 2 might serve as tunable saturable absorbers. We believe that our structure design can be easily applied to other TMDCs, which can pave the way for the design of tunable waveguide-coupled light sources.
Material and device characterizations
The surface morphology of WS 2 (6Carbon Technology, Shenzhen) on the sapphire substrate was measured using AFM (Cypher ES Polymer Edition, Asylum Research). The Raman and PL spectra of WS 2 on the flat substrate were recorded at room temperature in air using a LabRam HR 800 Evolution system (HORIBA Jobin Yvon) with an excitation line of 532 nm. We used gratings with 1800 gr/ mm and 600 gr/mm for the Raman measurement and PL characterization, respectively. The Raman band of Si at 520 cm −1 was used as a reference to calibrate the spectrometer. The absorption and PL spectra of the WOFN device were measured using two optical-fiber-coupled spectrometers, USB2000+ (~0.4 nm resolution, Ocean Optics) and NOVA (~0.8 nm resolution, Idea Optics Co., China). A filtered nanosecond pulsed fiber laser (pulse width: 10 ns, repetition frequency: 1 MHz, APFL-1550-B-CUSTOM, SPL Photonics Co., Ltd) was used to pump the WOFN for the SHG characterizations; the output signal was filtered and analyzed using a fiber-coupled spectrometer.
Strain response measurement
The WOFN was clamped on two translation stages; the in-line stretching of the WOFN was generated by the linear motor stage (XML, Newport). As the applied strain in the WOFN is highly nonuniform, we calibrated the strain value of the waist region by modeling the geometry of the WOFN. | 4,797.8 | 2019-01-16T00:00:00.000 | [
"Physics",
"Materials Science",
"Engineering"
] |